# The Evaluation of the Number and the Entropy of Spanning Trees on Generalized Small-World Networks
**Authors:** Raihana Mokhlissi; Dounia Lotfi; Joyati Debnath; Mohamed El Marraki; Noussaima EL Khattabi
**Journal:** Journal of Applied Mathematics (2018)
**Publisher:** Hindawi
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2018/1017308
---
## Abstract
Spanning trees have been widely investigated in many areas of mathematics: theoretical computer science, combinatorics, and so on. An important issue is to compute the number of these spanning trees. This number remains a challenge to compute, particularly for large and complex networks. As a model of complex networks, we study two families of generalized small-world networks, namely, the Small-World Exponential and the Koch networks, by changing the size and the dimension of their cyclic subgraphs. We introduce their construction and their structural properties, which are built in an iterative way. We propose a decomposition method for counting their number of spanning trees and we obtain the exact formulas, which are then verified by numerical simulations. From this number, we find their spanning tree entropy, which is lower than that of other networks having the same average degree. This entropy allows us to quantify the robustness of the networks and characterize their structures.
---
## Body
## 1. Introduction
Recently, the analysis of complex networks has received a major boost, driven by huge network data resources, and many systems in the real world can be described and characterized by complex networks [1]. Some scientific studies have inspired researchers to construct network models to explain the common characteristics of real-life systems. Among the well-known models of complex networks is the small-world network. It displays rich behavior as observed in a large variety of real systems, including the Internet (websites with navigation menus), electric power grids, networks of brain neurons, telephone call graphs, and social networks. It is characterized by specific structural features: a large clustering coefficient and a small average distance. To analyze this class of complex networks, theories are needed to explain their inherent and emergent properties, and new formal models are needed to predict their performance accurately, assert guarantees of their reliability, and quantify their robustness. Graph theory offers a powerful tool to simplify this theoretical study: enumerating the spanning trees of a network G [2]. A spanning tree is defined as a connected acyclic subgraph of G containing all vertices (nodes) of G and some or all of its edges. The goal of this paper is to determine how many spanning trees a network can have. The number of spanning trees is one of the most important parameters characterizing the reliability of a network [3]. We denote it by τ(G); it is also known as the complexity of a network. In general, it can be obtained by calculating the determinant or the eigenvalues of the Laplacian matrix of the network [4]. However, this general method is not tractable for large and complex networks due to its high computational complexity. Therefore, it is interesting to develop techniques and methods that facilitate the calculation of the number of spanning trees and yield exact formulas for special classes of networks. In this context, our work applies a combinatorial method for determining the number of spanning trees of some complex networks: the decomposition method [5]. It relies on the principle of “Divide and Conquer": dividing a problem into subproblems, solving each of these subproblems, and then combining the partial results into a general solution. As an application of the number of spanning trees of a network, we use the entropy of spanning trees, also called the asymptotic complexity (see, e.g., Dehmer, Emmert-Streib, Chen, Li, and Shi [2, 6]). By calculating this entropy, we can estimate how the network evolves as it grows to infinity. This parameter permits us to quantify the robustness of complex networks and to characterize their structures [7]. It is related to the ability of a network to resist random changes in its structure. Many researchers have used this measure to estimate the robustness of complex networks and the heterogeneity of their structures, such as the small-world Farey graph [8], the two-tree network [9], the planar unclustered networks [10], the prism and antiprism graphs [11], and the lattices [12]. The novelty of our work is to analytically investigate two generalized families of small-world networks: the Small-World Exponential network (see, e.g., Mokhlissi, Lotfi, Debnath, and El Marraki [13] and Liu, Dolgushev, Qi, and Zhang [14]) and the Koch network
(see, e.g., Zhang, Zhou, Xie, Chen, Lin, and Guan [15] and Zhang, Gao, Chen, Zhou, Zhang, and Guan [16]). The first network is based on complete graphs and the second is based on the classical fractal Koch curve [17], which has many important properties observed in real networks. To generalize these two networks, we add two important parameters: the size of the cyclic subgraphs and the dimension of the cyclic subgraphs (the number of cyclic subgraphs added). We propose two iterative algorithms generating their structures, determine their topological properties, and calculate their complexities. In the end, we evaluate their spanning tree entropy and compare it with that of other networks having the same average degree, such as the Hanoi network, the Flower network, and the Honeycomb lattice. As a result, we conclude that the generalized Small-World Exponential network and the generalized Koch network have the same spanning tree entropy, and hence the same robustness, although their structures and properties are totally different. This entropy depends only on the size of the cyclic subgraphs: the degree of the articulation nodes of the first iteration increases with the dimension of the cyclic subgraphs, but this dimension does not influence the spanning tree entropy. The scope of this study is that the generalization of these two small-world networks does not affect the defining features of small-world networks (a large clustering coefficient and a small average distance). This paper thus presents an alternative perspective on the analysis of small-world networks that exhibit typical features of real-world systems. The outline of this paper is as follows. In Section 2, we present the preliminaries and the methodology used. The construction, the properties, and the complexity of the generalized Small-World Exponential network and the generalized Koch network are provided in Sections 3 and 4, respectively. The spanning tree entropy of these small-world networks is presented in Section 5. Finally, the conclusion is given in Section 6.
## 2. Preliminaries
In this section, we introduce some notation and the method used to facilitate the calculation of the complexity of a complex network. Let $G=(V(G),E(G),F(G))$ be a connected planar graph, with $V(G)$ its number of vertices, $E(G)$ its number of edges, and $F(G)$ its number of faces; it has no loops and no parallel edges. The number of vertices of a graph is its order and its number of edges is its size. The terms graph and network are used interchangeably. A network is said to be a small-world network if the distance $L$ between two random nodes grows proportionally to the logarithm of the number of nodes in the network, that is, $L \propto \log N$, while the clustering coefficient (a measure of the degree to which nodes in a network tend to cluster together) is not small.

Euler's formula [22]: Euler's formula is a topological invariant that characterizes the topological properties relating the number of vertices, edges, and faces.

Theorem 1.
Let $G$ be a connected planar graph with $n$ vertices, $m$ edges, and $f$ faces. These numbers are connected by the well-known Euler relation:

$$n - m + f = 2 \tag{1}$$

The selection of an appropriate method for calculating the number of spanning trees of a given network is a key factor. For this work, we put forward a decomposition method that makes the number of spanning trees easy to compute. This method relies on the principle of Divide and Conquer: we decompose the graph into different subgraphs according to certain constraints, by following one node, two nodes, an edge, or a path. In this work, we study the case where subgraphs are connected by one vertex (see Figure 1). To apply this method, we follow this algorithm:

1. We decompose the original graph into different subgraphs that are connected at one vertex.
2. We calculate the number of spanning trees of each subgraph.
3. We combine the results to obtain the complexity of the original graph.

Figure 1
Star network and chain network.

Let $G$ be a chain of planar graphs defined by $G = C_1 \bullet C_2 \bullet \ldots \bullet C_n$ (see Figure 1). The number of spanning trees of $G$ is given by the following formula:

$$\tau(G) = \prod_{i=1}^{n} \tau(C_i) \tag{2}$$

If the complexity $\tau(G)$ of a network grows exponentially with its number of vertices $V_G$, then there exists a constant $\rho_G$, called the entropy of spanning trees or the asymptotic complexity [23], described by this relation:

$$\rho_G = \lim_{V_G \to \infty} \frac{\ln \tau(G)}{V_G} \tag{3}$$

The entropy of spanning trees of a network $G$ is a quantitative measure based on the number of spanning trees, used to evaluate the robustness of a network and to characterize its structure. The most robust network, with the most heterogeneous topology, is the one with the highest spanning tree entropy. By this definition, the larger the entropy value, the larger the number of spanning trees, and thus the more alternative connections exist between two nodes when links fail, which ensures good reliability and robustness.
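To make the decomposition method concrete, the following is a minimal numerical sketch (ours, not from the paper); it counts spanning trees with Kirchhoff's matrix-tree theorem and checks formula (2) on a chain of cycles glued at cut vertices. The function names are illustrative only.

```python
import numpy as np

def spanning_trees(adj):
    """Kirchhoff's matrix-tree theorem: tau(G) equals any cofactor
    of the Laplacian L = D - A of a connected graph."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    return round(np.linalg.det(lap[1:, 1:]))  # delete row and column 0

def cycle(l):
    """Adjacency matrix of the cycle C_l."""
    a = np.zeros((l, l))
    for i in range(l):
        a[i, (i + 1) % l] = a[(i + 1) % l, i] = 1
    return a

# A cycle C_l has exactly l spanning trees (delete any one edge), so a
# chain C_4 . C_5 . C_6 glued at cut vertices has tau = 4*5*6 = 120 by (2).
print([spanning_trees(cycle(l)) for l in (4, 5, 6)])  # [4, 5, 6]
```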
## 3. A Generalized Small-World Exponential Network $G_{k,l,n}$
In this section, we introduce a well-known family of small-world networks: the Small-World Exponential network [24]. It has an exponential degree distribution and the same number of nodes and edges as the dual Sierpinski gaskets [25]. It has been observed in some real-life systems, such as tensor networks, social networks, and quantum walks. We propose a generalized Small-World Exponential network, where the generalization lies in the size of the cyclic subgraph and the dimension of the cyclic subgraph (the number of cyclic subgraphs added). We also investigate its construction and structural properties and calculate its complexity.
### 3.1. The Construction and the Properties of the Generalized Small-World Exponential Network $G_{k,l,n}$
The generalized Small-World Exponential network is denoted by $G_{k,l,n}$, with two controllable parameters: $l$ is the size of the cyclic subgraph and $k$ is the dimension of the cyclic subgraph, i.e., the number of cyclic subgraphs added. The construction of $G_{k,l,n}$ follows this algorithm: at $n=0$, we have a single node. At the first generation, $G_{k,l,1}$ is a cyclic graph of size $l$. For $n>1$, each node in the network of the previous iteration is replaced by $k$ new cyclic subgraphs of size $l$. Thus, each of the newly appeared cyclic subgraphs contains exactly one node of the network of the previous iteration, and the degree of the articulation nodes of the first iteration is $d_{G_{k,l,n}} = 2(k^n - 1)/(k - 1)$ (in Figure 2, the articulation nodes are colored red). The same process is used for the other iterations. In Figure 2, the first four iterations of the generalized Small-World Exponential network $G_{k,l,n}$ are illustrated.

Figure 2
The first four generations of the generalized Small-World Exponential network $G_{2,4,n}$.

Let us compute the order, the size, the number of faces, the average degree, and the diameter of the generalized Small-World Exponential network $G_{k,l,n}$. Let $V_{G_{k,l,n}}$ be the number of nodes at generation $n$. From Figure 2, we notice, for $i$ from 1 to $n$, that $V_{G_{k,l,i}} = lk \cdot V_{G_{k,l,i-1}} - (k-1)l$. We multiply the equation for $V_{G_{k,l,n-1}}$ by $(lk)$, the equation for $V_{G_{k,l,n-2}}$ by $(lk)^2$, and so on, until the last equation, for $V_{G_{k,l,1}}$, which is multiplied by $(lk)^{n-1}$. Summing all the obtained equations, $\sum_{i=0}^{n-1}(lk)^i V_{G_{k,l,n-i}} = \sum_{i=0}^{n-1}(lk)^{i+1} V_{G_{k,l,n-i-1}} - (k-1)l \sum_{i=0}^{n-1}(lk)^i$, we find $V_{G_{k,l,n}} = (lk)^n V_{G_{k,l,0}} - (k-1)l \sum_{i=0}^{n-1}(lk)^i$ with $V_{G_{k,l,0}} = 1$. Thus, the number of nodes of $G_{k,l,n}$ is

$$V_{G_{k,l,n}} = \frac{(lk)^n(l-1) + (k-1)l}{lk-1}, \quad n \ge 0 \tag{4}$$

Let $E_{G_{k,l,n}}$ be the number of links at iteration $n$. By construction, for $i$ from 1 to $n$, we have $E_{G_{k,l,i}} = lk \cdot E_{G_{k,l,i-1}} + l$. By the same telescoping procedure, $\sum_{i=0}^{n-1}(lk)^i E_{G_{k,l,n-i}} = \sum_{i=0}^{n-1}(lk)^{i+1} E_{G_{k,l,n-i-1}} + l \sum_{i=0}^{n-1}(lk)^i$, we find $E_{G_{k,l,n}} = (lk)^n E_{G_{k,l,0}} + l \sum_{i=0}^{n-1}(lk)^i$ with $E_{G_{k,l,0}} = 0$. Thus, the number of links of $G_{k,l,n}$ is

$$E_{G_{k,l,n}} = \frac{l\left((lk)^n - 1\right)}{lk-1}, \quad n \ge 0 \tag{5}$$

Let $F_{G_{k,l,n}}$ be the number of faces at generation $n$. Applying Theorem 1, we obtain that the number of faces of $G_{k,l,n}$ is

$$F_{G_{k,l,n}} = \frac{(lk)^n + lk - 2}{lk-1}, \quad n \ge 0 \tag{6}$$

The average degree of $G_{k,l,n}$, which tends to $2l/(l-1)$ for large $n$ (equal to 3 when $l = 3$), is

$$z_{G_{k,l,n}} = \frac{2 E_{G_{k,l,n}}}{V_{G_{k,l,n}}} = \frac{2l\left((lk)^n - 1\right)}{(lk)^n(l-1) + (k-1)l}, \quad n \ge 0 \tag{7}$$

The diameter $D$ is the maximum of the shortest distances between any two nodes $(u,v)$ of a network: $D = \max_{u,v} d(u,v)$. Let $D_{G_{k,l,n}}$ be the diameter of $G_{k,l,n}$ at generation $n$. This diameter can be calculated in two cases:
(i) If the size $l$ of the cyclic subgraphs is even, we can calculate the diameter as follows: at iteration $n=1$, the diameter is $D_{G_{k,l,1}} = l/2$; for $n>1$, the diameter of $G_{k,l,n}$ increases by at most $l$ per iteration.

(ii) If the size $l$ of the cyclic subgraphs is odd, we can calculate the diameter as follows: at iteration $n=1$, the diameter is $D_{G_{k,l,1}} = \lfloor l/2 \rfloor$; for $n>1$, the diameter of $G_{k,l,n}$ increases by at most $(l-1)$ per iteration.

So the diameter of $G_{k,l,n}$ is

$$D_{G_{k,l,n}} = \frac{l-\epsilon}{2} + (l-\epsilon)(n-1), \quad \text{with } \epsilon = 0 \text{ if } l \text{ is even and } \epsilon = 1 \text{ if } l \text{ is odd} \tag{8}$$

This diameter can also be written in a form that grows logarithmically with the number of vertices of the network, indicating that $G_{k,l,n}$ is a small-world network:

$$D_{G_{k,l,n}} = \frac{l-\epsilon}{2} + (l-\epsilon)\left(\log_{lk}\frac{V_{G_{k,l,n}}(lk-1) - (k-1)l}{l-1} - 1\right), \quad \text{with } \epsilon = 0 \text{ if } l \text{ is even and } \epsilon = 1 \text{ if } l \text{ is odd} \tag{9}$$
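As a quick consistency check (our own sketch, not part of the paper's derivation), the recurrences for $V$, $E$, and $F$ can be iterated directly and compared with the closed forms (4)-(6); integer division is exact here because $lk-1$ divides each numerator.

```python
def g_counts(k, l, n):
    """Iterate the recurrences of G_{k,l,n} and check the closed
    forms (4)-(6); F follows from Euler's relation (1)."""
    V, E = 1, 0                      # generation 0: a single node
    for _ in range(n):
        V, E = l * k * V - (k - 1) * l, l * k * E + l
    F = 2 - V + E                    # n - m + f = 2
    assert V == ((l * k) ** n * (l - 1) + (k - 1) * l) // (l * k - 1)
    assert E == l * ((l * k) ** n - 1) // (l * k - 1)
    assert F == ((l * k) ** n + l * k - 2) // (l * k - 1)
    return V, E, F

print(g_counts(2, 4, 3))             # (220, 292, 74) for G_{2,4,3}
```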
### 3.2. The Number of Spanning Trees of the Generalized Small-World Exponential Network $G_{k,l,n}$
The enumeration of spanning trees is a fundamental issue in many problems encountered in network analysis. However, explicitly determining this quantity is a theoretical challenge, especially for complex networks. Fortunately, the construction of the generalized Small-World Exponential network $G_{k,l,n}$ makes it possible to derive the exact formula for this number using the decomposition method.

Theorem 2.
Let $G_{k,l,n}$ denote the generalized Small-World Exponential network. The complexity of $G_{k,l,n}$ is given by the following formula:

$$\tau(G_{k,l,n}) = l^{\left((lk)^n - 1\right)/(lk-1)}, \quad n \ge 1 \tag{10}$$

Proof.
From Figure 2, we see that $G_{k,l,n}$ contains several cyclic subgraphs $Y_{k,l,n}$. Using (2), we obtain $\tau(G_{k,l,n}) = \prod^{\delta_{Y_{k,l,n}}} \tau(Y_{k,l,n}) = \tau(Y_{k,l,n})^{\delta_{Y_{k,l,n}}}$, where $\delta_{Y_{k,l,n}}$ is the number of cyclic subgraphs in $G_{k,l,n}$. In order to calculate the number of spanning trees of $G_{k,l,n}$, we first need to find the number of cyclic subgraphs in $G_{k,l,n}$. From our network, for $i$ from 1 to $n$, we see that $\delta_{Y_{k,l,i}} = lk \cdot \delta_{Y_{k,l,i-1}} + 1$. We multiply the equation for $\delta_{Y_{k,l,n-1}}$ by $(lk)$, the equation for $\delta_{Y_{k,l,n-2}}$ by $(lk)^2$, and so on, until the last equation, for $\delta_{Y_{k,l,1}}$, which is multiplied by $(lk)^{n-1}$. Summing all the obtained equations, $\sum_{i=0}^{n-1}(lk)^i \delta_{Y_{k,l,n-i}} = \sum_{i=0}^{n-1}(lk)^{i+1} \delta_{Y_{k,l,n-i-1}} + \sum_{i=0}^{n-1}(lk)^i$, we find the number of cycles in $G_{k,l,n}$: $\delta_{Y_{k,l,n}} = \frac{(lk)^n - 1}{lk - 1}$. Substituting this into the equation for $\tau(G_{k,l,n})$, with $\tau(Y_{k,l,n}) = l$ since each cyclic subgraph is a cycle of size $l$, we obtain $\tau(G_{k,l,n}) = l^{\left((lk)^n - 1\right)/(lk-1)}$.
For $k=1$ and $l=3$, the network $G_{1,3,n}$ is the Small-World Exponential network. Its number of spanning trees is given by the following formula [26]:

$$\tau(G_{1,3,n}) = 3^{(3^n - 1)/2}, \quad n \ge 1 \tag{11}$$
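The closed form of Theorem 2 is easy to exercise numerically; a small sketch (ours), which also recovers the special case (11):

```python
def tau_G(k, l, n):
    """Complexity of G_{k,l,n} per Theorem 2: the network is a gluing of
    delta = ((lk)^n - 1)/(lk - 1) cycles C_l at articulation nodes, and
    each cycle contributes a factor tau(C_l) = l by formula (2)."""
    delta = ((l * k) ** n - 1) // (l * k - 1)
    return l ** delta

# k = 1, l = 3 recovers the Small-World Exponential network of (11):
assert tau_G(1, 3, 2) == 3 ** 4      # (3^2 - 1)/2 = 4 three-cycles
print(tau_G(2, 4, 2))                # 4^9: nine 4-cycles at n = 2
```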
## 4. A Generalized Koch Network $C_{k,l,n}$
In this section, another class of small-world networks, the Koch network $C_n$, is studied analytically. This network is derived from the class of Koch curves, one of the interesting families of fractals, which are used to understand geometric fractals in real systems. The Koch network exhibits properties characterizing a majority of real-life network systems: a high clustering coefficient and a small diameter, indicating that it is a small-world network. We put forward a family of generalized Koch networks $C_{k,l,n}$, in which the size of the cyclic subgraphs and the number of cyclic subgraphs added at each node vary according to the two parameters $l$ and $k$. We propose an algorithm for the construction of the generalized Koch network, determine its properties, and calculate its complexity.
### 4.1. The Construction and the Properties of the Generalized Koch Network $C_{k,l,n}$
Inspired by the algorithm of the Koch network, we propose a family of generalized Koch networks $C_{k,l,n}$ with two integer parameters: $l$ (the size of the cyclic subgraph) and $k$ (the dimension of the cyclic subgraph). The algorithm of its construction is as follows: initially ($n=0$), $C_{k,l,0}$ is a cyclic graph of size $l$. For $n \ge 1$, $C_{k,l,n}$ is obtained from $C_{k,l,n-1}$ by adding $k$ new cyclic subgraphs of size $l$ at each of the nodes of every existing cyclic subgraph in $C_{k,l,n-1}$. The growth process of the generalized Koch network to the next generation continues in a similar way. The degree of the articulation nodes of the first iteration is $d_{C_{k,l,n}} = 2(k+1)^n$ (in Figure 3, the articulation nodes are colored green). Figure 3 illustrates the growing process of the network for the first three generations of $C_{k,l,n}$.

Figure 3
The first three generations of the generalized Koch network $C_{2,4,n}$.

In this section, exact expressions for the properties of the generalized Koch network $C_{k,l,n}$ are given: explicit results for its number of nodes, number of edges, number of faces, average degree, and diameter.

The number of nodes of $C_{k,l,n}$ is calculated as follows. From Figure 3, we notice, for $i$ from 1 to $n$, that $V_{C_{k,l,i}} = (lk+1) V_{C_{k,l,i-1}} - lk$. We multiply the equation for $V_{C_{k,l,n-1}}$ by $(lk+1)$, the equation for $V_{C_{k,l,n-2}}$ by $(lk+1)^2$, and so on, until the last equation, for $V_{C_{k,l,1}}$, which is multiplied by $(lk+1)^{n-1}$. Summing all the obtained equations, $\sum_{i=0}^{n-1}(lk+1)^i V_{C_{k,l,n-i}} = \sum_{i=0}^{n-1}(lk+1)^{i+1} V_{C_{k,l,n-i-1}} - lk \sum_{i=0}^{n-1}(lk+1)^i$, we find $V_{C_{k,l,n}} = (lk+1)^n V_{C_{k,l,0}} - lk \sum_{i=0}^{n-1}(lk+1)^i$ with $V_{C_{k,l,0}} = l$. So the number of nodes of $C_{k,l,n}$ is

$$V_{C_{k,l,n}} = (l-1)(lk+1)^n + 1, \quad n \ge 0 \tag{12}$$

The number of edges of $C_{k,l,n}$ is calculated as follows: from Figure 3, for $i$ from 1 to $n$, $E_{C_{k,l,i}} = (lk+1) E_{C_{k,l,i-1}}$ (a geometric sequence). So the number of edges of $C_{k,l,n}$ is

$$E_{C_{k,l,n}} = l(lk+1)^n, \quad n \ge 0 \tag{13}$$

The number of faces of $C_{k,l,n}$ is calculated as follows: from Figure 3, for $i$ from 1 to $n$, $F_{C_{k,l,i}} = (lk+1) F_{C_{k,l,i-1}} - lk$. The same telescoping as for the nodes, with $F_{C_{k,l,0}} = 2$, gives $F_{C_{k,l,n}} = (lk+1)^n F_{C_{k,l,0}} - lk \sum_{i=0}^{n-1}(lk+1)^i$. So the number of faces of $C_{k,l,n}$ is

$$F_{C_{k,l,n}} = (lk+1)^n + 1, \quad n \ge 0 \tag{14}$$

The number of faces of $C_{k,l,n}$ can also be obtained using Theorem 1.

The average degree of $C_{k,l,n}$, which tends to $2l/(l-1)$ for large $n$ (equal to 3 when $l = 3$), is

$$z_{C_{k,l,n}} = \frac{2 E_{C_{k,l,n}}}{V_{C_{k,l,n}}} = \frac{2l(lk+1)^n}{(l-1)(lk+1)^n + 1}, \quad n \ge 0 \tag{15}$$

Let $D_{C_{k,l,n}}$ be the diameter of $C_{k,l,n}$ at generation $n$. For $n \ge 0$ it is given by

$$D_{C_{k,l,n}} = \frac{l-\epsilon}{2} + n(l-\epsilon), \quad \text{with } \epsilon = 0 \text{ if } l \text{ is even and } \epsilon = 1 \text{ if } l \text{ is odd} \tag{16}$$

It can also be written in a form that grows logarithmically with the number of vertices of the network, indicating that $C_{k,l,n}$ is a small-world network:

$$D_{C_{k,l,n}} = \frac{l-\epsilon}{2} + (l-\epsilon)\log_{lk+1}\frac{V_{C_{k,l,n}} - 1}{l-1}, \quad \text{with } \epsilon = 0 \text{ if } l \text{ is even and } \epsilon = 1 \text{ if } l \text{ is odd} \tag{17}$$
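As before, a quick consistency check (our own sketch) iterates the recurrences and compares them with the closed forms (12)-(14):

```python
def c_counts(k, l, n):
    """Iterate the recurrences of C_{k,l,n} and check the closed
    forms (12)-(14); Euler's relation (1) holds at every generation."""
    V, E, F = l, l, 2                # generation 0: a single l-cycle
    for _ in range(n):
        V, E, F = (l*k + 1) * V - l*k, (l*k + 1) * E, (l*k + 1) * F - l*k
    assert V == (l - 1) * (l * k + 1) ** n + 1
    assert E == l * (l * k + 1) ** n
    assert F == (l * k + 1) ** n + 1
    assert V - E + F == 2            # Theorem 1
    return V, E, F

print(c_counts(2, 4, 2))             # (244, 324, 82) for C_{2,4,2}
```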
### 4.2. The Number of Spanning Trees of the Generalized Koch Network $C_{k,l,n}$
In order to calculate the number of spanning trees of the generalized Koch network $C_{k,l,n}$, we use the same method as for the networks studied before: the decomposition method.

Theorem 3.
Let $C_{k,l,n}$ denote the generalized Koch network. The complexity of $C_{k,l,n}$ is given by the following formula:

$$\tau(C_{k,l,n}) = l^{(lk+1)^n}, \quad n \ge 0 \tag{18}$$

Proof.
From Figure 3, we see that $C_{k,l,n}$ contains several cyclic subgraphs $X_{k,l,n}$. Using (2), $\tau(C_{k,l,n}) = \prod^{\delta_{X_{k,l,n}}} \tau(X_{k,l,n}) = \tau(X_{k,l,n})^{\delta_{X_{k,l,n}}}$, with $\delta_{X_{k,l,n}}$ the number of cyclic subgraphs in $C_{k,l,n}$. From Figure 3, for $i$ from 1 to $n$, we see that $\delta_{X_{k,l,i}} = (lk+1) \delta_{X_{k,l,i-1}}$ (a geometric sequence). So the number of cyclic subgraphs in $C_{k,l,n}$ is $\delta_{X_{k,l,n}} = (lk+1)^n$. Substituting this result into the equation for $\tau(C_{k,l,n})$, with $\tau(X_{k,l,n}) = l$, we obtain $\tau(C_{k,l,n}) = l^{(lk+1)^n}$, $n \ge 0$.

For $k=1$ and $l=3$, the network $C_{1,3,n}$ is the Koch network. Its number of spanning trees is given by the following formula [16]:

$$\tau(C_{1,3,n}) = 3^{4^n}, \quad n \ge 0 \tag{19}$$
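A companion sketch (ours) of Theorem 3, which also recovers the Koch special case (19):

```python
def tau_C(k, l, n):
    """Complexity of C_{k,l,n} per Theorem 3: the (lk+1)^n cycles C_l
    are glued at articulation nodes, so tau = l ** ((lk+1)^n) by (2)."""
    return l ** ((l * k + 1) ** n)

# k = 1, l = 3 recovers the Koch network of (19): tau = 3^(4^n).
assert tau_C(1, 3, 2) == 3 ** 16
print(tau_C(2, 4, 1))                # 4^9 for C_{2,4,1}
```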
## 5. The Spanning Tree Entropy of the Generalized Small-World Exponential Network and the Generalized Koch Network
The number of spanning trees of the generalized small-world networks grows exponentially, so we can calculate their spanning tree entropy according to the definition of the entropy in Section 2. Let $\rho_{G_{k,l,n}}$ be the entropy of spanning trees of the generalized Small-World Exponential network and $\rho_{C_{k,l,n}}$ that of the generalized Koch network.

Corollary 4.
The entropy of spanning trees of the generalized Small-World Exponential network $G_{k,l,n}$ is

$$\rho_{G_{k,l,n}} = \frac{\ln l}{l-1} \tag{20}$$
The entropy of spanning trees of the generalized Koch network $C_{k,l,n}$ is

$$\rho_{C_{k,l,n}} = \frac{\ln l}{l-1} \tag{21}$$

From these results, we find that the generalized Small-World Exponential network and the generalized Koch network have the same entropy even though their complexities are different. The entropy depends only on the size $l$ of the cyclic subgraphs and not on the dimension $k$ of the cyclic subgraphs. This means that the two networks have the same robustness despite the fact that their structures and properties are different. Notice that the degree of the articulation nodes of the first iteration increases with the value of $k$, yet this does not influence the spanning tree entropy and, therefore, does not influence the robustness of these two small-world networks. Figure 4 shows that increasing the size $l$ of the cyclic subgraphs decreases the entropy of spanning trees of $G_{k,l,n}$ and $C_{k,l,n}$. This result shows that networks with a low value of $l$ are more robust than those with a high value of $l$.
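As a sanity check, the limit in (3) can be evaluated directly from Theorem 2 and the node count (4); the short derivation below is ours, following those definitions, and the computation for $C_{k,l,n}$ is identical with $\ln\tau = (lk+1)^n \ln l$ and $V_{C_{k,l,n}} = (l-1)(lk+1)^n + 1$.

```latex
\rho_{G_{k,l,n}}
  = \lim_{n\to\infty}\frac{\ln \tau(G_{k,l,n})}{V_{G_{k,l,n}}}
  = \lim_{n\to\infty}
    \frac{\dfrac{(lk)^n-1}{lk-1}\,\ln l}{\dfrac{(lk)^n(l-1)+(k-1)l}{lk-1}}
  = \lim_{n\to\infty}\frac{\bigl((lk)^n-1\bigr)\ln l}{(lk)^n(l-1)+(k-1)l}
  = \frac{\ln l}{l-1}.
```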
Figure 4

The spanning tree entropy of the generalized Small-World Exponential network and the generalized Koch network.

From Table 1, we compare the spanning tree entropy of the Small-World Exponential network $G_{1,3,n}$ and the Koch network $C_{1,3,n}$ (0.549) with those of other networks having the same average degree 3. We notice that the value of their spanning tree entropy is the smallest known for networks with average degree 3. This reflects the fact that the Koch network and the Small-World Exponential network are less robust, and their topology less heterogeneous, than other networks having the same average degree.

Table 1
The spanning tree entropy of several networks having the same average degree.
| Type of network | $z$ | $\rho$ |
|---|---|---|
| Koch network $C_{1,3,n}$ | 3 | 0.549 |
| Small-World Exponential network $G_{1,3,n}$ | 3 | 0.549 |
| The Hanoi network [18] | 3 | 0.677 |
| The 2-Flower network [19] | 3 | 0.6931 |
| The 3-2-12 lattices [20] | 3 | 0.721 |
| The 4-8-8 bathroom tile [20] | 3 | 0.787 |
| Honeycomb lattice [21] | 3 | 0.807 |
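The table entries for the two studied families, and the trend of Figure 4, follow from (20)-(21) alone; a minimal sketch (ours):

```python
from math import log

def rho(l):
    """Spanning tree entropy of both generalized families, per (20)-(21):
    rho = ln(l) / (l - 1), independent of the dimension k."""
    return log(l) / (l - 1)

print(round(rho(3), 3))                          # 0.549, as in Table 1
print([round(rho(l), 4) for l in (3, 4, 5, 6)])  # decreasing in l (Figure 4)
```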
## 6. Conclusion
In this paper, we have studied the problem of efficiently computing the number of spanning trees in two well-known small-world networks: the generalized Small-World Exponential network and the generalized Koch network. We have examined their construction and provided a detailed analysis of their topological properties. We have obtained exact solutions for their number of spanning trees using the decomposition method. We have further calculated and compared their entropy of spanning trees. The result shows that these two generalized small-world networks have the same spanning tree entropy although they do not have the same complexity. As future work, we intend to analyze other types of complex networks and to use new combinatorial methods that facilitate the calculation of their number of spanning trees.
---
*Source: 1017308-2018-09-03.xml* | 1017308-2018-09-03_1017308-2018-09-03.md | 34,097 | The Evaluation of the Number and the Entropy of Spanning Trees on Generalized Small-World Networks | Raihana Mokhlissi; Dounia Lotfi; Joyati Debnath; Mohamed El Marraki; Noussaima EL Khattabi | Journal of Applied Mathematics
(2018) | Mathematical Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2018/1017308 | 1017308-2018-09-03.xml | ---
## Abstract
Spanning trees have been widely investigated in many aspects of mathematics: theoretical computer science, combinatorics, so on. An important issue is to compute the number of these spanning trees. This number remains a challenge, particularly for large and complex networks. As a model of complex networks, we study two families of generalized small-world networks, namely, the Small-World Exponential and the Koch networks, by changing the size and the dimension of the cyclic subgraphs. We introduce their construction and their structural properties which are built in an iterative way. We propose a decomposition method for counting their number of spanning trees and we obtain the exact formulas, which are then verified by numerical simulations. From this number, we find their spanning tree entropy, which is lower than that of the other networks having the same average degree. This entropy allows quantifying the robustness of the networks and characterizing their structures.
---
## Body
## 1. Introduction
Recently, the analysis of complex networks has received a major boost caused by the huge network data resources and many systems in the real world can be described and characterized by complex networks [1]. Some scientific studies have inspired researchers to construct network models to explain the existing common characteristics in real-life systems. Among the well-known models of the complex networks, there is a small-world network. It displays rich behavior as observed in a large variety of real systems including Internet (websites with navigation menus), electric power grids, networks of brain neurons, telephone call graphs, and social networks. It is characterized by specific structural features: large clustering coefficient and small average distance. To analyze this class of complex networks, theories are needed to explain their inherent and emergent properties. New formal models of these networks are needed to predict accurately their performance, assert the guarantees of their reliability, and quantify their robustness. The graph theory has a powerful tool to simplify this theoretical study by enumeratingthe spanning trees of a network G [2]. The latter are defined as a connected and acyclic subgraph of G having all vertices (nodes) of G and some or all its edges. The goal of this paper is to know how many spanning trees can have a network. The enumeration of these spanning trees tends to be one of the most important parameters that characterizes the network reliability [3]. We denote the number of spanning trees by τ(G), also known as the complexity of a network. In general, it can be obtained by calculating the determinant or the eigenvalues of the Laplacian matrix corresponding to the network [4]. However, this general method is not acceptable for large and complex networks due to its high computing time complexity. Therefore, it is interesting to develop techniques and methods to facilitate the calculation of the number of spanning trees and find its exact formula for special classes of networks. In this context, our work proposes a combinatorial method for determining the spanning trees number for some complex networks, which is the decomposition method [5]. It relies on the principle of a process of “Divide and Conquer" by dividing a problem in subproblems, solving each of these subproblems and then incorporating the partial results for a general solution.As an application of the number of spanning trees of a network, we usethe entropy of spanning trees or what is called the asymptotic complexity (see, e.g., Dehmer, Emmert-Streib, Chen, Li, and Shi [2, 6]). By calculating this entropy, we can estimate how the network will evolve to infinity. This parameter permits us to quantify the robustness of complex networks and to characterize their structures [7]. It is related to the ability of the network to resist random changes in its structures. Many researchers have used this measure to estimate the robustness of some complex networks and the heterogeneity of their structures such as the small-world Farey graph [8], the two-tree network [9], the planar unclustered networks [10], the prism and antiprism graphs [11], and the lattices [12].The novelty of our work is to analytically investigate two generalized families of small-world networks, called the Small-World Exponential network. See, e.g., Mokhlissi, Lotfi, Debnath and El Marraki [13] and Liu, Dolgushev, Qi and Zhang [14], and the Koch network. 
See, e.g., Zhang, Zhou, Xie, Chen, Lin and Guan [15] and Zhang, Gao, Chen, Zhou, Zhang, and Guan [16]. The first network is based on complete graphs and the second network is based on the classical fractal Koch curve [17], which has many important properties observed in real networks. To generalize these two networks, we add two important parameters related to the size of the cyclic subgraphs and the dimension of the cyclic subgraphs (the number of the cyclic subgraphs added). We suggest two iterative algorithms generating their structures, we determine their topological properties, and we calculate their complexities. In the end, we evaluate and compare their spanning trees entropy with other networks having the same average degree as the Hanoi network, the Flower network, the Honeycomb lattice. As a result, we conclude that the generalized Small-World Exponential network and the generalized Koch network have the same spanning tree entropy, so the same robustness although their structures and properties are totally different, and this entropy depends just on the size of the cyclic subgraphs, which means the articulation nodes degree of the first iteration increases according to the dimension of the cyclic subgraphs; it does not influence the spanning tee entropy. The scope of this study is that the generalization of these two small-world networks does not affect the concept of the small-world networks (large clustering coefficient and small average distance). The work of this paper presents an alternative perspective in the analysis of small-world networks that exhibit typical features of real-world systems.The outline of this paper is organized as follows. In Section2, we present the preliminaries and the used methodology. The construction, the properties, and the complexity of the generalized Small-World Exponential network and the generalized Koch network are provided in Sections 3 and 4. Then, the spanning trees entropy of these small-world networks are presented in Section 5. Finally, the conclusion is included in Section 6.
## 2. Preliminaries
In this section, we introduce some notations and the method used to facilitate the calculation of the complexity of a complex network. LetG=(V(G),E(G),F(G)) be a connected planar graph with V(G) being its number of vertices, E(G) being its number of edges, and F(G) being its number of faces; it has no loops and no parallel edges. The number of vertices of a graph refers to its order and its number of edges refers to its size. The terms graph and network are used indistinctly. A network is said to be a small-world network if the distance L between two random nodes grows proportionally to the logarithm of the number of nodes in the network, that is, L∝logN, while the clustering coefficient (measure of the degree to which nodes in a network tend to cluster together) is not small.Euler’s formula [22]: Euler’s formula is a topological invariant that characterized the topological properties related to the number of vertices, edges, and faces.Theorem 1.
LetG be a connected planar graph with n vertices, m edges, and f faces. These numbers are connected by the well-known Euler’s relation; then(1)n-m+f=2The selection of the appropriate method for calculating the spanning trees number is a key factor in a given network. For this work, we put forward a decomposition method to make the number of spanning trees easy for computation. This method relies on the principle of Divide and Conquer; we decompose the graph into different subgraphs according certain constraints: by following one node, two nodes, an edge, and a path. In this work, we study the case where subgraphs are connected by one vertex (see Figure1). To apply this method, we follow this algorithm:(1)
We decompose the original graph into different subgraphs that are connected to one vertex.(2)
We calculate the number of spanning trees for each of subgraph.(3)
We collect the results to obtain the complexity of the original graph.Figure 1
Star network and chain network.LetG be a chain of planar graphs defined by G=C1•C2•…•Cn (see Figure 1). The number of spanning trees in G is given by the following formula:(2)τG=∏i=1nτCi.If the complexity of a networkτ(G) grows exponentially with the number of vertices VG, then there exists a constant ρG, called the entropy of spanning trees or the asymptotic complexity [23], described by this relation:(3)ρG=limVG⟶∞lnτGVGThe entropy of spanning trees of a networkG is a quantitative measure of the number of spanning trees to evaluate the robustness of a network and to characterize its structure. The most robust network with the stronger heterogeneous topology is the network that has the highest spanning tree entropy. According to the definition of the entropy of spanning trees of a network, the bigger the entropy value, the more the number of spanning trees, so there are more possibilities of connections between two nodes related to defective links that ensures a good reliability and robustness.
## 3. A Generalized Small-World Exponential NetworkGk,l,n
In this section, we introduce a well-known family of small-world network: the Small-World Exponential network [24]. It has an exponential form of degree distribution and the same number of nodes and edges as the dual Sierpinski gaskets [25]. It has been observed from some real-life systems as tensor networks, social networks, quantum walks. We propose a generalized Small-World Exponential network, where the difference relies on the size of the cyclic subgraph and the dimension of the cyclic subgraph (the number of the cyclic subgraphs added). We also investigate its construction and structural properties and calculate its complexity.
### 3.1. The Construction and the Properties of the Generalized Small-World Exponential NetworkGk,l,n
The generalized Small-World Exponential network is denoted byGk,l,n with two controllable parameters: l is the size of the cyclic subgraph and k is the dimension of the cyclic subgraph, i.e., the number of the cyclic subgraphs added. The construction of Gk,l,n follows this algorithm: at n=0, we have a simple node. At first generation, Gk,l,1 is a cyclic graph with the size l. For n>1, each node in the network of the previous iteration is replaced by k new cyclic subgraphs having the size l. Thus, each of the newly appeared cyclic subgraphs contains exactly one node of the network of the previous iteration and the articulation nodes degree of the first iteration is dGk,l,n=2(kn-1)/(k-1) (in Figure 2, the articulation nodes are colored by the red). The same process is used for the other iterations. In Figure 2, the first four iterations of the generalized Small-World Exponential network Gk,l,n are illustrated.Figure 2
The first four generations of the generalized Small-World Exponential networkG2,4,n.Let us compute the order, the size, the number of faces, the average degree, and the diameter of the generalized Small-World Exponential networkGk,l,n. Let VGk,l,n be the numbers of nodes created at n. From Figure 2, we notice for i from 1 to n: VGk,l,i=lk×VGk,l,i-1-(k-1)l. Then, we multiply the equation of VGk,l,n-1 by (lk), the equation of VGk,l,n-2 by (lk)2, and so on until the last equation VGk,l,1 which will be multiplied by (lk)(n-1). Summing all the obtained equations: ∑i=0n-1(lk)iVGk,l,n-i=∑i=0n-1(lk)i+1VGk,l,n-i-1-(k-1)l∑i=0n-1(lk)i. We find the following results: VGk,l,n=(lk)nVGk,l,0-(k-1)l∑i=0n-1(lk)i with VGk,l,0=1. Thus,the number of nodes ofGk,l,nis(4)VGk,l,n=lknl-1+k-1llk-1,n≥0.LetEGk,l,n be the numbers of links created at iteration n. By construction, for i from 1 to n, we have EGk,l,i=lk×EGk,l,i-1+l. Then, we multiply the equation of EGk,l,n-1 by (lk), the equation of EGk,l,n-2 by (lk)2, and so on until the last equation EGk,l,1 which will be multiplied by (lk)(n-1). Summing all the obtained equations: ∑i=0n-1(lk)iEGk,l,n-i=∑i=0n-1(lk)i+1EGk,l,n-i-1+l∑i=0n-1(lk)i. We find EGk,l,n=(lk)nEGk,l,0+l∑i=0n-1(lk)i with EGk,l,0=0. Thus,the number of links ofGk,l,nis(5)EGk,l,n=l×lkn-1lk-1,n≥0.LetFGk,l,n be the numbers of faces created at generation n. We apply Theorem 1; we obtain thatthe number of faces ofGk,l,nis(6)FGk,l,n=lkn+lk-2lk-1,n≥0.The average degree of G k , l , n is (which is approximately 3 for large n)(7)zGk,l,n=2EGk,l,nVGk,l,n=2l×lkn-1lknl-1+k-1l,n≥0.The diameterD is the maximum of the shortest distance between any two nodes (u,v) of a network: D=maxu,vd(u,v). Let DGk,l,n be the diameter of Gk,l,n created at generation n. This diameter can be calculated in two cases:(i)
If the size of cyclic subgraphsl is pair, we can calculate the diameter as follows: at iteration n=1, the diameter DGk,l,1=l/2. For n>1, the diameter of Gk,l,n increases by l at most.(ii)
If the size of cyclic subgraphsl is odd, we can calculate the diameter as follows: at iteration n=1, the diameter DGk,l,1=⌊l/2⌋. For n>1, the diameter of Gk,l,n increases by (l-1) at most.Sothe diameter ofGk,l,nis(8)DGk,l,n=l-ϵ2+l-ϵn-1withϵ=0,ifliseven,ϵ=1,iflisoddThis diameter can be presented by another formula which grows logarithmically with the number of vertices of the network indicating thatGk,l,n is a small-world network.(9)DGk,l,n=l-ϵ2+l-ϵloglkVGk,l,nlk-1-k-1ll-1-1withϵ=0,ifliseven,ϵ=1,iflisodd
### 3.2. The Number of Spanning Trees of the Generalized Small-World Exponential NetworkGk,l,n
The enumeration of spanning trees is a fundamental issue in many problems encountered in network analysis. However, explicitly determining this interesting quantity in networks is a theoretical challenge specially for the complex networks. Fortunately, the construction of the generalized Small-World Exponential networkGk,l,n makes it possible to derive the exact formula of this number using the decomposition method.Theorem 2.
LetGk,l,n denote the generalized Small-World Exponential networks. The complexity of Gk,l,n is given by the following formula:(10)τGk,l,n=llkn-1/lk-1,n≥1.Proof.
From Figure2, we see that Gk,l,n contains several cyclic subgraphs Yk,l,n. Using (2) we obtain τ(Gk,l,n)=∏δYk,l,nτ(Yk,l,n)=τ(Yk,l,n)δYk,l,n, where δYk,l,n is the number of cyclic subgraphs in Gk,l,n. In order to calculate the number of spanning trees of Gk,l,n, we need to find firstly the number of cyclic subgraphs in Gk,l,n. From our network, for i from 1 to n, we see δYk,l,i=lk×δYk,l,i-1+1. Then, we multiply the equation of δYk,l,n-1 by (lk), the equation of δYk,l,n-2 by (lk)2, and so on until the last equation δYk,l,1 which will be multiplied by (lk)n-1. Summing all the obtained equations: ∑i=0n-1(lk)iδYk,l,n-i=∑i=0n-1(lk)i+1δYk,l,n-i-1+∑i=0n-1(lk)i. We find the number of cycles in Gk,l,n: δYk,l,n=((lk)n-1)/((lk)-1). We replace it in the equation of τ(Gk,l,n); hence, we obtain τ(Gk,l,n)=l((lk)n-1)/(lk-1).
Fork=1 and l=3, the network G1,3,n is the Small-World Exponential network. Its number of spanning trees is given by the following formula [26]:(11)τG1,3,n=33n-1/2,n≥1.
## 3.1. The Construction and the Properties of the Generalized Small-World Exponential NetworkGk,l,n
The generalized Small-World Exponential network is denoted byGk,l,n with two controllable parameters: l is the size of the cyclic subgraph and k is the dimension of the cyclic subgraph, i.e., the number of the cyclic subgraphs added. The construction of Gk,l,n follows this algorithm: at n=0, we have a simple node. At first generation, Gk,l,1 is a cyclic graph with the size l. For n>1, each node in the network of the previous iteration is replaced by k new cyclic subgraphs having the size l. Thus, each of the newly appeared cyclic subgraphs contains exactly one node of the network of the previous iteration and the articulation nodes degree of the first iteration is dGk,l,n=2(kn-1)/(k-1) (in Figure 2, the articulation nodes are colored by the red). The same process is used for the other iterations. In Figure 2, the first four iterations of the generalized Small-World Exponential network Gk,l,n are illustrated.Figure 2
The first four generations of the generalized Small-World Exponential networkG2,4,n.Let us compute the order, the size, the number of faces, the average degree, and the diameter of the generalized Small-World Exponential networkGk,l,n. Let VGk,l,n be the numbers of nodes created at n. From Figure 2, we notice for i from 1 to n: VGk,l,i=lk×VGk,l,i-1-(k-1)l. Then, we multiply the equation of VGk,l,n-1 by (lk), the equation of VGk,l,n-2 by (lk)2, and so on until the last equation VGk,l,1 which will be multiplied by (lk)(n-1). Summing all the obtained equations: ∑i=0n-1(lk)iVGk,l,n-i=∑i=0n-1(lk)i+1VGk,l,n-i-1-(k-1)l∑i=0n-1(lk)i. We find the following results: VGk,l,n=(lk)nVGk,l,0-(k-1)l∑i=0n-1(lk)i with VGk,l,0=1. Thus,the number of nodes ofGk,l,nis(4)VGk,l,n=lknl-1+k-1llk-1,n≥0.LetEGk,l,n be the numbers of links created at iteration n. By construction, for i from 1 to n, we have EGk,l,i=lk×EGk,l,i-1+l. Then, we multiply the equation of EGk,l,n-1 by (lk), the equation of EGk,l,n-2 by (lk)2, and so on until the last equation EGk,l,1 which will be multiplied by (lk)(n-1). Summing all the obtained equations: ∑i=0n-1(lk)iEGk,l,n-i=∑i=0n-1(lk)i+1EGk,l,n-i-1+l∑i=0n-1(lk)i. We find EGk,l,n=(lk)nEGk,l,0+l∑i=0n-1(lk)i with EGk,l,0=0. Thus,the number of links ofGk,l,nis(5)EGk,l,n=l×lkn-1lk-1,n≥0.LetFGk,l,n be the numbers of faces created at generation n. We apply Theorem 1; we obtain thatthe number of faces ofGk,l,nis(6)FGk,l,n=lkn+lk-2lk-1,n≥0.The average degree of G k , l , n is (which is approximately 3 for large n)(7)zGk,l,n=2EGk,l,nVGk,l,n=2l×lkn-1lknl-1+k-1l,n≥0.The diameterD is the maximum of the shortest distance between any two nodes (u,v) of a network: D=maxu,vd(u,v). Let DGk,l,n be the diameter of Gk,l,n created at generation n. This diameter can be calculated in two cases:(i)
If the size of cyclic subgraphsl is pair, we can calculate the diameter as follows: at iteration n=1, the diameter DGk,l,1=l/2. For n>1, the diameter of Gk,l,n increases by l at most.(ii)
If the size of cyclic subgraphsl is odd, we can calculate the diameter as follows: at iteration n=1, the diameter DGk,l,1=⌊l/2⌋. For n>1, the diameter of Gk,l,n increases by (l-1) at most.Sothe diameter ofGk,l,nis(8)DGk,l,n=l-ϵ2+l-ϵn-1withϵ=0,ifliseven,ϵ=1,iflisoddThis diameter can be presented by another formula which grows logarithmically with the number of vertices of the network indicating thatGk,l,n is a small-world network.(9)DGk,l,n=l-ϵ2+l-ϵloglkVGk,l,nlk-1-k-1ll-1-1withϵ=0,ifliseven,ϵ=1,iflisodd
## 3.2. The Number of Spanning Trees of the Generalized Small-World Exponential NetworkGk,l,n
The enumeration of spanning trees is a fundamental issue in many problems encountered in network analysis. However, explicitly determining this interesting quantity in networks is a theoretical challenge specially for the complex networks. Fortunately, the construction of the generalized Small-World Exponential networkGk,l,n makes it possible to derive the exact formula of this number using the decomposition method.Theorem 2.
LetGk,l,n denote the generalized Small-World Exponential networks. The complexity of Gk,l,n is given by the following formula:(10)τGk,l,n=llkn-1/lk-1,n≥1.Proof.
From Figure2, we see that Gk,l,n contains several cyclic subgraphs Yk,l,n. Using (2) we obtain τ(Gk,l,n)=∏δYk,l,nτ(Yk,l,n)=τ(Yk,l,n)δYk,l,n, where δYk,l,n is the number of cyclic subgraphs in Gk,l,n. In order to calculate the number of spanning trees of Gk,l,n, we need to find firstly the number of cyclic subgraphs in Gk,l,n. From our network, for i from 1 to n, we see δYk,l,i=lk×δYk,l,i-1+1. Then, we multiply the equation of δYk,l,n-1 by (lk), the equation of δYk,l,n-2 by (lk)2, and so on until the last equation δYk,l,1 which will be multiplied by (lk)n-1. Summing all the obtained equations: ∑i=0n-1(lk)iδYk,l,n-i=∑i=0n-1(lk)i+1δYk,l,n-i-1+∑i=0n-1(lk)i. We find the number of cycles in Gk,l,n: δYk,l,n=((lk)n-1)/((lk)-1). We replace it in the equation of τ(Gk,l,n); hence, we obtain τ(Gk,l,n)=l((lk)n-1)/(lk-1).
Fork=1 and l=3, the network G1,3,n is the Small-World Exponential network. Its number of spanning trees is given by the following formula [26]:(11)τG1,3,n=33n-1/2,n≥1.
## 4. A Generalized Koch NetworkCk,l,n
In this section, another class of small-world networks called theKoch networkCn is studied analytically. This network is derived from the class of Koch curves. They are one of the interesting families of fractals. We use them to understand the geometric fractals in real systems. This Koch network incorporates some properties characterizing a majority of real-life network systems: a high clustering coefficient and a small diameter, indicating that the Koch network is a small-world network. We put forward a family of generalized Koch network Ck,l,n, where the difference relies on the size of the cyclic subgraphs and the number of the cyclic subgraphs added in each node change according to two parameters k and l. We propose analytically an algorithm of the construction of the generalized Koch network, we determine its properties and we calculate its complexity.
### 4.1. The Construction and the Properties of the Generalized Koch NetworkCk,l,n
Inspired by the algorithm of the Koch network, we propose a family of generalized Koch network asCk,l,n with two integer parameters l (the size of the cyclic subgraph) and k (the dimension of the cyclic subgraph). The algorithm of its construction is as follows: initially (n=0), Ck,l,0 is a cyclic graph with the size l. For n≥1, Ck,l,n is obtained from Ck,l,n-1 by adding k new cyclic subgraphs having the size l for each of the nodes of every existing cyclic subgraph in Ck,l,n-1. The growth process of the generalized Koch network to the next generation keeps on in a similar way. The articulation nodes degree of the first iteration is dCk,l,n=2(k+1)n (in Figure 3, the articulation nodes are colored by the green). Figure 3 illustrates the growing process of the networks for the first three generations of Ck,l,n.Figure 3
The first three generations of the generalized Koch network $C_{2,4,n}$.In this section, exact expressions for the properties of the generalized Koch network $C_{k,l,n}$ are given: its number of nodes, number of edges, number of faces, average degree, and diameter. The number of nodes of $C_{k,l,n}$ is calculated as follows. From Figure 3, we notice that for $i$ from 1 to $n$, $|V_{C_{k,l,i}}| = (lk+1)|V_{C_{k,l,i-1}}| - lk$. We multiply the equation for $|V_{C_{k,l,n-1}}|$ by $(lk+1)$, the equation for $|V_{C_{k,l,n-2}}|$ by $(lk+1)^2$, and so on, up to the equation for $|V_{C_{k,l,1}}|$, which is multiplied by $(lk+1)^{n-1}$. Summing all the obtained equations gives $\sum_{i=0}^{n-1}(lk+1)^i |V_{C_{k,l,n-i}}| = \sum_{i=0}^{n-1}(lk+1)^{i+1}|V_{C_{k,l,n-i-1}}| - lk\sum_{i=0}^{n-1}(lk+1)^i$, so $|V_{C_{k,l,n}}| = (lk+1)^n |V_{C_{k,l,0}}| - lk\sum_{i=0}^{n-1}(lk+1)^i$ with $|V_{C_{k,l,0}}| = l$. So the number of nodes of $C_{k,l,n}$ is

$$|V_{C_{k,l,n}}| = (l-1)(lk+1)^n + 1, \quad n \geq 0. \tag{12}$$

The number of edges of $C_{k,l,n}$ is calculated as follows: from Figure 3, for $i$ from 1 to $n$, $|E_{C_{k,l,i}}| = (lk+1)|E_{C_{k,l,i-1}}|$ (a geometric progression). So the number of edges of $C_{k,l,n}$ is

$$|E_{C_{k,l,n}}| = l(lk+1)^n, \quad n \geq 0. \tag{13}$$

The number of faces of $C_{k,l,n}$ is calculated similarly: from Figure 3, for $i$ from 1 to $n$, $|F_{C_{k,l,i}}| = (lk+1)|F_{C_{k,l,i-1}}| - lk$. Multiplying the equation for $|F_{C_{k,l,n-1}}|$ by $(lk+1)$, the equation for $|F_{C_{k,l,n-2}}|$ by $(lk+1)^2$, and so on, up to the equation for $|F_{C_{k,l,1}}|$, multiplied by $(lk+1)^{n-1}$, and summing all the obtained equations, we find $|F_{C_{k,l,n}}| = (lk+1)^n |F_{C_{k,l,0}}| - lk\sum_{i=0}^{n-1}(lk+1)^i$ with $|F_{C_{k,l,0}}| = 2$. So the number of faces of $C_{k,l,n}$ is

$$|F_{C_{k,l,n}}| = (lk+1)^n + 1, \quad n \geq 0. \tag{14}$$

The number of faces of $C_{k,l,n}$ can also be obtained using Theorem 1. The average degree of $C_{k,l,n}$, which tends to $2l/(l-1)$ for large $n$ (approximately 3 when $l=3$), is

$$z_{C_{k,l,n}} = \frac{2|E_{C_{k,l,n}}|}{|V_{C_{k,l,n}}|} = \frac{2l(lk+1)^n}{(l-1)(lk+1)^n + 1}, \quad n \geq 0. \tag{15}$$

Let $D_{C_{k,l,n}}$ be the diameter of $C_{k,l,n}$ created at generation $n$. For $n \geq 0$,

$$D_{C_{k,l,n}} = \frac{l-\epsilon}{2} + n(l-\epsilon), \qquad \epsilon = \begin{cases} 0 & \text{if } l \text{ is even}, \\ 1 & \text{if } l \text{ is odd}. \end{cases} \tag{16}$$

It can also be written in a form that grows logarithmically with the number of vertices of the network, indicating that $C_{k,l,n}$ is a small-world network:

$$D_{C_{k,l,n}} = \frac{l-\epsilon}{2} + (l-\epsilon)\log_{lk+1}\frac{|V_{C_{k,l,n}}| - 1}{l-1}, \tag{17}$$

with $\epsilon$ as in (16).
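The closed forms (12)–(14) can be cross-checked against Euler's formula for connected planar graphs, $|V| - |E| + |F| = 2$; a minimal sketch (ours, with an illustrative function name):

```python
# Illustrative check (function name ours): the closed forms (12)-(14) for
# C_{k,l,n} satisfy Euler's formula |V| - |E| + |F| = 2 for planar graphs.
def koch_counts(k: int, l: int, n: int):
    V = (l - 1) * (l * k + 1) ** n + 1   # equation (12)
    E = l * (l * k + 1) ** n             # equation (13)
    F = (l * k + 1) ** n + 1             # equation (14)
    return V, E, F

for k in (1, 2, 3):
    for l in (3, 4, 5):
        for n in range(5):
            V, E, F = koch_counts(k, l, n)
            assert V - E + F == 2
print("Euler's formula holds for all tested (k, l, n)")
```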
### 4.2. The Number of Spanning Trees of the Generalized Koch Network $C_{k,l,n}$
In order to calculate the number of spanning trees of the generalized Koch network $C_{k,l,n}$, we use the same method as for the networks studied before: the decomposition method.Theorem 3.
Let $C_{k,l,n}$ denote the generalized Koch network. The complexity of $C_{k,l,n}$ is given by the following formula:

$$\tau(C_{k,l,n}) = l^{(lk+1)^n}, \quad n \geq 0. \tag{18}$$

Proof.
From Figure 3, we see that $C_{k,l,n}$ contains several cyclic subgraphs $X_{k,l,n}$. Using (2), $\tau(C_{k,l,n}) = \prod_{\delta_{X_{k,l,n}}} \tau(X_{k,l,n}) = \tau(X_{k,l,n})^{\delta_{X_{k,l,n}}}$, with $\delta_{X_{k,l,n}}$ the number of cyclic subgraphs in $C_{k,l,n}$. From Figure 3, for $i$ from 1 to $n$, $\delta_{X_{k,l,i}} = (lk+1)\delta_{X_{k,l,i-1}}$ (a geometric progression). So the number of cyclic subgraphs in $C_{k,l,n}$ is $\delta_{X_{k,l,n}} = (lk+1)^n$. Substituting this into the equation for $\tau(C_{k,l,n})$ with $\tau(X_{k,l,n}) = l$, we obtain $\tau(C_{k,l,n}) = l^{(lk+1)^n}$, $n \geq 0$. For $k=1$ and $l=3$, the network $C_{1,3,n}$ is the Koch network. Its number of spanning trees is given by the following formula [16]:

$$\tau(C_{1,3,n}) = 3^{4^n}, \quad n \geq 0. \tag{19}$$
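For small generations, (18) can also be verified independently via Kirchhoff's matrix-tree theorem, under which any cofactor of the graph Laplacian equals $\tau$. The sketch below is our own reconstruction of the growth rule, not the authors' code; for $C_{1,3,1}$ it recovers $\tau = 3^4 = 81$:

```python
# Our own reconstruction of the C_{k,l,n} growth rule (not the authors' code):
# every node of every existing l-cycle receives k new l-cycles per generation.
import numpy as np

def build_koch(k: int, l: int, n: int):
    cycles = [list(range(l))]              # C_{k,l,0} is a single l-cycle
    next_id = l
    for _ in range(n):
        new_cycles = []
        for cyc in cycles:                 # every existing cyclic subgraph
            for v in cyc:                  # every node of that subgraph
                for _ in range(k):         # gets k new l-cycles
                    fresh = list(range(next_id, next_id + l - 1))
                    next_id += l - 1
                    new_cycles.append([v] + fresh)
        cycles += new_cycles
    edges = [(c[i], c[(i + 1) % l]) for c in cycles for i in range(l)]
    return next_id, edges                  # node count, edge list

def count_spanning_trees(num_nodes: int, edges) -> int:
    L = np.zeros((num_nodes, num_nodes))   # graph Laplacian
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return round(np.linalg.det(L[1:, 1:])) # any cofactor equals tau

k, l, n = 1, 3, 1
V, E = build_koch(k, l, n)
assert count_spanning_trees(V, E) == l ** ((l * k + 1) ** n)  # 3**4 = 81
print("Kirchhoff agrees with equation (18)")
```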
## 5. The Spanning Tree Entropy of the Generalized Small-World Exponential Network and the Generalized Koch Network
The spanning tree number of the generalized small-world networks grows exponentially, so we can calculate their spanning tree entropy according to the definition of the entropy in Section 2. Let $\rho_{G_{k,l,n}}$ be the entropy of spanning trees for the generalized Small-World Exponential network and $\rho_{C_{k,l,n}}$ be the entropy of spanning trees for the generalized Koch network.Corollary 4.
The entropy of spanning trees of the generalized Small-World Exponential network $G_{k,l,n}$ is

$$\rho_{G_{k,l,n}} = \frac{\ln l}{l-1}. \tag{20}$$
The entropy of spanning trees of the generalized Koch network $C_{k,l,n}$ is

$$\rho_{C_{k,l,n}} = \frac{\ln l}{l-1}. \tag{21}$$

From these results, we find that the generalized Small-World Exponential network and the generalized Koch network have the same entropy even though their complexities are different. The entropy depends only on the size of the cyclic subgraphs $l$ and not on the dimension of the cyclic subgraphs $k$. This means that the two networks have the same robustness despite the fact that their structures and properties are different. Notice that the degree of the articulation nodes of the first iteration increases with the value of $k$, yet this does not influence the spanning tree entropy and, therefore, does not influence the robustness of these two small-world networks. Figure 4 shows that increasing the size of the cyclic subgraphs $l$ decreases the entropy of spanning trees of $G_{k,l,n}$ and $C_{k,l,n}$. This result proves that networks with a low value of $l$ are more robust than those with a high value of $l$.
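The trend shown in Figure 4 follows directly from (20) and (21); a short check (ours):

```python
# A short check (ours) of the trend in Figure 4: ln(l)/(l - 1) from
# equations (20) and (21) decreases as the cycle size l grows.
import math

for l in range(3, 11):
    print(l, round(math.log(l) / (l - 1), 4))
```

For $l = 3$ this gives $\ln 3 / 2 \approx 0.549$, the value reported in Table 1.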
Figure 4
The spanning tree entropy of the generalized Small-World Exponential network and the generalized Koch network.From Table 1, we compare the spanning tree entropy of the Small-World Exponential network $G_{1,3,n}$ and the Koch network $C_{1,3,n}$ (0.549) with those of other networks having the same average degree 3. The value of their spanning tree entropy is the smallest known for networks with average degree 3. This reflects the fact that the Koch network and the Small-World Exponential network are less robust, and their topologies less heterogeneous, than those of other networks with the same average degree.Table 1
The spanning tree entropy of several networks having the same average degree.

| Type of network | $z$ | $\rho$ |
| --- | --- | --- |
| Koch network $C_{1,3,n}$ | 3 | 0.549 |
| Small-World Exponential network $G_{1,3,n}$ | 3 | 0.549 |
| The Hanoi network [18] | 3 | 0.677 |
| The 2-Flower network [19] | 3 | 0.6931 |
| The 3-2-12 lattices [20] | 3 | 0.721 |
| The 4-8-8 bathroom tile [20] | 3 | 0.787 |
| Honeycomb lattice [21] | 3 | 0.807 |
## 6. Conclusion
In this paper, we have studied the problem of efficiently computing the number of spanning trees in two well-known families of small-world networks: the generalized Small-World Exponential network and the generalized Koch network. We have examined their construction and provided a detailed analysis of their topological properties. We have obtained exact solutions for their number of spanning trees using the decomposition method. We have further calculated and compared their spanning tree entropy. The results show that these two generalized small-world networks have the same entropy of spanning trees although they do not have the same complexity. As future work, we intend to analyze other types of complex networks and to use a new combinatorial method that facilitates the calculation of their number of spanning trees.
---
*Source: 1017308-2018-09-03.xml* | 2018 |
# Estimating the Physical Properties of Nanofluids Using a Connectionist Intelligent Model Known as Gaussian Process Regression Approach
**Authors:** Tzu-Chia Chen; Ali Thaeer Hammid; Avzal N. Akbarov; Kaveh Shariati; Mina Dinari; Mohammed Sardar Ali
**Journal:** International Journal of Chemical Engineering
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1017341
---
## Abstract
This work aims to develop a robust machine learning model for the prediction of the relative viscosity of nanoparticles (NPs) including Al2O3, TiO2, SiO2, CuO, SiC, and Ag based on the most important input parameters affecting them covering the size, concentration, thickness of the interfacial layer, and intensive properties of NPs. In order to develop a comprehensive artificial intelligence model in this study, sixty-nine data samples were collected. To this end, the Gaussian process regression approach with four basic function kernels (Matern, squared exponential, exponential, and rational quadratic) was exploited. It was found that Matern outperformed other models with R2 = 0.987, MARE (%) = 6.048, RMSE = 0.0577, and STD = 0.0574. This precise yet simple model can be a good alternative to the complex thermodynamic, mathematical-analytical models of the past.
---
## Body
## 1. Introduction
Nanoscience researchers have recently been interested in the viscosity and thermal conduction of nanofluids [1, 2]. The lubrication and thermal performances of a nanofluid depend on its viscosity [3, 4]. To use a nanofluid for thermal management purposes, a trade-off must be struck between a low viscosity level and a high thermal conduction level [5–7]. Temperature, fluid form, and the shape, size, and load of the nanoparticles are determinants of this trade-off [8, 9].Research has shown that viscosity strongly influences nanofluids in solar energy systems through a direct effect on the pump work and pressure drop [10–12]. These fluids can be used more efficiently in solar energy systems through detailed knowledge of their viscosity [3, 13]. Accurate experimental works have been conducted on the viscosity evaluation of hybrid nanofluids [14–16]. However, experimental evaluation is expensive and time-consuming. Researchers have therefore introduced approaches to estimate nanofluid viscosity [17, 18]. Additionally, in recent years, new modeling methods based on artificial intelligence, such as ANFIS, SVM, and ANN, have been used in a wide variety of sciences [19–22]. These approaches are mostly based on soft computing and theoretical calculations [23]. Einstein theoretically developed a framework to estimate nanofluid viscosity at small volume fractions [24].Traditional correlation-based methodologies have also been developed for nanofluid viscosity prediction [25]. However, such methodologies have been found to underestimate nanofluid viscosity because they lack important parameters that play key roles in nanofluid rheology [18, 26]. Data mining and machine learning have been widely employed for the relative viscosity estimation of hybrid nanofluids in a variety of empirical conditions [23, 27, 28]. Artificial neural networks (ANNs), support vector machines (SVMs), and ANFIS-GA are among the common machine learning techniques [29–32]. Researchers have introduced generic machine learning algorithms in recent years to estimate the viscosity of nanofluids based on data mining of nanofluid synthesis. Alrashed et al. introduced ANN and ANFIS algorithms for the viscosity estimation of a carbon-based nanofluid [33]. A total of 129 experimental data samples were exploited to implement optimized viscosity estimation through the ANN.Likewise, Bahrami et al. proposed twenty-four ANN structures to estimate the viscosity of non-Newtonian hybrid Fe-Cu nanofluids in a mixed water-ethylene glycol base fluid [34]. Bayesian regularization (BR) outperformed the other methods in the prediction of viscosity. They argued that a rise in the number of neurons in the hidden layer led to a slight performance improvement. Ahmadi et al. comparatively studied a number of machine learning algorithms for the dynamic viscosity prediction of the CuO-water nanofluid [35]. They proposed ANN-MLP, MARS, MPR, M5-tree, and GMDH algorithms based on the nanofluid concentration, temperature, and nanostructure size. The ANN-MLP was found to have the highest predictive performance. Amin et al. developed a GMDH-ANN method to estimate the viscosity of Fe2O3 nanoparticles; the RMSE was obtained to be 0.0018 [35, 36].This study aims to describe an artificial intelligence-based model for accurately predicting the relative viscosity of nanoparticles. For this purpose, the GPR model has been used with its four main kernel functions: Matern, squared exponential, exponential, and rational quadratic.
These kernel functions were selected because of their demonstrated ability to predict and model a variety of data reported in the literature [37–41]. The GPR model was chosen because it is newer and less complicated than analytical mathematical models. Furthermore, an accurate model helps work around limitations such as the cost and time associated with accurate measurement and monitoring of laboratory data. This study employed these strategies and used various statistical methods to analyze and predict the target data.
## 2. GPR
GPR is an efficient probabilistic model developed based on kernels [42]. A Gaussian process is a collection of random variables, any finite number of which have a joint multivariate Gaussian distribution [43, 44]. Let $X$ and $Y$ denote the input and output domains, from which $n$ pairs $(x_i, y_i)$ are drawn independently with equal distribution. It is assumed that a mean function $\mu: X \rightarrow \mathbb{R}$ and a covariance function $k: X \times X \rightarrow \mathbb{R}$ define the Gaussian process for the variables [45, 46]. GPR is capable of recognizing the random variable $f(x)$ for supplied predictors $x$, where $f$ is the randomly featured function [47, 48]. The present work assumed independent observation errors with zero mean (i.e., $\mu(x) = 0$) and noise variance $\sigma^2$, with $f(x)$ a Gaussian process with covariance $k$ [49–51]:

$$y = (y_1, \ldots, y_n) \sim N\!\left(0,\; K + \sigma^2 I\right), \tag{1}$$

where $I$ is the identity matrix and $K_{ij} = k(x_i, x_j)$. As $y \mid X \sim N(0, K + \sigma^2 I)$ is normal, the conditional distribution of the test labels given the training pairs, $Y_* \mid Y, X, X_*$, is also normal, $Y_* \mid Y, X, X_* \sim N(\mu_*, \Sigma_*)$. As a result [52, 53],

$$\mu_* = K(X_*, X)\left[K(X, X) + \sigma^2 I\right]^{-1} Y, \tag{2}$$

$$\Sigma_* = K(X_*, X_*) - K(X_*, X)\left[K(X, X) + \sigma^2 I\right]^{-1} K(X, X_*), \tag{3}$$

where $K(X, X_*)$ is the $n \times n_*$ matrix of covariances evaluated at each training-testing pair, and $K(X, X)$ and $K(X_*, X_*)$ are defined similarly [54–56]. Also, $X$ denotes the training inputs, $Y$ stands for the training data labels, and $X_*$ represents the testing data [57]. The covariance function creating the positive semidefinite covariance matrix $K_{ij} = k(x_i, x_j)$ in equations (2) and (3) is quantified by the chosen kernel $k$ and noise level $\sigma^2$ for inference. Efficient GPR training requires the selection of a suitable covariance function and its parameters; the actual GPR model function is determined by the covariance function [58, 59]. It encodes the geometric structure of the training samples. Thus, the mean and covariance functions (hyperparameters) should be estimated from the data so that prediction can be performed accurately [60]. As this model has been used in many recent studies in different fields of science, more details are available elsewhere [61–65].
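To make equations (1)–(3) concrete, here is a minimal, self-contained sketch in Python/scikit-learn (the paper's own implementation was in MATLAB 2014; the synthetic data, seed, and weights below are our own illustrative choices). Note that a Matern kernel with $\nu = 1/2$ is the exponential kernel, and the RBF kernel is the squared exponential:

```python
# A minimal, self-contained sketch (ours; the paper used MATLAB 2014). The
# synthetic inputs, seed, and weights are illustrative assumptions only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, RBF, RationalQuadratic

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(69, 4))   # 69 samples, 4 inputs, as in the paper
y = X @ np.array([0.5, 1.0, -0.3, 0.8]) + 0.05 * rng.standard_normal(69)

kernels = {
    "Matern":              Matern(nu=1.5),
    "exponential":         Matern(nu=0.5),   # nu = 1/2 is the exponential kernel
    "squared exponential": RBF(),
    "rational quadratic":  RationalQuadratic(),
}
for name, kernel in kernels.items():
    gpr = GaussianProcessRegressor(kernel=kernel, alpha=1e-3).fit(X, y)
    print(f"{name}: R^2 = {gpr.score(X, y):.4f}")  # R^2 on the training data
```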
## 3. Preprocessing Procedure
As mentioned, GPR was used to estimate the relative viscosity of nanoparticles from the size, concentration, thickness of the interfacial layer, and intensive properties of NPs. A total of sixty-nine data samples were exploited [66]. MATLAB 2014 was used to model these data. The input data were split into a training subset (75%) and a testing subset (25%). Data normalization was carried out as [67–69]

$$D_n = \frac{2(x - x_{\min})}{x_{\max} - x_{\min}} - 1, \tag{4}$$

where $D$ denotes the parameter and the subscripts $n$, max, and min represent the normalized, maximum, and minimum values, respectively. The normalized data varied from −1 to 1. The relative viscosity of nanoparticles was the output obtained from the size, concentration, thickness of the interfacial layer, and intensive properties of NPs.
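A one-function sketch (ours) of the min-max normalization in equation (4):

```python
# Sketch (ours) of the min-max normalization in equation (4), mapping to [-1, 1].
import numpy as np

def normalize(x: np.ndarray) -> np.ndarray:
    return 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0

print(normalize(np.array([10.0, 20.0, 30.0])))  # -> [-1.  0.  1.]
```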
## 4. Models’ Evaluation
Model performance was evaluated using the mean absolute relative error percentage (MARE%, also called the average relative deviation, ARD%), mean squared error (MSE), coefficient of determination (R2), root mean square error (RMSE), and standard deviation (STD) [70–73]. These evaluation indices are written as

$$\mathrm{STD} = \left[\frac{1}{N-1}\sum_{i=1}^{N}\left(\mathrm{error}_i - \overline{\mathrm{error}}\right)^2\right]^{0.5},$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(X_i^{\mathrm{actual}} - X_i^{\mathrm{predicted}}\right)^2},$$

$$\mathrm{MARE} = \frac{100}{N}\sum_{i=1}^{N}\left|\frac{X_i^{\mathrm{actual}} - X_i^{\mathrm{predicted}}}{X_i^{\mathrm{actual}}}\right|,$$

$$R^2 = 1 - \frac{\sum_{i=1}^{N}\left(X_i^{\mathrm{actual}} - X_i^{\mathrm{predicted}}\right)^2}{\sum_{i=1}^{N}\left(X_i^{\mathrm{actual}} - \overline{X}^{\mathrm{actual}}\right)^2},$$

$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(X_i^{\mathrm{actual}} - X_i^{\mathrm{predicted}}\right)^2, \tag{5}$$

where $N$ denotes the number of data samples, the superscripts actual and predicted represent the experimental and model-calculated quantities, respectively, and $\overline{X}^{\mathrm{actual}}$ denotes the mean experimental relative viscosity of nanoparticles [74].
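The indices in equation (5) are straightforward to compute; a small sketch (ours, with hypothetical values in the usage lines):

```python
# Sketch (ours) of the evaluation indices in equation (5); the sample arrays
# in the usage lines are hypothetical values for illustration.
import numpy as np

def metrics(actual: np.ndarray, predicted: np.ndarray) -> dict:
    err = actual - predicted
    return {
        "MSE":   float(np.mean(err ** 2)),
        "RMSE":  float(np.sqrt(np.mean(err ** 2))),
        "MARE%": float(100 * np.mean(np.abs(err / actual))),
        "R2":    float(1 - np.sum(err ** 2) / np.sum((actual - actual.mean()) ** 2)),
        "STD":   float(np.std(err, ddof=1)),  # sample SD of the errors
    }

actual = np.array([1.00, 1.10, 1.25, 1.40])
predicted = np.array([0.98, 1.12, 1.20, 1.43])
print(metrics(actual, predicted))
```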
## 5. Results and Discussion
The models were evaluated using a variety of graphical techniques. Figure 1 shows the evaluation results of the models. As can be seen, all kernel functions of the GPR model showed high accuracy in the estimation of the relative viscosity of nanoparticles.Figure 1
Simultaneous observation of modeled and real data to visually observe the accuracy of the model in different phases of modeling: (a) Matern, (b) exponential, (c) squared exponential, and (d) rational quadratic.
Figure 2 shows the regression diagrams. A strong linear fit was obtained between the experimental data and the model estimates.Figure 2
Linear regression on the models proposed in this research: (a) Matern, (b) exponential, (c) squared exponential, and (d) rational quadratic.
Figure 3 shows the errors of the models in the estimation of the relative viscosity of nanoparticles (i.e., the difference between the estimates and the experimental data). As can be seen, the errors were small, with the majority of the data samples distributed around the zero line. According to our calculations, all kernels had an average relative deviation of less than 30%.Figure 3
Relative deviation values obtained by statistical analysis to determine the accuracy of the proposed models: (a) Matern, (b) exponential, (c) squared exponential, and (d) rational quadratic.
Moreover, the predictive performance of the models in the estimation of the relative viscosity of nanoparticles was evaluated statistically. Table 1 compares the statistical errors of the models on the training data, the testing data, and the full dataset.Table 1
Different statistical parameters of the proposed models in order to determine their accuracy in predicting the target parameter.
| Model | Phase | R2 | MARE (%) | MSE | RMSE | STD |
| --- | --- | --- | --- | --- | --- | --- |
| Matern | Train | 0.983 | 1.939 | 0.003870406 | 0.0622 | 0.0585 |
| Matern | Test | 0.993 | 18.614 | 0.003334471 | 0.0577 | 0.0554 |
| Matern | Total | 0.987 | 6.048 | 0.003738364 | 0.0577 | 0.0574 |
| Exponential | Train | 0.982 | 1.996 | 0.004097029 | 0.0640 | 0.0603 |
| Exponential | Test | 0.989 | 22.547 | 0.004983314 | 0.0706 | 0.0678 |
| Exponential | Total | 0.985 | 7.059 | 0.004315389 | 0.0706 | 0.0617 |
| Squared exponential | Train | 0.982 | 2.185 | 0.004142107 | 0.0644 | 0.0597 |
| Squared exponential | Test | 0.990 | 22.587 | 0.004904529 | 0.0700 | 0.0664 |
| Squared exponential | Total | 0.985 | 7.211 | 0.00432995 | 0.0700 | 0.0609 |
| Rational quadratic | Train | 0.972 | 3.027 | 0.006638142 | 0.0815 | 0.0740 |
| Rational quadratic | Test | 0.988 | 23.527 | 0.005520366 | 0.0743 | 0.0697 |
| Rational quadratic | Total | 0.978 | 8.078 | 0.006362748 | 0.0743 | 0.0725 |
### 5.1. Outlier Detection
The experimental data used to develop a model strongly influence its reliability. Outlier data must be detected and excluded, as they behave differently from the other data samples; this enhances the reliability of the model. To detect outliers, standardized residuals and leverage analysis were employed. The candidate outliers were evaluated using the Williams plot [75, 76], which plots the standardized residuals versus the hat values. To identify the feasible region, hat values are obtained as the diagonal elements of the hat matrix [76]:

$$H = X\left(X^T X\right)^{-1} X^T, \tag{6}$$

where $X$ is an $n \times p$ matrix, $n$ is the number of data samples, and $p$ is the number of inputs. The feasible region is the rectangle bounded by the residual cutoff and the warning leverage value. The warning leverage is quantified as [77, 78]

$$H^* = \frac{3(p+1)}{N}. \tag{7}$$

It is worth mentioning that the cutoff is typically set to 3 for standardized residuals [79, 80]. Data samples not positioned within the feasible region are assumed to be outliers. Figure 4 shows the Williams plot. According to this figure, Matern, exponential, squared exponential, and rational quadratic were each found to have only two outliers.
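A compact sketch (ours; the simple sample-SD standardization of the residuals is an illustrative choice) of the leverage analysis behind the Williams plot:

```python
# Sketch (ours) of the leverage analysis behind the Williams plot; the simple
# sample-SD standardization of residuals is an illustrative choice.
import numpy as np

def williams_outliers(X: np.ndarray, residuals: np.ndarray) -> np.ndarray:
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix, equation (6)
    hat_values = np.diag(H)
    h_star = 3 * (p + 1) / n                  # warning leverage, equation (7)
    std_res = residuals / residuals.std(ddof=1)
    return (hat_values > h_star) | (np.abs(std_res) > 3)  # cutoff = 3
```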
Figure 4
Analysis to determine the effective suspicious points on the proposed models: (a) Matern, (b) exponential, (c) squared exponential, and (d) rational quadratic.
## 6. Conclusions
This study adopted the GPR approach to estimate the relative viscosity of nanoparticles based on the size, concentration, thickness of the interfacial layer, and intensive properties of NPs. The Matern kernel was found to outperform exponential, squared exponential, and rational quadratic in the estimation of outputs. MARE was calculated to be 6.048%, 7.059%, 7.211%, and 8.078% for them, respectively. Moreover, the dependence of the target values on the inputs was measured using a sensitivity analysis. The proposed model could be significantly helpful in mechanical and chemical applications, particularly in heat transfer evaluation for heat exchangers where a nanofluid (e.g., CNT-water nanofluid) is employed.
---
*Source: 1017341-2022-06-09.xml* | 2022 |
# Effects of Reactive Nitrogen Scavengers on NK-Cell-Mediated Killing of K562 Cells
**Authors:** Yili Zeng; Qinmiao Huang; Meizhu Zheng; Jianxin Guo; Jingxin Pan
**Journal:** Journal of Biomedicine and Biotechnology
(2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/101737
---
## Abstract
This study explored the effects of reactive nitrogen metabolites (RNMs) on natural-killer- (NK-) cell-mediated killing of K562 cells and the ability of RNM scavengers, such as tiopronin (TIP), glutamylcysteinylglycine (GSH), and histamine dihydrochloride (DHT), to reverse the suppressive effect of RNMs. We administered exogenous and endogenous RNMs in the NK + K562 culture system and then added RNM scavengers. The concentrations of RNMs, TNF-β, and IFN-γ, the NK-cell cytotoxicity (NCC), and the percentage of living NK cells were then examined. We found that both exogenous and endogenous RNMs caused the KIR to decrease (P<0.01); however, RNM scavengers such as TIP and GSH rescued this effect dose dependently. In conclusion, our data suggest that RNM scavengers such as TIP and GSH enhance the antineoplastic activity of NK cells.
---
## Body
## 1. Introduction
There are large numbers of monocytes/macrophages (MO) and NK cells within and around malignant tumors. Compared with other parts of the body, the function of NK cells in a tumor and its surrounding tissue is remarkably decreased [1]. Current antitumor immunotherapies mainly use adoptive immunotherapy (AIT), which involves cells such as cytotoxic T lymphocytes (CTLs), lymphokine-activated killer cells (LAK cells), tumor-infiltrating lymphocytes (TILs), multicytokine-induced killer cells (CIK cells), donor lymphocyte infusions (DLIs), antineoplastic lymphocyte clones, and haplotype lymphocyte infusions. T cells and NK cells are the major effector cells, whereas interleukin-2 (IL-2) is the main activator of T/NK cells. However, most studies using IL-2 alone to treat leukemia in vivo have shown low efficacy, with only a few patients achieving remission. The main reason for this result is that certain monocytes/macrophages that can inhibit the antitumor activity of lymphocytes have been quantitatively shown to exist in and around tumor tissue. MO participate in tumor-induced immune suppression by secreting cytokines and, in particular, by producing reactive oxygen metabolites (ROMs) and reactive nitrogen metabolites (RNMs) [2]. Studies have confirmed that the ROMs yielded by MO during respiratory bursts inhibit the antitumor activity of lymphocytes. DHT, TIP, and GSH can reverse this ROM-mediated inhibition of the antitumor activity of NK cells [3, 4]. In our previous studies, we also demonstrated that TIP and GSH were superior to DHT in reversing the suppression of the antitumor activity of T/NK cells by ROMs [5]. When respiratory bursts occur, MO yield not only ROMs but also RNMs, which include nitrogen monoxide (NO), NO2, NO2−, NO3−, and peroxynitrite (ONOO−). The function of RNMs is similar to that of ROMs; however, RNMs also have nitrogenation activity. Peroxynitrite (ONOO−), once acidified, immediately converts to peroxynitrous acid in the excited state, which has a stronger oxidizing activity and simultaneously yields both nitrogen dioxide (NO2) and OH analogs. These substances are more toxic than ROMs [6]. Kono et al. [7] speculated that, in cancer-bearing animals, reactive nitrogen species induce the downregulation of CD3ζ, which is an important signal transduction molecule in T/NK cells. Thus, the antitumor immunosuppression caused by RNMs should not be neglected. The immunotolerance to tumors induced by RNMs may be similar to or stronger than that induced by ROMs. In our previous studies, we demonstrated that ROMs produced by MO result in tumor immunosuppression [8]. However, studies examining whether RNMs cause antitumor immunosuppression have not yet been reported. This research investigates the effects of exogenous and endogenous RNMs on NK-cell-mediated killing of K562 cells and the ability of RNM scavengers such as TIP, GSH, and DHT to reverse the suppressive effect of RNMs.
## 2. Materials and Methods
### 2.1. Materials
The K562 cell line was provided by the Union Hospital of Fujian Province. Fresh leukocyte-enriched preparations from healthy donors were obtained from the Quanzhou City Blood Center. The reagents and their manufacturers were as follows: NK Cell Negative Isolation Kit, Dynal; MTT, Trypan Blue, and Propidium Iodide, Sigma; CFSE, Dojindo (Japan); interleukin-2, Double Heron (Beijing); phytohemagglutinin, Yihua (Shanghai); histamine dihydrochloride, Sigma; tiopronin, Henan Xinyi Medicine Industry, Ltd.; hydroxyl radical detection kit and nitrogen monoxide detection kit, Jiancheng (Nanjing); human IFN-γ ELISA Kit, Xinbosheng (Guangzhou); TNF-β ELISA Kit, Boster (Wuhan).
### 2.2. Isolation of Mononuclear Cells Rich in NK Cells (E) [9]
After PBMCs were isolated using density gradient centrifugation, they were incubated with the immunomagnetic beads of the NK Cell Negative Isolation Kit at low temperature. The NK cells were then isolated by magnetic sorting. Flow cytometry (FCM) was applied to detect cells stained with CD3-FITC/CD56+CD16-PE. CD3−/CD56+/CD16+ cells accounted for 85% of the preparation, and more than 95% of the cells were shown to be viable by the trypan blue exclusion assay.
### 2.3. Isolation of Mononuclear Cells Rich in Monocytes (MO)
After isolation by density gradient centrifugation, the PBMCs were cultured adherently to isolate the monocytes, which were then identified by a nonspecific carboxylesterase staining method. MO constituted 76.3% of the total cells, and more than 95% of the cells were shown to be viable by the trypan blue exclusion assay.
### 2.4. Viable NK-Cell Counting (CFSE-PI Double Staining Method)
CFSE labeling: a working solution (1 μM, prepared from 5 mM CFSE stock) was diluted with 1 mL PBS containing 10% FCS, and the dilution was incubated with 1 mL of the cells at 37°C for 5–10 minutes. PI staining: the cells were stained with PI at a concentration of 2.5 μg/mL for 10 minutes. Stained cells were analyzed using a COULTER flow cytometer. The fluorescence intensities of CFSE and PI were detected in the FL1 and FL3 channels, respectively. The NK cells were first gated as CFSE-positive cells. CFSE and PI double-positive cells (upper-right quadrant) were dead cells, and viable NK cells appeared in the upper-left quadrant. The percentage of viable NK cells was taken as the percentage of events in the upper-left quadrant.
### 2.5. ROM Assay
ROM production was assayed by colorimetry following the instructions of the hydroxyl radical detection kit (Jiancheng, Nanjing).
### 2.6. RNM Assay (Nitrate Reductase Method)
RNM production was assayed indirectly by colorimetry (nitrate reductase method) following the instructions of the nitric oxide detection kit (Jiancheng, Nanjing).
### 2.7. NK-Cell Cytotoxicity (NCC)
NCC was assayed by the MTT method.
### 2.8. TNF-β and IFN-γ Assay
The IFN-γ and TNF-β levels were assayed to indirectly reflect the activity of NK cells by double antibody sandwich enzyme-labeled immunosorbent assay, according to the manufacturers’ instructions. The concentration gradients of the standard preparations and their corresponding optical density results were imported into the program SPSS 13.0 to generate standard calibration equations for conversion of OD values to concentrations. These calibration equations were used to determine the concentrations of IFN-γ and TNF-β.
#### 2.8.1. The Effect of ONOO− on the Activity of NK Cells
NK cells (E) and K562 cells (T) were cultured in 96-well plates at an E : T ratio of 10 : 1. The cells were cocultured at 37°C in an atmosphere of 5% carbon dioxide (CO2) and saturated humidity. RNM production was measured 6 hours later, and the TNF-β and IFN-γ levels were determined 24 hours later. In addition, the number of viable NK cells was measured by FCM 6 and 24 hours later. The equation for NCC was $\mathrm{NCC} = [1 - (\mathrm{OD}_{E/T} - \mathrm{OD}_E)/\mathrm{OD}_T] \times 100\%$ [10]. All measurements were carried out in triplicate. NK cells, K562 cells, and NK cells + K562 cells were used as blank groups, and DMEM containing 10% FBS was used as the holo-blank sample. The production of RNMs and the extent of NK-cell-mediated inhibition of K562 cells were measured at the indicated time points, and the data were analyzed to evaluate the relationship between RNM levels and the activity of NK cells.
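As a small illustration (ours, with hypothetical optical-density readings), the NCC formula can be evaluated directly:

```python
# Sketch (ours) of NCC = [1 - (OD_{E/T} - OD_E)/OD_T] x 100%; the optical
# densities below are hypothetical values for illustration.
def ncc(od_et: float, od_e: float, od_t: float) -> float:
    """od_et: effector+target well; od_e: effectors alone; od_t: targets alone."""
    return (1 - (od_et - od_e) / od_t) * 100

print(ncc(od_et=0.95, od_e=0.40, od_t=0.80))  # -> 31.25 (%)
```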
#### 2.8.2. The Effect of Endogenous RNM on the Activity of NK Cells
IL-2 (150 U/mL) and PHA (60 μg/mL) were administered to cocultures of NK cells and MO at an E : MO ratio of 10 : 2. After 24 hours of coculture, K562 cells were added at an E : T ratio of 10 : 1. RNM production was measured 6 hours later, while the TNF-β and IFN-γ levels and the KIR were measured 24 hours later. Each group was tested three times. The blank groups containing IL-2/PHA were the same as those described in Section 2.8.1.
#### 2.8.3. The Effect of Different Dosages of RNM Scavengers on the Activity of NK Cells
In the NK + MO (E : MO = 10 : 2) mixed-cell culture system, IL-2 (150 U/mL) and PHA (60 μg/mL) were first administered. After 24 hours of coculture, K562 cells were added at a ratio of E : T = 10 : 1. At the same time, different concentrations of DHT (10 μmol/L, 20 μmol/L, 50 μmol/L), TIP (25 μmol/L, 50 μmol/L, 100 μmol/L, 250 μmol/L), and GSH (25 μmol/L, 50 μmol/L, 100 μmol/L, 250 μmol/L) were added to separate wells. The production of RNMs was measured after another 6 hours, and the levels of TNF-β and IFN-γ were measured after 48 hours. In addition, MTT was used to measure the KIR. All the experiments for each group were repeated three times. Meanwhile, IL-2/PHA + NK + K562 cultures (Control 1) and IL-2/PHA + NK + K562 + MO cultures (Control 2) were used as the blank control groups. The levels of RNMs, TNF-β and IFN-γ, and the KIR without DHT, TIP, and GSH were compared with the corresponding values after the addition of the different doses of the drugs. This allowed us to determine which doses of DHT, TIP, and GSH could effectively reverse the inhibitory effect of MO on the antitumor activity of NK cells.
#### 2.8.4. The Effect of Different Combinations of RNM Scavengers on the Activity of NK Cells
In the NK + MO (E : MO = 10 : 2) mixed-cell culture system, IL-2 (150 U/mL) and PHA (60 μg/mL) were administered first. After 24 hours of coculture, K562 cells were added at a ratio of E : T = 10 : 1, and at the same time, different combinations of DHT (20 μmol/L), TIP (50 μmol/L, 100 μmol/L), and GSH (50 μmol/L) were added to each well. The production of RNMs was measured after another 6 hours, and the levels of TNF-β and IFN-γ were measured after 48 hours. In addition, MTT was used to measure the KIR. All experiments were repeated three times. Meanwhile, IL-2/PHA + NK + K562 cultures and IL-2/PHA + NK + K562 + MO cultures were used as the blank control groups. The levels of RNMs, TNF-β and IFN-γ, and NCC with single-agent DHT, TIP, and GSH were compared with the corresponding values after the addition of the different combinations of RNM scavengers. This allowed us to determine which combination of DHT, TIP, and GSH could effectively reverse the inhibitory effect of MO on the antitumor activity of NK cells.
### 2.9. Statistical Analysis
SPSS 13.0 statistical software was used to analyze the results. Measurements are reported as $\bar{x} \pm \mathrm{SD}$. The LSD t-test was used when variances were homogeneous, while the Dunnett T3 test was used under heterogeneity of variance for multiple comparisons of group means. P<0.05 was taken as the level of significance.
NK cells (E) and K562 cells (T) were cultured in 96-well plates at a ratio (E : T) of 10 : 1. The cells were cocultured at 37°C in an atmosphere of 5% carbon dioxide (CO2) and saturated humidity. RNM production was measured 6 hours later, and the TNF-β and IFN-γ levels were determined 24 hours later. In addition, the number of viable NK cells was measured by FCM 6 and 24 hours later. The equation for NCC was NCC = [1 − (ODE/T − ODE)/ODT] × 100% [10]. All measurements were carried out in triplicate. NK cells, K562 cells, and NK cells + K562 cells were used as blank groups, and DMEM containing 10% FBS was used as the holo-blank sample. The production of RNM and the extent of the inhibition of NK on K562 cells were measured at the indicated time points, and the data were analyzed to evaluate the relationship between RNM levels and the activity of NK cells.
## 2.8.2. The Effect of Endogenous RNM on the Activity of NK Cell
IL-2 (150 U/mL) and PHA (60 g/mL) were administered to cocultures of NK cells and MO at a ratio (E : MO) of 10 : 2. After 24 hours of coculture, K562 cells were added at an E : T ratio of 10 : 1. RNM production was measured 6 hours later, while TNF-β and IFN-γ levels and KIR were measured 24 hours later. Each group was tested three times. Blank groups containing IL-2/PHA were the same as those described in paragraph 2.3.
## 2.8.3. The Effect of Different Dosages of RNM Scavengers on the Activity of NK Cells
In the NK + MO (E : MO = 10 : 2) mixed-cell culture system, IL-2 (150 U/mL) and PHA (60 ug/mL) were first administered. After 24 hours of coculture, K562 cells were added at a ratio of E : T = 10 : 1. At the same time, different concentrations of DHT (10 umol/L, 20 umol/L, 50 umol/L) and TIP (125 mol/L, 50 mol/L, 100 mol/L, 250 mol/L) and GSH (25 mol/L, 50 mol/L, 100 mol/L, 250 mol/L) were added to separate wells. The production of RNM was measured after another 6 hours, and the levels of TNF-β and IFN-γ were measured after 48 hours. In addition, MTT was used to measure KIR. All the experiments for each group were repeated three times. Meanwhile, IL-2/PHA + NK + K562 cultures (Control 1) and IL-2/PHA + NK + K562 + MO cultures (Control 2) were used as the blank control groups. The levels of RNM, TNF-β and IFN-γ, and KIR without DHT, TIP, and GSH were compared with the corresponding values after addition of the different doses of drugs. This allowed us to determine which dose of DHT and TIP and GSH could reverse the inhibitory effect of MO on the antitumor activity of NK cells effectively.
## 2.8.4. The Effect of Different Combinations of RNM Scavengers on the Activity of NK Cells
In the NK + MO (E : MO = 10 : 2) mixed-cell culture system, IL-2 (150 U/mL) and PHA (60 ug/mL) were administered first. After 24 hours of coculture, K562 cells were added at a ratio of E : T = 10 : 1, and at the same time, different combinations of DHT (20μmol/L), TIP (50 mol/L, 100 mol/L), and GSH (50 mol/L) were added to each well. The production of RNM was measured after another 6 hours, and the levels of TNF-β and IFN-γ were measured after 48 hours. In addition, MTT was used to measure KIR. All experiments were repeated three times. Meanwhile, IL-2/PHA + NK + K562 cultures and IL-2/PHA + NK + K562 + MO cultures were used as the blank control groups. The levels of RNM, TNF-β and IFN-γ, and NCC with DHT, TIP, and GSH were compared with the corresponding values after the addition of the different combinations of RNM scavengers. This allowed us to know which combination of DHT, TIP, and GSH could reverse the inhibitory effect of MO on the antitumor activity of NK cells effectively.
## 2.9. Statistical Analysis
SPSS 13.0 statistical software was used to analyze the results. The measurements were reported asx±SD. The LSD t-test was taken when the mean squares were regular, while the Dunnett T3 test was used to measure the heterogeneity of variance during the multiple comparisons of the means of all groups. P<0.05 was taken as the level of significance.
## 3. Results
### 3.1. The Effect of ONOO− on the Activity of NK Cells
After exogenous ONOO− was administered in the coculture system of NK and K562 cells, the RNM production increased (P<0.05), whereas the concentration of TNF-γ and IFN-β and the NCC was significantly decreased (P<0.05). The percentage of living NK cells was also decreased by the FCM at the 6th and the 24th hours. These data are shown in Table 1.Table 1
The effect of ONOO− on the activity of NK cell.
GroupsNO (6 h) (μmol/mL)NK (6 h) %NK (24 h) %TNF-β (24 h) pg/mLIFN-γ (24 h) pg/mLNCC (24 h) %Control11.29±5.026.63±6.4210.80±5.05E52.90±8.6189.87±1.9387.37±2.11183.08±7.45136.32±6.5T25.68±5.9611.96±5.8912.02±95E+T95.36±6.4590.57±2.5292.16±2.53198.64±7.33146.43±6.4966.32±4.34E+ONOO-264.85±9.16#80.41±2.52#81.05±1.58#86.07±7.51#58.46±6.12#T+ONOO-228.35±8.45
*11.07±5.5212.10±5.02E + T +ONOO-261.03±6.57∆80.97±1.677∆73.87±1.021∆91.68±6.00∆58.47±6.99∆43.84±3.42∆n=3; T is K562 cells, E is mononuclear enriching NK cells E/T = 10/1; #P<0.05, comparison between group E+ONOO- and group E; *P<0.05, comparison between group T+ONOO- and group T; ∆P<0.05 comparison between group E+T+ONOO- and group E+T.
### 3.2. The Effect of RNM Scavenger on NK-Cell Cytotoxicity Caused byONOO-
To explore the effect of RNM scavengers on NK-cell cytotoxicity caused byONOO-, we used three RNM scavengers. As shown in Table 2, we found that the production of RNM in the systems of NK and K562 cells decreased significantly after administration of TIP and GSH (P<0.05), while the percentage of living NK cells and the concentration of TNF-γ and IFN-β and NCC were significantly increased (P<0.05).Table 2
The effect of RNM scavengers on NK-cell cytotoxicity of ONOO−.
GroupsNO (6 h) (μmol/mL)NK (6 h) %NK (24 h) %TNF-β (24 h) pg/mLIFN-γ (24 h) pg/mLNCC (24 h)%E+T95.36±6.4590.57±2.5293.17±2.57198.64±7.33146.43±6.4967.47±2.64E+T+ONOO-261.03±6.5780.97±1.6871.87±1.0291.68±6.0058.47±7043.44±2.87E+T+DHC+ONOO-255.32±11.9382.27±1.3873.60±2.76118.73±5.56
*70.40±7.15
*45.26±3.31E + T + TIP +ONOO-179.65±7.00
*90.07±1.23
*91.13±3.67
*131.03±5.46
*76.80±4.91
*61.58±1.89
*E + T + GSH +ONOO-185.69±5.02
*89.87+0.35*88.03±1.46
*128.70±4.53
*75.12±6.45
*60.68±2.07
*n=3;2. ONOO− 200 umol/L; DHC 20 umol/L, TIP 50 umol/L, GSH 50 umol/L; T is K562 cells, E is mononuclear enriching NK cells E/T = 10/1; *P<0.05, compared with E+T+ONOO-.
### 3.3. The Effect of Endogenous RNM on the Activity of NK Cells
We know that exogenous RNM reduces the activity of NK cells. Furthermore, we investigated the effect of endogenous RNM on the activity of NK cells. The results are shown in Table3, when the number of NK cells was fixed. After activation by IL-2/PHA, the levels of IFN-γ and TNF-β were significantly higher than at the same E : T ratios in the absence of IL-2/PHA (P<0.05). With a further addition of MO, the level of IFN-γ and TNF-β did not increase (P>0.05), while the production of TNF-β increased to a small extent over the level prior to the addition of MO, and the NCC was lower.Table 3
The effect of endogenous RNM on the activity of NK cells.
GroupsNO (6 h) (μmol/mL)TNF-β (24 h) pg/mLIFN-γ (24 h) pg/mLNCC (24 h) %Control11.68±6.6211.08±5.4611.65±5.05T + IL-2/PHA22.29±5.6620.64±5.5713.18±5.86MO + IL-2/PHA114.37±7.4040.64+7.5947.76±6.57E + IL-2/PHA62.64±7.00361.62±12.27284.74±7.49E + MO + IL-2/PHA119.62±11.18114.09±7.4676.77±4.99MO + T + IL-2/PHA115.26±6.4733.31±6.3446.46±6.97E + T + IL-2/PHA79.63±7.04371.99±12.79275.08±9.6191.77±3.62E + T + MO + IL-2/PHA189.35±6.51
*110.91±10.01
*74.74±10.15
*60.39±5.39
*n=3; ONOO− 200 umol/L; DHT 20 umol/L, TIP 50 umol/L, GSH 50 umol/L; T is K562 cells, E is mononuclear enriching NK cells E/T = 10/1; *P<0.05, comparison between group NK + MO + K562 + IL-2 and group NK + K562 + IL-2.
### 3.4. The Effect of RNM Scavenger on NK-Cell-Mediated Killing of K562 Cells
As shown in Table4, the addition of TIP and GSH reduced the production of RNM and ROM and increased the production of TNF-γ and IFN-β and NCC significantly (P<0.05); however, DHT incubation did not reduce the production of RNM effectively (P>0.05).Table 4
The effect of RNM scavengers on NK-cell-mediated killing of K562 cells.
Groups·OH (6 h) (U/mL)NO (6 h) (μmol/mL)TNF-β (24 h) pg/mLIFN-γ (24 h) pg/mLNCC%IL-2/PHA + E + T74.41±3.0582.10±6.60381.47±10.64277.14±10.6190.64±3.06IL-2/PHA + E + T + MO256.08±8.52193.65±5.95114.39±7.4576.81±9.5061.29±2.22IL-2/PHA + E + T + MO + DHC101.37±5.56
*188.92±5.00134.10±6.68
*107.89±6.55
*72.20±4.10
*IL-2/PHA + E + T + MO + TIP107.02±6.39
*91.32±6.81
*185.00±4.51
*146.71±6.96
*84.31±4.56
*IL-2/PHA + E + T + MO + GSH108.69±6.05
*84.66±5.99
*181.91±5.92
*144.11±6.03
*81.65±3.09
*n=3; ONOO− 200 umol/L; DHT 20 umol/L, TIP 50 umol/L, GSH 50 umol/L; T is K562 cells, E is mononuclear enriching NK cells E/T = 10/1; *P<0.05, compared with IL-2/PHA + NK + K562 + MO.
### 3.5. The Effect of Different Dosages of RNM Scavengers on the Activity of NK Cells
To investigate whether different dosages of RNM scavengers affect the activity of NK cells, we selected different combinations, as shown in Figure1. According to Figure 1, with an increase of the dosage, the groups treated with TIP and GSH decreased the production of RNM and increased the levels of TNF-γ, IFN-β and NCC significantly (P<0.05). However, each group of DHT could not eliminate RNM (P>0.05).Figure 1
The effect of different dosages of RNM scavengers on the activity of NK cells. With an increase of the dosage, the groups treated with TIP and GSH decreased the production of RNM and increased the levels of TNF-γ, IFN-β and NCC significantly (P<0.05). However, each group of DHT could not eliminate RNM (P>0.05).
### 3.6. The Effect of Different Combinations of RNM Scavengers on the Activity of NK Cells
To investigate whether different combinations of RNM scavengers affect the activity of NK cells, we selected different combinations, as shown in Figure2. According to the result of Figure 2, we found that different combinations of RNM scavengers did not enhance the antineoplasmic activity of NK cells.Figure 2
The effect of different combinations of RNM scavengers on the activity of NK cells. Different combinations of RNM scavengers did not enhance the antineoplasmic activity of NK cells.
## 3.1. The Effect of ONOO− on the Activity of NK Cells
After exogenous ONOO− was administered in the coculture system of NK and K562 cells, the RNM production increased (P<0.05), whereas the concentration of TNF-γ and IFN-β and the NCC was significantly decreased (P<0.05). The percentage of living NK cells was also decreased by the FCM at the 6th and the 24th hours. These data are shown in Table 1.Table 1
The effect of ONOO− on the activity of NK cell.
GroupsNO (6 h) (μmol/mL)NK (6 h) %NK (24 h) %TNF-β (24 h) pg/mLIFN-γ (24 h) pg/mLNCC (24 h) %Control11.29±5.026.63±6.4210.80±5.05E52.90±8.6189.87±1.9387.37±2.11183.08±7.45136.32±6.5T25.68±5.9611.96±5.8912.02±95E+T95.36±6.4590.57±2.5292.16±2.53198.64±7.33146.43±6.4966.32±4.34E+ONOO-264.85±9.16#80.41±2.52#81.05±1.58#86.07±7.51#58.46±6.12#T+ONOO-228.35±8.45
*11.07±5.5212.10±5.02E + T +ONOO-261.03±6.57∆80.97±1.677∆73.87±1.021∆91.68±6.00∆58.47±6.99∆43.84±3.42∆n=3; T is K562 cells, E is mononuclear enriching NK cells E/T = 10/1; #P<0.05, comparison between group E+ONOO- and group E; *P<0.05, comparison between group T+ONOO- and group T; ∆P<0.05 comparison between group E+T+ONOO- and group E+T.
## 3.2. The Effect of RNM Scavenger on NK-Cell Cytotoxicity Caused byONOO-
To explore the effect of RNM scavengers on NK-cell cytotoxicity caused byONOO-, we used three RNM scavengers. As shown in Table 2, we found that the production of RNM in the systems of NK and K562 cells decreased significantly after administration of TIP and GSH (P<0.05), while the percentage of living NK cells and the concentration of TNF-γ and IFN-β and NCC were significantly increased (P<0.05).Table 2
The effect of RNM scavengers on NK-cell cytotoxicity of ONOO−.
GroupsNO (6 h) (μmol/mL)NK (6 h) %NK (24 h) %TNF-β (24 h) pg/mLIFN-γ (24 h) pg/mLNCC (24 h)%E+T95.36±6.4590.57±2.5293.17±2.57198.64±7.33146.43±6.4967.47±2.64E+T+ONOO-261.03±6.5780.97±1.6871.87±1.0291.68±6.0058.47±7043.44±2.87E+T+DHC+ONOO-255.32±11.9382.27±1.3873.60±2.76118.73±5.56
*70.40±7.15
*45.26±3.31E + T + TIP +ONOO-179.65±7.00
*90.07±1.23
*91.13±3.67
*131.03±5.46
*76.80±4.91
*61.58±1.89
*E + T + GSH +ONOO-185.69±5.02
*89.87+0.35*88.03±1.46
*128.70±4.53
*75.12±6.45
*60.68±2.07
*n=3;2. ONOO− 200 umol/L; DHC 20 umol/L, TIP 50 umol/L, GSH 50 umol/L; T is K562 cells, E is mononuclear enriching NK cells E/T = 10/1; *P<0.05, compared with E+T+ONOO-.
## 3.3. The Effect of Endogenous RNM on the Activity of NK Cells
We know that exogenous RNM reduces the activity of NK cells. Furthermore, we investigated the effect of endogenous RNM on the activity of NK cells. The results are shown in Table3, when the number of NK cells was fixed. After activation by IL-2/PHA, the levels of IFN-γ and TNF-β were significantly higher than at the same E : T ratios in the absence of IL-2/PHA (P<0.05). With a further addition of MO, the level of IFN-γ and TNF-β did not increase (P>0.05), while the production of TNF-β increased to a small extent over the level prior to the addition of MO, and the NCC was lower.Table 3
The effect of endogenous RNM on the activity of NK cells.
GroupsNO (6 h) (μmol/mL)TNF-β (24 h) pg/mLIFN-γ (24 h) pg/mLNCC (24 h) %Control11.68±6.6211.08±5.4611.65±5.05T + IL-2/PHA22.29±5.6620.64±5.5713.18±5.86MO + IL-2/PHA114.37±7.4040.64+7.5947.76±6.57E + IL-2/PHA62.64±7.00361.62±12.27284.74±7.49E + MO + IL-2/PHA119.62±11.18114.09±7.4676.77±4.99MO + T + IL-2/PHA115.26±6.4733.31±6.3446.46±6.97E + T + IL-2/PHA79.63±7.04371.99±12.79275.08±9.6191.77±3.62E + T + MO + IL-2/PHA189.35±6.51
*110.91±10.01
*74.74±10.15
*60.39±5.39
*n=3; ONOO− 200 umol/L; DHT 20 umol/L, TIP 50 umol/L, GSH 50 umol/L; T is K562 cells, E is mononuclear enriching NK cells E/T = 10/1; *P<0.05, comparison between group NK + MO + K562 + IL-2 and group NK + K562 + IL-2.
## 3.4. The Effect of RNM Scavenger on NK-Cell-Mediated Killing of K562 Cells
As shown in Table4, the addition of TIP and GSH reduced the production of RNM and ROM and increased the production of TNF-γ and IFN-β and NCC significantly (P<0.05); however, DHT incubation did not reduce the production of RNM effectively (P>0.05).Table 4
The effect of RNM scavengers on NK-cell-mediated killing of K562 cells.
Groups·OH (6 h) (U/mL)NO (6 h) (μmol/mL)TNF-β (24 h) pg/mLIFN-γ (24 h) pg/mLNCC%IL-2/PHA + E + T74.41±3.0582.10±6.60381.47±10.64277.14±10.6190.64±3.06IL-2/PHA + E + T + MO256.08±8.52193.65±5.95114.39±7.4576.81±9.5061.29±2.22IL-2/PHA + E + T + MO + DHC101.37±5.56
*188.92±5.00134.10±6.68
*107.89±6.55
*72.20±4.10
*IL-2/PHA + E + T + MO + TIP107.02±6.39
*91.32±6.81
*185.00±4.51
*146.71±6.96
*84.31±4.56
*IL-2/PHA + E + T + MO + GSH108.69±6.05
*84.66±5.99
*181.91±5.92
*144.11±6.03
*81.65±3.09
*n=3; ONOO− 200 umol/L; DHT 20 umol/L, TIP 50 umol/L, GSH 50 umol/L; T is K562 cells, E is mononuclear enriching NK cells E/T = 10/1; *P<0.05, compared with IL-2/PHA + NK + K562 + MO.
## 3.5. The Effect of Different Dosages of RNM Scavengers on the Activity of NK Cells
To investigate whether different dosages of RNM scavengers affect the activity of NK cells, we selected different combinations, as shown in Figure1. According to Figure 1, with an increase of the dosage, the groups treated with TIP and GSH decreased the production of RNM and increased the levels of TNF-γ, IFN-β and NCC significantly (P<0.05). However, each group of DHT could not eliminate RNM (P>0.05).Figure 1
The effect of different dosages of RNM scavengers on the activity of NK cells. With an increase of the dosage, the groups treated with TIP and GSH decreased the production of RNM and increased the levels of TNF-γ, IFN-β and NCC significantly (P<0.05). However, each group of DHT could not eliminate RNM (P>0.05).
## 3.6. The Effect of Different Combinations of RNM Scavengers on the Activity of NK Cells
To investigate whether different combinations of RNM scavengers affect the activity of NK cells, we selected different combinations, as shown in Figure2. According to the result of Figure 2, we found that different combinations of RNM scavengers did not enhance the antineoplasmic activity of NK cells.Figure 2
The effect of different combinations of RNM scavengers on the activity of NK cells. Different combinations of RNM scavengers did not enhance the antineoplasmic activity of NK cells.
## 4. Discussion
ONOO− is generated by the NO and O2- reaction, which can be produced by many cells in our body. Under normal conditions, ONOO− is believed to have a primarily physiological function. However, under pathological conditions, the oxidation and injury role is activated for the increased ONOO− stimulation by inflammatory factor [11]. By detecting the expression of RNM-induced genes, Nittler et al. [12] discovered that RNMS take part in the metabolic process of many types of cells, including T/NK cells in the following ways: (1) by oxidizing and nitrifying DNA residues and deaminating them to induce DNA damage and interfere with DNA repair; (2) by modifying proteins in the electron transfer chain to inhibit cell respiration, promoting the reduction of coenzyme Q10 to increase the production of active oxygen and reducing the proton current rate through the mitochondria to reduce the ATP content of the cell; (3) by mediating the nitrogenation and nitrosylation of proteins and interfering with their correct folding and degradation, thereby influencing cellular activity. Our research showed that the percentage of live NK cells decreased after the addition of synthetic ONOO− into the culture system of NK and K562 cells, which indicated that the ONOO− can kill NK cells directly. Not only did the lymphokines TNF-β, IFN-γ in NK cells decrease significantly, but KIR also decreased dramatically (P<0.01) (Table 1). All the results indicated that exogenous ONOO− had a cell-killing effect on NK cells and inhibited the anti-K562 cells function of NK cells.The mononuclear phagocyte, which protects the body from pathogenic factors, is an important component of acute and chronic inflammatory reactions. Once phagocytosed foreign bodies are recognized, the phagocyte will have a respiratory burst, which is manifested by an increase in oxygen consumption, enhanced metabolic activity of the pentose phosphate pathway, and the generation of nitrogen and oxygen free radicals, among which the NO can transfer the activating signal from the cytoplasm to endometaphase through the cell signaling pathway and induce the expression of related genes to activate the inflammatory response. This type of oxidizing bactericide is nonspecific, and normal tissues around mononuclear macrophages could also be injured [13]. In this study, IL-2 and PHA were added into the NK + K562 + MO culture system, resulting in significant enhancement of RNM, while the NCC decreased from 91.77% to 60.39% (P<0.01). Cytokine TNF-β IFN-γ also decreased after the addition of IL-2/PHA (Table 3). These results suggested that RNM produced by the MO cell respiratory burst inhibits NK-cell activity. This result is consistent with that of decreased NK activity and lower NCC induced by addition of exogenous ONOO-. It has been shown that a large number of MO cells can be detected in and around tumors [14]. This study also showed that IL-2 activates T/NK cells in vivo and also induces MO cells to produce a large amount of RNMS. Such a nonspecific killing effect inhibits the activity of T/NK cells, which may explain the low efficiency of adoptive tumor immunotherapy when using the T/NK cell as an effector cell and IL-2 as an activator.A central goal of tumor therapy is strengthening the immune system in order to eliminate the microresidue of the tumor. Thoren et al. had shown when using IL-2 for treatment that histamine is an ideal immunomodulator that can counter ROM inhibition of the antitumor activity of NK cells through the H2 receptor [15]. 
In our previous studies, we also demonstrated that tiopronin was superior to DHT in reversing the suppression by ROM of the antitumor activity of T/NK cells. In this study, TIP and GSH were chosen as scavengers of RNM and were compared with DHT. By adding exogenous ONOO−, TIP, GSH, and DHT into the coculture of NK cells and K562 cells, as shown in Table 2, both tiopronin and glutathione remove RNM directly, which protects NK cells and reverses ONOO− inhibition of NK-cell activity. However, in the DHT group, no change was found in the RNM and NCC, suggesting that ONOO−cannot be cleared by DHT directly. Table 4 shows the effects of the three kinds of scavengers in removing endogenous RNM generated by MO cells, which are similar to the effect on exogenous ONOO−. Tiopronin and glutathione can significantly reduce the RNM (P<0.05), while DHT has no effect on RNM production (P>0.05). All three drugs can reduce the ROM output, increase the production of TNF-β and IFN-γ, and enhance the rate of inhibition of K562 cells (P<0.05). GSH is tripeptide that is composed of γ-glutamic acid, and cysteine, and glycine, with a molecular structure that contains a nonprotein thiol, which can be catalyzed by itself or by glutathione transferase system (GST-S), regulating intracellular oxidation-reduction systems and reducing the content of RNM and ROM to play its pharmacological role. With the self-contained free SH, the tiopronin cannot only drive reversible synthesis of disulfide compounds with RNM and ROM, but it can also activate Cu, Zn-superoxide dismutase (SOD) to enhance its free radical scavenging role, maintain the balance of glutathione peptide in vivo, clear metal ions, and regulate the antioxidative enzyme system [16]. Histamine dihydrochloride plays an indirect role by blocking the generation of ROM through the H2 receptor. Therefore, it is limited by the quantity and the function of the H2 receptor. Therefore, tiopronin and glutathione perform better than histamine dihydrochloride in clearing the RNM. At the same time, because of its toxic side effects, histamine dihydrochloride is limited in clinical application, while the other two drugs have widely been used in clinical treatment, with few toxic side effects.Figure1 showed that the effect of TIP and GSH on clearing RNM and protecting NK cells is dose dependent. It also means that TIP and GSH reverse the inhibition of monocytes on the activity of NK cells. Compared with a single scavenger, two or three scavengers combined can reduce NO and increase the production of TNF-β and IFN-γ; given that there was no change in NCC (P>0.05), it appears that different combinations of RNM scavengers cannot protect NK cells better. Still, the reason is still not clear.In conclusion, endogenous and exogenous reactive nitrogen metabolites can act directly on cells, kill NK cells significantly, then reduce NK cells against K562 cells and activate TNF-β, IFN-γ, and other cytokines. There are a large number of MO cells in and around malignant tumors, which can cause respiratory burst, especially when activated, thus generating lots of RNMS, inducing NK cells apoptosis and significantly inhibiting NK cells activity. Tiopronin and glutathione are more effective than histamine in clearing RNMS and reversing their inhibitory effect on NK cells in anti-K562 cells with relatively minor toxic side effects and in a dose-dependent fashion. 
Therefore, they can be used clinically as a better immune adjuvant to improve the efficacy of adoptive immunotherapy for minimal residual tumor/leukemia. However, different combinations of RNM scavengers cannot better protect NK cells.
---
*Source: 101737-2012-03-12.xml* | 101737-2012-03-12_101737-2012-03-12.md | 44,354 | Effects of Reactive Nitrogen Scavengers on NK-Cell-Mediated Killing of K562 Cells | Yili Zeng; Qinmiao Huang; Meizhu Zheng; Jianxin Guo; Jingxin Pan | Journal of Biomedicine and Biotechnology
(2012) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2012/101737 | 101737-2012-03-12.xml | ---
## Abstract
This study explored the effects of reactive nitrogen metabolites (RNMS) on natural-killer- (NK-) cell-mediated killing of K562 cells and the influence of RNM scavengers, such as tiopronin (TIP), glutamylcysteinylglycine (GSH), and histamine dihydrochloride (DHT), on reversing the suppressing effect of RNM. We administered exogenous and endogenous RNM in the NK + K562 culture system and then added RNM scavengers. The concentrations of RNM, TNF-β and IFN-γ, and NK-cell cytotoxicity (NCC) and the percentage of living NK cells were then examined. We found that both exogenous and endogenous RNM caused the KIR to decrease (P<0.01); however, RNM scavengers such as TIP and GSH rescued this phenomenon dose dependently. In conclusion, our data suggests that RNM scavengers such as TIP and GSH enhance the antineoplasmic activity of NK cells.
---
## Body
## 1. Introduction
There is a great number of monocytes/macrophages (MO) and NK cells within and outside malignant tumors. Compared with other sections of body, the function of NK cells in a tumor and its ambient tissue is remarkably decreased [1]. Current antitumor immunotherapies mainly use adoptive immunotherapy (AIT), which involves cells such as cytotoxic T lymphocytes (CTLS), lymphokine-activated killer cells (LAK cells), tumor-infiltrating lymphocytes (TILS), multicytokine-induced killer cells (CIK cells), donor lymphocyte infusions (DLIS), antineoplastic lymphocyte clones, and haplotype lymphocyte infusions. T cells and NK cells are the major effective cells, whereas interleukin-2 (IL-2) is the main activator of T/NK cells. However, most studies using IL-2 alone to treat leukemia in vivo have shown low efficacy, with only a few patients achieving remission. The main reason for this result is that certain monocytes/macrophages (MO) that can inhibit the antitumor activity of lymphocytes were quantitatively shown to exist in and around the tumor tissue. MO participate in tumor-induced immune suppression by secreting cytokines, particularly reactive oxygen metabolites (ROMS) and reactive nitrogen metabolites (RNMS) [2]. Studies have confirmed that the ROM yielded by MO inhibit the antitumor activity of lymphocytes when respiratory bursts occur. DHT, TIP, and GSH can reverse the inhibition of the antitumor activity of NK cells by ROM [3, 4]. In our previous studies, we also demonstrated that TIP and GSH were superior to DHT in reversing the suppression of the antitumor activity of T/NK cells by ROM [5]. When respiratory bursts occur, MO yield not only ROM but also RNM, which include nitrogen monoxide (NO), NO2, NO2-, NO3-, and peroxynitrite (ONOO−). The function of RNM is similar to that of ROM; however, RNM also have nitrogenation activity. Peroxynitrite (ONOO−), once acidified, immediately converts to peroxy-nitrous acid in the excited state, which has a stronger oxidizing activity and simultaneously yields both nitrogen dioxide (NO2) and OH analogs. These substances are more toxic than ROM [6]. Kono et al. [7] speculated that, in cancer-bearing animals, reactive nitrogen species induce the downregulation of CD3+, which is an important signal transduction molecule in T/NK cells. Thus, the antitumor immunosuppression caused by RNM should not be neglected. The immunotolerance to tumors induced by RNM may be similar to or stronger than that of ROM. In our previous studies, we had demonstrated that ROM produced by MO result in tumor immunosuppression [8]. However, studies examining whether RNM causes antitumor immunosuppression have not yet been reported. This research investigates the effects of the exogenous and endogenous RNM on NK-cell-mediated killing of K562 cells and the influence of RNM scavengers such as TIP, GSH and DHT on reversing the suppressing effect of RNM.
## 2. Materials and Methods
### 2.1. Materials
The K562 cell line was provided by the Union Hospital of Fujian Province. Fresh, enriching leukocytes from healthy patients were obtained from the Quanzhou City Blood Center. The reagents and their manufacturers were as follows: NK Cell Negative Isolation Kit, Dynal; MTT, Trypan Blue, Propidium Iodide, Sigma; CFSE, Dojindo (Japan) interleukin-2, Double Heron (Beijing); phytohemagglutinin, Yihua (Shanghai); histamine dihydrochloride, Sigma; tiopronin, Henan Xinyi Medicine Industry, Ltd.; hydroxy radical detection kit and nitrogen monoxide detection kit, Jiancheng (Nanjing); human IFN-γ ELISA Kit, Xinbosheng (Guangzhou); TNF-β ELISA Kit, Boster (Wuhan).
### 2.2. Isolation of Mononuclear Cells Rich in NK Cells (E) [9]
After PBMc were isolated using density gradient centrifugation, they were incubated with the immunomagnetic bead of the NK Cell Negative Isolation Kit at a low temperature. Then, the NK cells were isolated by the magnetic sorption method. Flow cytometry (FCM) was applied to detect the cells marked with CD3−FITC/CD56+16−PE. There were 85% of CD−/CD56+/CD16+ cells, and more than 95% of cells were shown to be alive via the trypan blue exclusion assay.
### 2.3. Isolation of Mononuclear Cells Rich in Monocytes (MO)
After isolating by the density gradient, the PBMc were cultured adherently to isolate the monocytes. Then, they were identified by a nonspecific carboxylesterase staining method. The MO constituted of 76.3% of the total cells, and more than 95% of cells were shown to be alive via the trypan blue exclusion assay.
### 2.4. Viable NK-Cell Counting (CFSE-PI Double Staining Method)
CFSE labeling: a 1μm solution that contained 5 mM CFSE was diluted with 1 mL PBS containing 10% FCS, followed by incubation with 1 mL of the cells at 37°C for 5–10 minutes. PI staining: the cells were stained with PI for 10 minutes at a concentration of 2.5 μg/mL. Stained cells were analyzed using COULTER flow cytometry. The fluorescence intensity of CFSE and PI was detected by the FL1 and FL3 channels, respectively. The NK cells were firstly gated as CFSE-positive cells. The CFSE and PI double-positive cells (upper-right quadrant) were dead cells, and viable NK cells were shown in the upper-left quadrant. The percentage of viable NK cells was the same as the percentage of plots in upper-left quadrant.
### 2.5. ROM Assay
ROM production was assayed by spectral luminosity chromatometry following the instructions on the hydroxy radical detection kit (Jiancheng, Nanjing).
### 2.6. RNM Assay (Nitrate Reductase Method)
RNM production was assayed indirectly using chromatometry following the instructions of the nitric oxide detection kit (Jiancheng, Nanjing).
### 2.7. NK-Cell Cytotoxicity (NCC)
NCC was assayed by the MTT method.
### 2.8. TNF-β and IFN-γ Assay
The IFN-γ and TNF-β levels were assayed to indirectly reflect the activity of NK cells by double antibody sandwich enzyme-labeled immunosorbent assay, according to the manufacturers’ instructions. The concentration gradients of the standard preparations and their corresponding optical density results were imported into the program SPSS 13.0 to generate standard calibration equations for conversion of OD values to concentrations. These calibration equations were used to determine the concentrations of IFN-γ and TNF-β.
#### 2.8.1. The Effect of ONOO− on the Activity of NK Cells
NK cells (E) and K562 cells (T) were cultured in 96-well plates at a ratio (E : T) of 10 : 1. The cells were cocultured at 37°C in an atmosphere of 5% carbon dioxide (CO2) and saturated humidity. RNM production was measured 6 hours later, and the TNF-β and IFN-γ levels were determined 24 hours later. In addition, the number of viable NK cells was measured by FCM 6 and 24 hours later. The equation for NCC was NCC = [1 − (ODE/T − ODE)/ODT] × 100% [10]. All measurements were carried out in triplicate. NK cells, K562 cells, and NK cells + K562 cells were used as blank groups, and DMEM containing 10% FBS was used as the holo-blank sample. The production of RNM and the extent of the inhibition of NK on K562 cells were measured at the indicated time points, and the data were analyzed to evaluate the relationship between RNM levels and the activity of NK cells.
#### 2.8.2. The Effect of Endogenous RNM on the Activity of NK Cell
IL-2 (150 U/mL) and PHA (60 g/mL) were administered to cocultures of NK cells and MO at a ratio (E : MO) of 10 : 2. After 24 hours of coculture, K562 cells were added at an E : T ratio of 10 : 1. RNM production was measured 6 hours later, while TNF-β and IFN-γ levels and KIR were measured 24 hours later. Each group was tested three times. Blank groups containing IL-2/PHA were the same as those described in paragraph 2.3.
#### 2.8.3. The Effect of Different Dosages of RNM Scavengers on the Activity of NK Cells
In the NK + MO (E : MO = 10 : 2) mixed-cell culture system, IL-2 (150 U/mL) and PHA (60 ug/mL) were first administered. After 24 hours of coculture, K562 cells were added at a ratio of E : T = 10 : 1. At the same time, different concentrations of DHT (10 umol/L, 20 umol/L, 50 umol/L) and TIP (125 mol/L, 50 mol/L, 100 mol/L, 250 mol/L) and GSH (25 mol/L, 50 mol/L, 100 mol/L, 250 mol/L) were added to separate wells. The production of RNM was measured after another 6 hours, and the levels of TNF-β and IFN-γ were measured after 48 hours. In addition, MTT was used to measure KIR. All the experiments for each group were repeated three times. Meanwhile, IL-2/PHA + NK + K562 cultures (Control 1) and IL-2/PHA + NK + K562 + MO cultures (Control 2) were used as the blank control groups. The levels of RNM, TNF-β and IFN-γ, and KIR without DHT, TIP, and GSH were compared with the corresponding values after addition of the different doses of drugs. This allowed us to determine which dose of DHT and TIP and GSH could reverse the inhibitory effect of MO on the antitumor activity of NK cells effectively.
#### 2.8.4. The Effect of Different Combinations of RNM Scavengers on the Activity of NK Cells
In the NK + MO (E : MO = 10 : 2) mixed-cell culture system, IL-2 (150 U/mL) and PHA (60 ug/mL) were administered first. After 24 hours of coculture, K562 cells were added at a ratio of E : T = 10 : 1, and at the same time, different combinations of DHT (20μmol/L), TIP (50 mol/L, 100 mol/L), and GSH (50 mol/L) were added to each well. The production of RNM was measured after another 6 hours, and the levels of TNF-β and IFN-γ were measured after 48 hours. In addition, MTT was used to measure KIR. All experiments were repeated three times. Meanwhile, IL-2/PHA + NK + K562 cultures and IL-2/PHA + NK + K562 + MO cultures were used as the blank control groups. The levels of RNM, TNF-β and IFN-γ, and NCC with DHT, TIP, and GSH were compared with the corresponding values after the addition of the different combinations of RNM scavengers. This allowed us to know which combination of DHT, TIP, and GSH could reverse the inhibitory effect of MO on the antitumor activity of NK cells effectively.
### 2.9. Statistical Analysis
SPSS 13.0 statistical software was used to analyze the results. The measurements were reported asx±SD. The LSD t-test was taken when the mean squares were regular, while the Dunnett T3 test was used to measure the heterogeneity of variance during the multiple comparisons of the means of all groups. P<0.05 was taken as the level of significance.
## 2.1. Materials
The K562 cell line was provided by the Union Hospital of Fujian Province. Fresh, enriching leukocytes from healthy patients were obtained from the Quanzhou City Blood Center. The reagents and their manufacturers were as follows: NK Cell Negative Isolation Kit, Dynal; MTT, Trypan Blue, Propidium Iodide, Sigma; CFSE, Dojindo (Japan) interleukin-2, Double Heron (Beijing); phytohemagglutinin, Yihua (Shanghai); histamine dihydrochloride, Sigma; tiopronin, Henan Xinyi Medicine Industry, Ltd.; hydroxy radical detection kit and nitrogen monoxide detection kit, Jiancheng (Nanjing); human IFN-γ ELISA Kit, Xinbosheng (Guangzhou); TNF-β ELISA Kit, Boster (Wuhan).
## 2.2. Isolation of Mononuclear Cells Rich in NK Cells (E) [9]
After PBMc were isolated using density gradient centrifugation, they were incubated with the immunomagnetic bead of the NK Cell Negative Isolation Kit at a low temperature. Then, the NK cells were isolated by the magnetic sorption method. Flow cytometry (FCM) was applied to detect the cells marked with CD3−FITC/CD56+16−PE. There were 85% of CD−/CD56+/CD16+ cells, and more than 95% of cells were shown to be alive via the trypan blue exclusion assay.
## 2.3. Isolation of Mononuclear Cells Rich in Monocytes (MO)
After isolating by the density gradient, the PBMc were cultured adherently to isolate the monocytes. Then, they were identified by a nonspecific carboxylesterase staining method. The MO constituted of 76.3% of the total cells, and more than 95% of cells were shown to be alive via the trypan blue exclusion assay.
## 2.4. Viable NK-Cell Counting (CFSE-PI Double Staining Method)
CFSE labeling: a 1μm solution that contained 5 mM CFSE was diluted with 1 mL PBS containing 10% FCS, followed by incubation with 1 mL of the cells at 37°C for 5–10 minutes. PI staining: the cells were stained with PI for 10 minutes at a concentration of 2.5 μg/mL. Stained cells were analyzed using COULTER flow cytometry. The fluorescence intensity of CFSE and PI was detected by the FL1 and FL3 channels, respectively. The NK cells were firstly gated as CFSE-positive cells. The CFSE and PI double-positive cells (upper-right quadrant) were dead cells, and viable NK cells were shown in the upper-left quadrant. The percentage of viable NK cells was the same as the percentage of plots in upper-left quadrant.
## 2.5. ROM Assay
ROM production was assayed by spectral luminosity chromatometry following the instructions on the hydroxy radical detection kit (Jiancheng, Nanjing).
## 2.6. RNM Assay (Nitrate Reductase Method)
RNM production was assayed indirectly using chromatometry following the instructions of the nitric oxide detection kit (Jiancheng, Nanjing).
## 2.7. NK-Cell Cytotoxicity (NCC)
NCC was assayed by the MTT method.
## 2.8. TNF-β and IFN-γ Assay
The IFN-γ and TNF-β levels were assayed to indirectly reflect the activity of NK cells by double antibody sandwich enzyme-labeled immunosorbent assay, according to the manufacturers’ instructions. The concentration gradients of the standard preparations and their corresponding optical density results were imported into the program SPSS 13.0 to generate standard calibration equations for conversion of OD values to concentrations. These calibration equations were used to determine the concentrations of IFN-γ and TNF-β.
### 2.8.1. The Effect of ONOO− on the Activity of NK Cells
NK cells (E) and K562 cells (T) were cultured in 96-well plates at a ratio (E : T) of 10 : 1. The cells were cocultured at 37°C in an atmosphere of 5% carbon dioxide (CO2) and saturated humidity. RNM production was measured 6 hours later, and the TNF-β and IFN-γ levels were determined 24 hours later. In addition, the number of viable NK cells was measured by FCM 6 and 24 hours later. The equation for NCC was NCC = [1 − (ODE/T − ODE)/ODT] × 100% [10]. All measurements were carried out in triplicate. NK cells, K562 cells, and NK cells + K562 cells were used as blank groups, and DMEM containing 10% FBS was used as the holo-blank sample. The production of RNM and the extent of the inhibition of NK on K562 cells were measured at the indicated time points, and the data were analyzed to evaluate the relationship between RNM levels and the activity of NK cells.
### 2.8.2. The Effect of Endogenous RNM on the Activity of NK Cell
IL-2 (150 U/mL) and PHA (60 g/mL) were administered to cocultures of NK cells and MO at a ratio (E : MO) of 10 : 2. After 24 hours of coculture, K562 cells were added at an E : T ratio of 10 : 1. RNM production was measured 6 hours later, while TNF-β and IFN-γ levels and KIR were measured 24 hours later. Each group was tested three times. Blank groups containing IL-2/PHA were the same as those described in paragraph 2.3.
### 2.8.3. The Effect of Different Dosages of RNM Scavengers on the Activity of NK Cells
In the NK + MO (E : MO = 10 : 2) mixed-cell culture system, IL-2 (150 U/mL) and PHA (60 ug/mL) were first administered. After 24 hours of coculture, K562 cells were added at a ratio of E : T = 10 : 1. At the same time, different concentrations of DHT (10 umol/L, 20 umol/L, 50 umol/L) and TIP (125 mol/L, 50 mol/L, 100 mol/L, 250 mol/L) and GSH (25 mol/L, 50 mol/L, 100 mol/L, 250 mol/L) were added to separate wells. The production of RNM was measured after another 6 hours, and the levels of TNF-β and IFN-γ were measured after 48 hours. In addition, MTT was used to measure KIR. All the experiments for each group were repeated three times. Meanwhile, IL-2/PHA + NK + K562 cultures (Control 1) and IL-2/PHA + NK + K562 + MO cultures (Control 2) were used as the blank control groups. The levels of RNM, TNF-β and IFN-γ, and KIR without DHT, TIP, and GSH were compared with the corresponding values after addition of the different doses of drugs. This allowed us to determine which dose of DHT and TIP and GSH could reverse the inhibitory effect of MO on the antitumor activity of NK cells effectively.
### 2.8.4. The Effect of Different Combinations of RNM Scavengers on the Activity of NK Cells
In the NK + MO (E : MO = 10 : 2) mixed-cell culture system, IL-2 (150 U/mL) and PHA (60 ug/mL) were administered first. After 24 hours of coculture, K562 cells were added at a ratio of E : T = 10 : 1, and at the same time, different combinations of DHT (20μmol/L), TIP (50 mol/L, 100 mol/L), and GSH (50 mol/L) were added to each well. The production of RNM was measured after another 6 hours, and the levels of TNF-β and IFN-γ were measured after 48 hours. In addition, MTT was used to measure KIR. All experiments were repeated three times. Meanwhile, IL-2/PHA + NK + K562 cultures and IL-2/PHA + NK + K562 + MO cultures were used as the blank control groups. The levels of RNM, TNF-β and IFN-γ, and NCC with DHT, TIP, and GSH were compared with the corresponding values after the addition of the different combinations of RNM scavengers. This allowed us to know which combination of DHT, TIP, and GSH could reverse the inhibitory effect of MO on the antitumor activity of NK cells effectively.
## 2.8.1. The Effect of ONOO− on the Activity of NK Cells
NK cells (E) and K562 cells (T) were cultured in 96-well plates at a ratio (E : T) of 10 : 1. The cells were cocultured at 37°C in an atmosphere of 5% carbon dioxide (CO2) and saturated humidity. RNM production was measured 6 hours later, and the TNF-β and IFN-γ levels were determined 24 hours later. In addition, the number of viable NK cells was measured by FCM 6 and 24 hours later. The equation for NCC was NCC = [1 − (ODE/T − ODE)/ODT] × 100% [10]. All measurements were carried out in triplicate. NK cells, K562 cells, and NK cells + K562 cells were used as blank groups, and DMEM containing 10% FBS was used as the holo-blank sample. The production of RNM and the extent of the inhibition of NK on K562 cells were measured at the indicated time points, and the data were analyzed to evaluate the relationship between RNM levels and the activity of NK cells.
## 2.8.2. The Effect of Endogenous RNM on the Activity of NK Cell
IL-2 (150 U/mL) and PHA (60 g/mL) were administered to cocultures of NK cells and MO at a ratio (E : MO) of 10 : 2. After 24 hours of coculture, K562 cells were added at an E : T ratio of 10 : 1. RNM production was measured 6 hours later, while TNF-β and IFN-γ levels and KIR were measured 24 hours later. Each group was tested three times. Blank groups containing IL-2/PHA were the same as those described in paragraph 2.3.
## 2.8.3. The Effect of Different Dosages of RNM Scavengers on the Activity of NK Cells
In the NK + MO (E : MO = 10 : 2) mixed-cell culture system, IL-2 (150 U/mL) and PHA (60 ug/mL) were first administered. After 24 hours of coculture, K562 cells were added at a ratio of E : T = 10 : 1. At the same time, different concentrations of DHT (10 umol/L, 20 umol/L, 50 umol/L) and TIP (125 mol/L, 50 mol/L, 100 mol/L, 250 mol/L) and GSH (25 mol/L, 50 mol/L, 100 mol/L, 250 mol/L) were added to separate wells. The production of RNM was measured after another 6 hours, and the levels of TNF-β and IFN-γ were measured after 48 hours. In addition, MTT was used to measure KIR. All the experiments for each group were repeated three times. Meanwhile, IL-2/PHA + NK + K562 cultures (Control 1) and IL-2/PHA + NK + K562 + MO cultures (Control 2) were used as the blank control groups. The levels of RNM, TNF-β and IFN-γ, and KIR without DHT, TIP, and GSH were compared with the corresponding values after addition of the different doses of drugs. This allowed us to determine which dose of DHT and TIP and GSH could reverse the inhibitory effect of MO on the antitumor activity of NK cells effectively.
## 2.8.4. The Effect of Different Combinations of RNM Scavengers on the Activity of NK Cells
In the NK + MO (E : MO = 10 : 2) mixed-cell culture system, IL-2 (150 U/mL) and PHA (60 ug/mL) were administered first. After 24 hours of coculture, K562 cells were added at a ratio of E : T = 10 : 1, and at the same time, different combinations of DHT (20μmol/L), TIP (50 mol/L, 100 mol/L), and GSH (50 mol/L) were added to each well. The production of RNM was measured after another 6 hours, and the levels of TNF-β and IFN-γ were measured after 48 hours. In addition, MTT was used to measure KIR. All experiments were repeated three times. Meanwhile, IL-2/PHA + NK + K562 cultures and IL-2/PHA + NK + K562 + MO cultures were used as the blank control groups. The levels of RNM, TNF-β and IFN-γ, and NCC with DHT, TIP, and GSH were compared with the corresponding values after the addition of the different combinations of RNM scavengers. This allowed us to know which combination of DHT, TIP, and GSH could reverse the inhibitory effect of MO on the antitumor activity of NK cells effectively.
## 2.9. Statistical Analysis
SPSS 13.0 statistical software was used to analyze the results. The measurements were reported asx±SD. The LSD t-test was taken when the mean squares were regular, while the Dunnett T3 test was used to measure the heterogeneity of variance during the multiple comparisons of the means of all groups. P<0.05 was taken as the level of significance.
## 3. Results
### 3.1. The Effect of ONOO− on the Activity of NK Cells
After exogenous ONOO− was administered in the coculture system of NK and K562 cells, the RNM production increased (P<0.05), whereas the concentration of TNF-γ and IFN-β and the NCC was significantly decreased (P<0.05). The percentage of living NK cells was also decreased by the FCM at the 6th and the 24th hours. These data are shown in Table 1.Table 1
The effect of ONOO− on the activity of NK cell.
GroupsNO (6 h) (μmol/mL)NK (6 h) %NK (24 h) %TNF-β (24 h) pg/mLIFN-γ (24 h) pg/mLNCC (24 h) %Control11.29±5.026.63±6.4210.80±5.05E52.90±8.6189.87±1.9387.37±2.11183.08±7.45136.32±6.5T25.68±5.9611.96±5.8912.02±95E+T95.36±6.4590.57±2.5292.16±2.53198.64±7.33146.43±6.4966.32±4.34E+ONOO-264.85±9.16#80.41±2.52#81.05±1.58#86.07±7.51#58.46±6.12#T+ONOO-228.35±8.45
*11.07±5.5212.10±5.02E + T +ONOO-261.03±6.57∆80.97±1.677∆73.87±1.021∆91.68±6.00∆58.47±6.99∆43.84±3.42∆n=3; T is K562 cells, E is mononuclear enriching NK cells E/T = 10/1; #P<0.05, comparison between group E+ONOO- and group E; *P<0.05, comparison between group T+ONOO- and group T; ∆P<0.05 comparison between group E+T+ONOO- and group E+T.
### 3.2. The Effect of RNM Scavenger on NK-Cell Cytotoxicity Caused byONOO-
To explore the effect of RNM scavengers on NK-cell cytotoxicity caused byONOO-, we used three RNM scavengers. As shown in Table 2, we found that the production of RNM in the systems of NK and K562 cells decreased significantly after administration of TIP and GSH (P<0.05), while the percentage of living NK cells and the concentration of TNF-γ and IFN-β and NCC were significantly increased (P<0.05).Table 2
The effect of RNM scavengers on NK-cell cytotoxicity of ONOO−.
GroupsNO (6 h) (μmol/mL)NK (6 h) %NK (24 h) %TNF-β (24 h) pg/mLIFN-γ (24 h) pg/mLNCC (24 h)%E+T95.36±6.4590.57±2.5293.17±2.57198.64±7.33146.43±6.4967.47±2.64E+T+ONOO-261.03±6.5780.97±1.6871.87±1.0291.68±6.0058.47±7043.44±2.87E+T+DHC+ONOO-255.32±11.9382.27±1.3873.60±2.76118.73±5.56
*70.40±7.15
*45.26±3.31E + T + TIP +ONOO-179.65±7.00
*90.07±1.23
*91.13±3.67
*131.03±5.46
*76.80±4.91
*61.58±1.89
*E + T + GSH +ONOO-185.69±5.02
*89.87+0.35*88.03±1.46
*128.70±4.53
*75.12±6.45
*60.68±2.07
*n=3;2. ONOO− 200 umol/L; DHC 20 umol/L, TIP 50 umol/L, GSH 50 umol/L; T is K562 cells, E is mononuclear enriching NK cells E/T = 10/1; *P<0.05, compared with E+T+ONOO-.
### 3.3. The Effect of Endogenous RNM on the Activity of NK Cells
We know that exogenous RNM reduces the activity of NK cells. Furthermore, we investigated the effect of endogenous RNM on the activity of NK cells. The results are shown in Table3, when the number of NK cells was fixed. After activation by IL-2/PHA, the levels of IFN-γ and TNF-β were significantly higher than at the same E : T ratios in the absence of IL-2/PHA (P<0.05). With a further addition of MO, the level of IFN-γ and TNF-β did not increase (P>0.05), while the production of TNF-β increased to a small extent over the level prior to the addition of MO, and the NCC was lower.Table 3
The effect of endogenous RNM on the activity of NK cells.
GroupsNO (6 h) (μmol/mL)TNF-β (24 h) pg/mLIFN-γ (24 h) pg/mLNCC (24 h) %Control11.68±6.6211.08±5.4611.65±5.05T + IL-2/PHA22.29±5.6620.64±5.5713.18±5.86MO + IL-2/PHA114.37±7.4040.64+7.5947.76±6.57E + IL-2/PHA62.64±7.00361.62±12.27284.74±7.49E + MO + IL-2/PHA119.62±11.18114.09±7.4676.77±4.99MO + T + IL-2/PHA115.26±6.4733.31±6.3446.46±6.97E + T + IL-2/PHA79.63±7.04371.99±12.79275.08±9.6191.77±3.62E + T + MO + IL-2/PHA189.35±6.51
*110.91±10.01
*74.74±10.15
*60.39±5.39
*n=3; ONOO− 200 umol/L; DHT 20 umol/L, TIP 50 umol/L, GSH 50 umol/L; T is K562 cells, E is mononuclear enriching NK cells E/T = 10/1; *P<0.05, comparison between group NK + MO + K562 + IL-2 and group NK + K562 + IL-2.
### 3.4. The Effect of RNM Scavenger on NK-Cell-Mediated Killing of K562 Cells
As shown in Table4, the addition of TIP and GSH reduced the production of RNM and ROM and increased the production of TNF-γ and IFN-β and NCC significantly (P<0.05); however, DHT incubation did not reduce the production of RNM effectively (P>0.05).Table 4
The effect of RNM scavengers on NK-cell-mediated killing of K562 cells.
Groups·OH (6 h) (U/mL)NO (6 h) (μmol/mL)TNF-β (24 h) pg/mLIFN-γ (24 h) pg/mLNCC%IL-2/PHA + E + T74.41±3.0582.10±6.60381.47±10.64277.14±10.6190.64±3.06IL-2/PHA + E + T + MO256.08±8.52193.65±5.95114.39±7.4576.81±9.5061.29±2.22IL-2/PHA + E + T + MO + DHC101.37±5.56
*188.92±5.00134.10±6.68
*107.89±6.55
*72.20±4.10
*IL-2/PHA + E + T + MO + TIP107.02±6.39
*91.32±6.81
*185.00±4.51
*146.71±6.96
*84.31±4.56
*IL-2/PHA + E + T + MO + GSH108.69±6.05
*84.66±5.99
*181.91±5.92
*144.11±6.03
*81.65±3.09
*n=3; ONOO− 200 umol/L; DHT 20 umol/L, TIP 50 umol/L, GSH 50 umol/L; T is K562 cells, E is mononuclear enriching NK cells E/T = 10/1; *P<0.05, compared with IL-2/PHA + NK + K562 + MO.
### 3.5. The Effect of Different Dosages of RNM Scavengers on the Activity of NK Cells
To investigate whether different dosages of RNM scavengers affect the activity of NK cells, we selected different combinations, as shown in Figure1. According to Figure 1, with an increase of the dosage, the groups treated with TIP and GSH decreased the production of RNM and increased the levels of TNF-γ, IFN-β and NCC significantly (P<0.05). However, each group of DHT could not eliminate RNM (P>0.05).Figure 1
The effect of different dosages of RNM scavengers on the activity of NK cells. With an increase of the dosage, the groups treated with TIP and GSH decreased the production of RNM and increased the levels of TNF-γ, IFN-β and NCC significantly (P<0.05). However, each group of DHT could not eliminate RNM (P>0.05).
### 3.6. The Effect of Different Combinations of RNM Scavengers on the Activity of NK Cells
To investigate whether different combinations of RNM scavengers affect the activity of NK cells, we selected different combinations, as shown in Figure2. According to the result of Figure 2, we found that different combinations of RNM scavengers did not enhance the antineoplasmic activity of NK cells.Figure 2
The effect of different combinations of RNM scavengers on the activity of NK cells. Different combinations of RNM scavengers did not enhance the antineoplasmic activity of NK cells.
## 4. Discussion
ONOO− is generated by the reaction of NO with O2−, both of which can be produced by many cells in the body. Under normal conditions, ONOO− is believed to serve a primarily physiological function; under pathological conditions, however, increased ONOO− production stimulated by inflammatory factors activates its oxidative and injurious role [11]. By detecting the expression of RNM-induced genes, Nittler et al. [12] discovered that RNMs take part in the metabolic processes of many types of cells, including T/NK cells, in the following ways: (1) by oxidizing, nitrifying, and deaminating DNA residues to induce DNA damage and interfere with DNA repair; (2) by modifying proteins in the electron transfer chain to inhibit cell respiration, promoting the reduction of coenzyme Q10 to increase the production of active oxygen and reducing the proton current rate through the mitochondria to lower the ATP content of the cell; (3) by mediating the nitrogenation and nitrosylation of proteins and interfering with their correct folding and degradation, thereby influencing cellular activity. Our research showed that the percentage of live NK cells decreased after synthetic ONOO− was added to the culture system of NK and K562 cells, indicating that ONOO− can kill NK cells directly. Not only did the lymphokines TNF-β and IFN-γ in NK cells decrease significantly, but KIR also decreased dramatically (P<0.01) (Table 1). All these results indicated that exogenous ONOO− had a cell-killing effect on NK cells and inhibited their anti-K562 function.

The mononuclear phagocyte, which protects the body from pathogenic factors, is an important component of acute and chronic inflammatory reactions. Once foreign bodies are recognized and phagocytosed, the phagocyte undergoes a respiratory burst, manifested by an increase in oxygen consumption, enhanced metabolic activity of the pentose phosphate pathway, and the generation of nitrogen and oxygen free radicals; among these, NO can transfer the activating signal from the cytoplasm through cell signaling pathways and induce the expression of related genes to activate the inflammatory response. This type of oxidizing bactericide is nonspecific, and normal tissues around mononuclear macrophages can also be injured [13]. In this study, IL-2 and PHA were added to the NK + K562 + MO culture system, resulting in a significant enhancement of RNM, while the NCC decreased from 91.77% to 60.39% (P<0.01). The cytokines TNF-β and IFN-γ also decreased in this system (Table 3). These results suggest that RNM produced by the MO cell respiratory burst inhibits NK-cell activity, consistent with the decreased NK activity and lower NCC induced by the addition of exogenous ONOO−. It has been shown that large numbers of MO cells can be detected in and around tumors [14]. This study also showed that IL-2 activates T/NK cells in vivo but also induces MO cells to produce large amounts of RNMs. Such a nonspecific killing effect inhibits the activity of T/NK cells, which may explain the low efficiency of adoptive tumor immunotherapy when T/NK cells are used as effector cells and IL-2 as an activator.

A central goal of tumor therapy is strengthening the immune system in order to eliminate the microresidue of the tumor. Thoren et al. showed that, in IL-2 treatment, histamine is an ideal immunomodulator that counters ROM-mediated inhibition of the antitumor activity of NK cells through the H2 receptor [15].
In our previous studies, we also demonstrated that tiopronin was superior to DHT in reversing the ROM-mediated suppression of the antitumor activity of T/NK cells. In this study, TIP and GSH were chosen as scavengers of RNM and compared with DHT. When exogenous ONOO−, TIP, GSH, and DHT were added to the coculture of NK cells and K562 cells, as shown in Table 2, both tiopronin and glutathione removed RNM directly, which protected NK cells and reversed the ONOO−-mediated inhibition of NK-cell activity. In the DHT group, however, no change was found in RNM or NCC, suggesting that ONOO− cannot be cleared by DHT directly. Table 4 shows the effects of the three scavengers in removing endogenous RNM generated by MO cells, which are similar to their effects on exogenous ONOO−. Tiopronin and glutathione significantly reduced the RNM (P<0.05), while DHT had no effect on RNM production (P>0.05). All three drugs reduced the ROM output, increased the production of TNF-β and IFN-γ, and enhanced the rate of inhibition of K562 cells (P<0.05). GSH is a tripeptide composed of γ-glutamic acid, cysteine, and glycine; its molecular structure contains a nonprotein thiol, which, catalyzed by itself or by the glutathione transferase system (GST-S), regulates intracellular oxidation-reduction systems and reduces the content of RNM and ROM, thereby exerting its pharmacological role. With its free sulfhydryl group, tiopronin can not only reversibly form disulfide compounds with RNM and ROM but also activate Cu,Zn-superoxide dismutase (SOD) to enhance its free-radical-scavenging role, maintain the glutathione balance in vivo, clear metal ions, and regulate the antioxidative enzyme system [16]. Histamine dihydrochloride plays an indirect role by blocking the generation of ROM through the H2 receptor and is therefore limited by the quantity and function of H2 receptors. Consequently, tiopronin and glutathione perform better than histamine dihydrochloride in clearing RNM. Moreover, because of its toxic side effects, histamine dihydrochloride is limited in clinical application, while the other two drugs have been widely used in clinical treatment with few toxic side effects.

Figure 1 shows that the effect of TIP and GSH in clearing RNM and protecting NK cells is dose dependent, meaning that TIP and GSH reverse the inhibition of NK-cell activity by monocytes. Compared with a single scavenger, combinations of two or three scavengers reduced NO and increased the production of TNF-β and IFN-γ; given that there was no change in NCC (P>0.05), it appears that combinations of RNM scavengers cannot protect NK cells better. The reason for this remains unclear.

In conclusion, endogenous and exogenous reactive nitrogen metabolites can act directly on cells, kill NK cells, reduce the activity of NK cells against K562 cells, and suppress TNF-β, IFN-γ, and other cytokines. Large numbers of MO cells are present in and around malignant tumors; when activated, they undergo a respiratory burst and generate large amounts of RNMs, inducing NK-cell apoptosis and significantly inhibiting NK-cell activity. Tiopronin and glutathione are more effective than histamine in clearing RNMs and reversing their inhibitory effect on NK cells acting against K562 cells, with relatively minor toxic side effects and in a dose-dependent fashion. Therefore, they can be used clinically as better immune adjuvants to improve the efficacy of adoptive immunotherapy for minimal residual tumor/leukemia. However, combinations of RNM scavengers do not protect NK cells better.
---
*Source: 101737-2012-03-12.xml* | 2012 |
# Fatigue Performance of Steel Slag SMC Ultrathin Abrasive Layer Under Strain Controlling Mode
**Authors:** Shuyun He; Lingling Gao; Chaoyang Guo; Xianhu Wu
**Journal:** Mathematical Problems in Engineering
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1017425
---
## Abstract
Recently, the warm mixing modified ultrathin abrasive layer has received great attention for its green, energy-saving qualities. With regard to its durability, the fatigue behavior of the SMC warm mixing modified ultrathin abrasive layer asphalt mixture was studied under the strain controlling mode. The relationships between the characteristic parameters (bending stiffness modulus, normalized stiffness-time product, and phase angle) and the number of stress cycles showed a similar three-stage change process, except for the accumulative dissipated energy. The transition point (Nf) between the second stage (RDEC2) and the third stage (RDEC3) was taken as the characteristic fatigue cracking sign for the SMC warm mixing modified ultrathin abrasive layer. Accordingly, supported by multiple indices, the relative change of dissipated energy, rather than 50% of the initial stiffness, was regarded as the appropriate fatigue cracking criterion.
---
## Body
## 1. Introduction
Road function deteriorates rapidly as highways age, a process aggravated by increasing numbers of heavily loaded and overloaded vehicles and a worsening traffic environment. Owing to insufficient attention to highway maintenance, especially preventive maintenance, early breakage occurs on most roads, accelerating road deterioration and posing a great challenge to road maintenance and management. In addition to increasing maintenance investment, the maintenance philosophy must also be transformed: it must shift from the current corrective, as-needed maintenance to preventive maintenance. A systematic highway maintenance system needs to be established to carry out preventive maintenance based on the real condition of highways [1–3].

At present, the common preventive maintenance technologies for asphalt road surfaces at home and abroad are fog seal, slurry seal, microsurfacing, synchronous surface dressing, and ultrathin overlay. Although these technologies have their respective advantages, seal- and surfacing-type technologies are not applicable where the road surface has obvious fatigue cracks or temperature cracks, owing to technical limitations; the ultrathin abrasive layer is then the only solution to improve road conditions [4, 5]. As a preventive maintenance treatment for the superficial abrasive layer of newly built roads and roads made from high-grade asphalt or cement concrete, the ultrathin abrasive layer enjoys the advantages of high strength, good durability, rich surface texture, and excellent sliding resistance; it can restrain and remedy road diseases and prolong the service life of the road, which has attracted the attention of a wide range of scholars [6–12].

Ary et al. utilized rubber powder particles to partly replace fine aggregate in an ultrathin abrasive layer, discovering that this could reduce the asphalt dose with better MLS stabilization in the asphalt mixture [13]. Zhang et al. tested and compared the road performance of rubber-asphalt ultrathin abrasive layer mixtures with four different gradations, discovering favorable high-temperature stability, low-temperature performance, and water stability, but poor sliding resistance [14]. Zhou et al. applied rubber asphalt made from waste rubber powder and a new type of vita rubber asphalt to the ultrathin abrasive layer, discovering favorable road performance; however, the mixing and compaction temperatures during construction are increased because of the larger viscosity [15, 16]. For the cold-mix, cold-laid ultrathin abrasive layer, Li et al. found that the flow value and dynamic stability did not satisfy the technical indices for hot-mix asphalt mixture; consequently, it is not applicable to the ultrathin abrasive layer on the upper surface of high-grade roads [17].

The above research shows that hot-mix asphalt ultrathin abrasive layer mixtures possess good pavement performance, but because the layer is thin and cools quickly, compaction during construction is not ideal and the sliding resistance deteriorates rapidly.
In response to the call for “low-carbon environmental protection” and “green energy conservation,” progress has been made in warm mixing technology, bringing out many novel road materials. Styrene methyl copolymers (SMC), derived from waste plastics, waste rubber, and other methyl styrene polymers, were used to modify warm mixing asphalt together with a certain proportion of epoxy resin, epoxy resin hardener, and other auxiliaries, displaying excellent energy-saving and emission-reducing effects [18, 19]. Xie et al. adopted an SMC modifier in road regeneration and achieved normal road performance under room temperature mixing while using 60% waste materials [20]. However, the application of the SMC warm mixing modifier to the road surface layer is still at an initial stage [15–18], and later tracking of these regenerated roads revealed many cracks due to poor durability. In this work, the SMC room temperature asphalt modifier produced by Ningxia Rui Tai Tian Cheng New Material Science and Technology (with waste tires and rubber oil made from plastic as the main ingredients, accounting for about 80% of the weight of the modifier, and other auxiliary chemical raw materials accounting for 20%) can be melted or dispersed in the asphalt to change the workability of the asphalt bond under room temperature conditions, so that the asphalt and asphalt mixture retain some mobility at room temperature and subzero temperatures [21]. After a curing period during which the volatile solvent evaporates completely, the room temperature modified asphalt mixture gradually develops strength, forming a new type of modified asphalt ultrathin wearing layer; its fatigue behavior was studied under the strain controlling mode to establish appropriate failure criteria.
## 2. Materials and Methods
### 2.1. Materials
#### 2.1.1. Warm Mix Modifier
The SMC warm mixing modifier is a brown viscous liquid at room temperature, as shown in Figure 1; it was prepared by the colloid mill and high-speed shearing method. The technical specifications and test results are shown in Table 1.Figure 1
SMC warm mixing modifier.Table 1
The technical specifications and test results for SMC warm mixing modifier.
| Type | Unit | Test results | Technical specifications |
| --- | --- | --- | --- |
| Density | g/cm³ | 0.93 | 0.8~1.0 |
| Rubber hydrocarbon, ≥ | % | 94 | 85 |
| Viscosity (25°C), ≤ | Pa·s | 0.63 | 0.8 |
| Flash point | °C | 93 | 90~110 |
| Volatile organic compounds (benzene), ≤ | % | 0.01 | 0.1 |
#### 2.1.2. Asphalt
SBS modified asphalt was heated to 135°C, mixed with 12% SMC warm mix modifier, and stirred for 1 hour to obtain the warm mix asphalt. The test results for the SBS modified asphalt and the SMC warm mix modified asphalt are shown in Tables 2 and 3, respectively.Table 2
Test results for SBS modified asphalt.

| Type | Unit | Test results | Technical specifications |
| --- | --- | --- | --- |
| Penetration (25°C, 100 g, 5 s) | 0.1 mm | 90 | 80~100 |
| Softening point (R and B), ≥ | °C | 47 | 44 |
| Ductility (15°C, 5 cm/min), ≥ | cm | 300 | 100 |
| Density (15°C) | g/cm³ | 1.08 | — |

Table 3
Test results for SMC warm mix modified asphalt.

| Type | Unit | Test results | Technical specifications |
| --- | --- | --- | --- |
| Rotary viscometer (60°C), ≤ | Pa·s | 0.6 | 0.8 |
| Flash point, ≥ | °C | 198 | 180 |
| Loss of mass due to distillation, ≤ | % | 4 | 13 |
| Adhesion to coarse aggregate, ≥ | % | 7¾ | 3/4 |
#### 2.1.3. Aggregate
The coarse and fine aggregates used in this paper are conventional basalt, and the mineral powder is conventional limestone; both meet the requirements of the relevant specifications.
#### 2.1.4. Design Gradation of Slag Asphalt Mixture
The aggregate gradations for SMC-10 and AC-16 are presented in Table 4; in addition, 5% steel slag powder is added to SMC-10. Based on the Marshall mix design results, the optimum asphalt-to-aggregate ratio is 5.3% for both the SMC-10 and AC-16 mixtures.Table 4
Gradation composition.

| Sieve size (mm) | 19 | 16 | 13.2 | 9.5 | 4.75 | 2.36 | 1.18 | 0.6 | 0.3 | 0.15 | 0.075 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SMC-10 (35%), % passing | — | — | 100 | 95 | 35 | 31 | 22 | 16 | 11 | 9 | 6 |
| AC-16, % passing | 100 | 95 | 80 | 71 | 48 | 31 | 25 | 15 | 10 | 8 | 5 |

Plate specimens were prepared with an upper layer of 1.5 cm SMC-10 and a lower layer of 3.5 cm AC-16, as shown in Figure 2. According to the Chinese Standard Test Methods of Bitumen and Bituminous Mixtures for Highway Engineering (JTG E20-2011) [22], the wheel pressure method was used to make the specimens, which were loaded in two layers, rolled into shape with a rut board, and then left for 72 hours at laboratory temperature. After that, beam specimens were cut to 38 cm length, 5 cm height, and 6.5 cm width.Figure 2
Optical image of ultrathin abrasive layer.
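To make the gradation design in Table 4 concrete, the short sketch below shows how cumulative percent passing is computed from a sieve analysis; the sieve sizes follow Table 4, while the retained masses are hypothetical values chosen so that the result reproduces the SMC-10 column (this is a generic computation, not code from the paper). The 19 mm and 16 mm sieves are not used for SMC-10, hence the dashes in Table 4.

```python
# Sieve sizes (mm) as in Table 4, from largest to smallest.
sieves = [19, 16, 13.2, 9.5, 4.75, 2.36, 1.18, 0.6, 0.3, 0.15, 0.075]
# Hypothetical masses (g) retained on each sieve, plus the pan at the bottom.
retained = [0, 0, 0, 60, 720, 48, 108, 72, 60, 24, 36]
pan = 72

total = sum(retained) + pan
passing, cumulative = [], 0.0
for mass in retained:
    cumulative += mass
    passing.append(round(100 * (1 - cumulative / total)))  # percent passing
print(dict(zip(sieves, passing)))
# -> 13.2 mm: 100, 9.5 mm: 95, 4.75 mm: 35, ..., 0.075 mm: 6 (the SMC-10 column)
```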
### 2.2. Test Methods
The fatigue behavior was tested using a multifunctional dynamic testing system for road materials (UTM-100) developed by the Italian CONTROLS Company. A loading frequency of 10 Hz is usually adopted in asphalt mixture fatigue tests; it corresponds to a vehicle running speed of 60–65 km/h, relatively close to the loading state of the road under real traffic. At the same time, the fatigue life is related to the loading waveform. The strain-controlled loading mode with four strain levels (400, 500, 600, and 700 με) was adopted, and the test temperature was set to 10°C or 15°C. All tests were terminated when the bending stiffness modulus decayed to 30% of the initial stiffness modulus (S0).
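As a sketch of the loading described above: the paper specifies strain control at 10 Hz and four amplitude levels but does not state the waveform beyond noting that fatigue life depends on it, so the sinusoid below is an illustrative assumption only.

```python
import numpy as np

def strain_signal(amplitude_microstrain, duration_s, freq_hz=10.0, rate_hz=2000.0):
    """Sinusoidal strain history (waveform assumed, not specified in the paper)."""
    t = np.arange(0.0, duration_s, 1.0 / rate_hz)
    eps = amplitude_microstrain * 1e-6 * np.sin(2 * np.pi * freq_hz * t)
    return t, eps

# The four strain levels used in the tests; 1 s of signal = 10 load cycles.
for level in (400, 500, 600, 700):
    t, eps = strain_signal(level, duration_s=1.0)
```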
## 3. Results and Discussion
### 3.1. Studies on the Characteristic Factors
#### 3.1.1. Bending Stiffness Modulus (S) and Time Product Normalized Stiffness (NM)
Generally, the fatigue life is defined as the number of cycles at which the stiffness modulus attenuates to 50% of its initial value, an approach with the advantages of a quick test and convenient data analysis [23]. However, the data collected in this way are insufficient and can easily lead to unconvincing experimental conclusions. Herein, all tests were terminated only when the bending stiffness modulus reached 30% of the initial stiffness modulus. Meanwhile, following ASTM D746, the fatigue life was measured as the number of cycles at which the time product of normalized stiffness (normalized modulus × cycles, NM) reaches its maximum [12]. Figure 3 shows the bending stiffness modulus versus cycle time, and the initial bending stiffness modulus is listed in Table 5.Figure 3
Bending stiffness modulus of the SMC warm mixing modified ultrathin abrasive layer mixture and normalized coefficient.Table 5
Initial stiffness modulus under different test conditions.

| Test temperature (°C) | Strain level (με) | Initial stiffness modulus (MPa) |
| --- | --- | --- |
| 10 | 400 | 7080.28 |
| 10 | 500 | 6882.92 |
| 10 | 600 | 5671.21 |
| 15 | 500 | 3619.96 |
| 15 | 600 | 3339.83 |
| 15 | 700 | 3108.26 |

As shown in Figure 3, the curves of bending stiffness modulus (S) of the SMC warm mixing modified ultrathin abrasive layer versus cycle time (N) can be divided into three stages. In the first stage, the bending stiffness modulus attenuated rapidly with increasing cycle time. In the second stage, the attenuation stabilized, at a rate clearly lower than in the first stage. In the third stage, once the cycle time reached a certain value, the attenuation rate increased apparently compared with the second stage. A distinct transition point (Nf) was found between the second and third stages, which is the characteristic fatigue cracking sign for the SMC warm mixing modified ultrathin abrasive layer. For tests under the same mode, the cycle time at the transition point (Nf) was almost the same, which means that the indices of bending stiffness modulus (S) and normalized stiffness-time product (NM) can be regarded as criteria of fatigue life. The initial stiffness modulus S0 in this study was the bending stiffness modulus at the 100th loading cycle. It can be seen from Table 5 that the initial stiffness modulus of the SMC warm mixing modified ultrathin abrasive layer increased with lowered strain level at the same test temperature, and decreased with increased test temperature at the same strain level; the value of S0 at 10°C was about twice that at 15°C.
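As a minimal sketch of the NM criterion described above, assuming the stiffness modulus is recorded once per loading cycle, the fatigue life can be located as the cycle at which the normalized stiffness-time product peaks, with S0 taken at the 100th cycle as defined in this study:

```python
import numpy as np

def fatigue_life_nm(stiffness):
    """stiffness[n] = bending stiffness modulus at cycle n (one sample per cycle)."""
    s0 = stiffness[100]               # initial stiffness: the 100th loading cycle
    n = np.arange(len(stiffness))
    nm = (stiffness / s0) * n         # normalized modulus x cycles (NM)
    return int(np.argmax(nm))         # cycle at the NM peak, taken as fatigue life
```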
#### 3.1.2. Modulus Ratio of Residual Stiffness (Sr)
In this study, the residual stiffness modulus ratio (Sr) is defined as the stiffness modulus at a given number of loading cycles divided by the initial stiffness modulus (S0). The variation of the residual stiffness modulus ratio with the number of cycles is shown in Figure 4.Figure 4
Variation of the residual stiffness modulus ratio of the SMC warm mixing modified ultrathin abrasive layer.As shown in Figure 4, Sr showed a tendency similar to S: it decreased rapidly in the first stage, slowly in the second stage, and drastically in the third stage. Thus, Sr can also be regarded as a criterion of fatigue life.
#### 3.1.3. Phase Angle (θ)
As shown in Figure 5, the phase angle (θ) fluctuated up and down with increasing cycle time (N) while maintaining an overall growth tendency. Consistent with the other indices (S, S0, Sr), the curves of phase angle (θ) versus cycle time also showed three stages with the same indication, and the cycle time at the turning point (Nf) was considered the fatigue life of the specimen. Furthermore, the phase angle at 15°C was larger than that at 10°C under the same strain condition, indicating that the softer material more readily generates larger recoverable deformation.Figure 5
Phase angle of SMC warm mixing modified ultrathin abrasive layer.
#### 3.1.4. Accumulative Dissipated Energy (Qd)
The dissipated energy is an essential factor reflecting the cracking process of materials. As shown in Figure 6, the accumulative dissipated energy (Qd) increased constantly with cycle time (N). Moreover, Qd was larger at lower strain levels throughout the test process. It should be noted that the three-stage behavior and the transition point (Nf) that appeared in the curves of S, S0, Sr, and θ versus cycle time were not found for Qd. Therefore, the failure time cannot be determined from the curves of Qd versus cycle time, and the accumulative dissipated energy Qd should not be regarded as a direct criterion of the fatigue life of specimens.Figure 6
Variation law of accumulative dissipated energy of SMC warm mixing modified ultrathin abrasive layer.
### 3.2. Fatigue Behavior of Asphalt Mixture Analyzed by Energy Method
As indicated by many researchers, asphalt mixture is a viscoelastic material whose failure process is accompanied by energy dissipation [16, 17]. Although the accumulative dissipated energy Qd is not a direct criterion of the fatigue life of a specimen, the change law of the dissipated energy is still of great significance in describing the fatigue behavior of asphalt mixture. The relationship between energy dissipation and fatigue life is

(1) Wf = A·Nf^z,

where Wf is the accumulative dissipated energy, Nf the fatigue life, and A and z the regression coefficients. There is a time-induced hysteresis between stress and strain in viscoelastic materials (Figure 7). Carpenter et al. proposed the relative dissipated energy change ratio (RDEC) to depict the fatigue properties of materials [18].Figure 7
Lag time and lag curve.The RDEC is calculated as

(2) RDEC = (DEj − DEi) / (DEi·(j − i)),

where DEi and DEj denote the energy dissipation of the ith and jth cycle, respectively, with j > i; the spacing j − i is determined by the fatigue life and the sampling rate of the instrument. The RDEC follows a three-stage variation law, and the number of cycles at the turning point between the second and third stages (Nf) is determined as the fatigue life. Thereby, the fatigue equation shown in formula (3) can be obtained by using the average value PV of RDEC in the second stage (RDEC2) [19, 20]:

(3) PV = c·Nf^d,

where c and d are the fitting parameters. Accordingly, the fatigue life of the SMC warm mixing modified asphalt mixture was predicted and analyzed with the energy method in this work; the RDEC versus cycle time data were calculated according to formula (2).The curves of RDEC versus cyclic loading time N are divided into three stages (Figures 8–13). In the first stage (RDEC1), the specimen possessed a relatively high RDEC value, which gradually decreased with increasing cycle time; the native energy dissipation is assumed to play an important role in resisting the initial cyclic loading. In the second stage (RDEC2), a low RDEC value was maintained, implying a steady damage rate. Finally, the RDEC value increased gradually in the third stage (RDEC3), leading to accelerated fatigue damage.Figure 8
Diagram of Relation between RDEC and cycle time at 10°C-400.Figure 9
Diagram of Relation between RDEC and cycle time at 10°C-500.Figure 10
Diagram of Relation between RDEC and cycle time at 10°C-600.Figure 11
Diagram of Relation between RDEC and cycle time at 15°C-500.Figure 12
Diagram of Relation between RDEC and cycle time at 15°C-600.Figure 13
Diagram of Relation between RDEC and cycle time at 15°C-700.If 50% of the initial stiffness had been adopted as the failure criterion in the tests, no obvious increase in the dissipated energy change ratio would have been observed under the strain control mode: the fatigue damage would still be in a steady state, leaving the specimen enough residual energy to resist the external loading. Thus, the transition point between the second and third stages (Nf) is an appropriate factor for determining the fatigue life of the specimen. The value of PV, the relative change rate of energy dissipation, was calculated from the RDEC data in the second stage and is listed in Table 6; it increased with elevated strain level. The fatigue life of the SMC warm mixing modified ultrathin abrasive layer asphalt mixture was fitted with equation (3). Taking logarithms of both sides of (3) yields equation (4), and the relationship between log(PV) and fatigue life (Nf) is shown in Figure 14. PV and Nf displayed a strong correlation, with a coefficient R² above 0.98, indicating that the RDEC value at the transition point Nf can be used as an appropriate criterion for predicting the fatigue cracking of the SMC warm mixing modified ultrathin abrasive layer asphalt mixture.

(4) log PV = log c + d·log Nf.Table 6
Results of PV and Nf under different strain levels.

| Test temperature (°C) | Strain level (με) | PV | Nf (cycles) |
| --- | --- | --- | --- |
| 10 | 400 | 5.3E-5 | 1712340 |
| 10 | 500 | 7.5E-5 | 643243 |
| 10 | 600 | 1.3E-4 | 203435 |
| 15 | 500 | 1.25E-5 | 2823104 |
| 15 | 600 | 1.83E-5 | 1123433 |
| 15 | 700 | 4.51E-5 | 241223 |

Figure 14
Curve diagram of relation between PV and fatigue life.
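To make equations (2)–(4) concrete, the sketch below (ours, not the authors' code) computes RDEC for a dissipated-energy series and fits the log-linear form of equation (4) to the PV–Nf pairs of Table 6. The paper does not state whether the two test temperatures were fitted jointly or separately, so the joint fit here is an assumption and its R² need not match the reported value.

```python
import numpy as np

def rdec(de, i, j):
    """Equation (2): RDEC = (DE_j - DE_i) / (DE_i * (j - i)), with j > i."""
    return (de[j] - de[i]) / (de[i] * (j - i))

# PV and Nf pairs as read from Table 6 (10 C: 400/500/600 ue; 15 C: 500/600/700 ue).
pv = np.array([5.3e-5, 7.5e-5, 1.3e-4, 1.25e-5, 1.83e-5, 4.51e-5])
nf = np.array([1712340, 643243, 203435, 2823104, 1123433, 241223])

# Equation (4): log PV = log c + d * log Nf is a straight line in log-log space.
d, log_c = np.polyfit(np.log10(nf), np.log10(pv), 1)
resid = np.log10(pv) - (log_c + d * np.log10(nf))
r2 = 1.0 - np.sum(resid**2) / np.sum((np.log10(pv) - np.log10(pv).mean())**2)
print(f"c = {10**log_c:.3e}, d = {d:.3f}, R^2 = {r2:.3f}")
```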
## 4. Conclusions
The fatigue behavior of the SMC warm mixing modified ultrathin abrasive layer asphalt mixture was studied under the strain controlling mode. All characteristic parameters (bending stiffness modulus, normalized stiffness-time product, and phase angle) showed a similar three-stage change process against the stress cycle times, except for the accumulative dissipated energy. It is unreasonable to use 50% of the initial stiffness as the fatigue failure standard for the SMC warm mixing modified ultrathin abrasive asphalt mixture. In contrast, PV and Nf displayed a strong correlation, with a coefficient R² above 0.98; the RDEC value at the transition point Nf, verified against the other valid parameters, can be used as an appropriate criterion for predicting the fatigue cracking of the SMC warm mixing modified ultrathin abrasive layer asphalt mixture.
---
*Source: 1017425-2022-08-26.xml* | 1017425-2022-08-26_1017425-2022-08-26.md | 40,487 | Fatigue Performance of Steel Slag SMC Ultrathin Abrasive Layer Under Strain Controlling Mode | Shuyun He; Lingling Gao; Chaoyang Guo; Xianhu Wu | Mathematical Problems in Engineering
(2022) | Engineering & Technology | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2022/1017425 | 1017425-2022-08-26.xml | ---
## Abstract
Recently, the warm mixing modified ultrathin abrasive layer has been paid great attention due to the green energy-saving strength. In regard to its durability, the study of the fatigue behavior of the SMC warm mixing modified ultrathin abrasive layer asphalt mixture was carried out under the strain controlling mode. The relationship among all characteristic parameters (bending stiffness modulus, normalized times, and products phase angle) and stress cycle times showed a similar three-stage change process, except for the accumulative dissipated energy. The transition point (Nf) between the second (RDEC2) and the third stage (RDEC3) was assumed to be the characteristic fatigue cracking sign for the SMC warm mixing modified ultrathin abrasive layer. In that case, supported by multiobjective system, the relative change of dissipation energy, rather than the 50% initial stiffness, was regarded as the appropriate fatigue cracking criteria.
---
## Body
## 1. Introduction
Road function decreases rapidly as the operational time of the highway continuously expands with the increasing phenomenon of heavily loaded and overloaded vehicles and exacerbating traffic environment. Plagued with insufficient attention to highway maintenance, especially preventive maintenance, there exists early breakage in most roads, accelerating the road deterioration and posing a great challenge to the maintenance and management of roads. In addition to increasing maintenance and protection investment, the maintenance and protection philosophy also must be transformed. That means the maintenance philosophy must shift to preventive maintenance from current corrective maintenance and maintenance according to need of the road. The systematic highway maintenance and protection system needs established to spontaneously make preventive maintenance based on the real road condition of highways [1–3].At present, the common preventive maintenance technologies of asphalt road surface at home and abroad are fog seal, slurry seal, microsurfacing, synchronous surface dressing, and ultrathin overlay. Although the above technologies have their respective advantages, seal and surfacing-type technologies are not applicable to circumstances in which the road surface has obvious fatigue cracks or temperature cracks due to technical limitations. The ultrathin abrasive layer is the only solution to improve the road conditions [4, 5]. The ultrathin abrasive layer as the preventive maintenance treatment to the superficial abrasive layer of newly built roads and roads made from high-grade asphalt or cement and concrete, enjoying the advantages of high strength, good durability, rich surface texture, and excellent sliding resistance can restrain and improve road disease and prolong the service life of the road, which attracts the attention of a wide range of scholars [6–12].Ary et al. utilized rubber powder particle to replace partly fine aggregate and applied it to ultrathin abrasive layer, discovering it could reduce the asphalt dose with better MLS stabilization in asphalt mixture [13]. Zhang et al. tested the road performance of asphalt mixture of ultrathin abrasive layer of rubber asphalt of four different types gradations and made a comparison, discovering that it possesses favorable high-temperature stability, low-temperature performance, and water stability performance but poor sliding resistance performance [14]. Zhou et al. applied rubber asphalt made from waste rubber powder and new type vita rubber asphalt with ultrathin abrasive layer, discovering that it possesses favorable road performance. However, the mixing and compaction temperatures of the mixture in the construction process are increased because of the larger viscosity [15, 16]. For the mix of cold-mix-cold-laid ultrathin abrasive layer, Li et al. found that the flow value and dynamic stability did not satisfy the technical index of hot-mix asphalt mixture. Consequently, it is not applicable to ultrathin abrasive layer of the upper surface of high-grade roads [17].Through the analysis of the above research, high-temperature asphalt ultrathin abrasive layer mixture is found to possess good pavement performance, but because of its thin thickness and easy cooling, the construction compaction is not ideal, and the sliding resistance performance is rapidly reduced. 
In response to the call of “low carbon environmental protection” and “green energy conservation,” some progresses have been made in warm mixing technology, bringing out many novel road materials. The styrene methyl copolymers (SMC), derived from waste plastics, waste rubber, and other methyl styrene polymers, were used to modify the warm mixing asphalt accompanied with a certain proportion of epoxy resin, epoxy resin hardener, and other auxiliaries, which displayed excellent energy-saving and emission-reducing effects [18, 19]. Xie et al. have adopted SMC modifier in road regeneration and achieved normal road performance under room temperature mixing by using 60% of waste materials [20]. However, the application of SMC warm mixing modifier to the road folium coat is still in the initial stage [15–18]. Through late tracking of these regenerated roads, many cracks were found due to the poor durability. In this work, the SMC room temperature asphalt modifier produced by Ningxia Rui Tai Tian Cheng New Material Science and Technology. (with waste tires, rubber oil made of plastic as the main ingredient, accounting for about 80% of the weight of the modifier, and other auxiliary chemical raw materials accounting for 20% of the weight) can be melted or dispersed in the asphalt to change the construction and ease of the asphalt bond under room temperature conditions, so that the asphalt and asphalt mixture at room temperature and subzero temperature conditions still have some mobility. [21] After a certain period of recuperation, room temperature modified asphalt mixture in the volatile solvent evaporation completely, the mixture gradually curing the formation of strength, the formation of a new type of modified asphalt ultrathin wear layer, the fatigue behavior was studied under strain controlling mode to get appropriate criteria.
## 2. Materials and Methods
### 2.1. Materials
#### 2.1.1. Warm Mix Modifier
The SMC warm-mix modifier is a brown viscous liquid at room temperature, as shown in Figure 1; it was prepared by colloid milling and high-speed shearing. The technical specifications and test results are shown in Table 1.

Figure 1

SMC warm-mix modifier.

Table 1
The technical specifications and test results for SMC warm mixing modifier.
| Type | Unit | Test results | Technical specifications |
| --- | --- | --- | --- |
| Density | g/cm³ | 0.93 | 0.8~1.0 |
| Rubber hydrocarbon, ≥ | % | 94 | 85 |
| Viscosity (25°C), ≤ | Pa·s | 0.63 | 0.8 |
| Flash point | °C | 93 | 90~110 |
| Volatile organic compounds (benzene), ≤ | % | 0.01 | 0.1 |
#### 2.1.2. Asphalt
SBS modified asphalt was heated to 135°C, mixed with 12% SMC warm-mix modifier, and stirred for 1 hour to obtain the warm-mix asphalt. Test results for the SBS modified asphalt and the SMC warm-mix modified asphalt are shown in Tables 2 and 3, respectively.

Table 2

Test results for SBS modified asphalt.

| Type | Unit | Test results | Technical specifications |
| --- | --- | --- | --- |
| Penetration (25°C, 100 g, 5 s) | 0.1 mm | 90 | 80~100 |
| Softening point (R and B), ≥ | °C | 47 | 44 |
| Ductility (15°C, 5 cm/min), ≥ | cm | 300 | 100 |
| Density (15°C) | g/cm³ | 1.08 | — |

Table 3

Test results for SMC warm-mix modified asphalt.

| Type | Unit | Test results | Technical specifications |
| --- | --- | --- | --- |
| Rotary viscosity (60°C), ≤ | Pa·s | 0.6 | 0.8 |
| Flash point, ≥ | °C | 198 | 180 |
| Loss of mass on distillation, ≤ | % | 4 | 13 |
| Adhesion to coarse aggregate, ≥ | % | 7¾ | 3/4 |
#### 2.1.3. Aggregate
The coarse and fine aggregates used in this work are conventional basalt, and the mineral powder is conventional limestone; all meet the requirements of the relevant specifications.
#### 2.1.4. Design Gradation of Slag Asphalt Mixture
The aggregate gradations for SMC-10 and AC-16 are presented in Table 4; in addition, 5% steel slag powder is added to SMC-10. Based on the Marshall mix design results, the optimum asphalt-aggregate ratio is 5.3% for both the SMC-10 and AC-16 mixtures.

Table 4

Gradation composition (percentage passing each sieve size, by mass).

| Gradation type | 19 mm | 16 mm | 13.2 mm | 9.5 mm | 4.75 mm | 2.36 mm | 1.18 mm | 0.6 mm | 0.3 mm | 0.15 mm | 0.075 mm |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SMC-10 (35%) | — | — | 100 | 95 | 35 | 31 | 22 | 16 | 11 | 9 | 6 |
| AC-16 | 100 | 95 | 80 | 71 | 48 | 31 | 25 | 15 | 10 | 8 | 5 |

Plate specimens were prepared with an upper layer of 1.5 cm SMC-10 and a lower layer of 3.5 cm AC-16, as shown in Figure 2. According to the Chinese Standard Test Methods of Bitumen and Bituminous Mixtures for Highway Engineering (JTG E20-2011) [22], the wheel-rolling method was used to prepare the specimens, which were compacted in two layers, rolled into rut slabs, and then stored for 72 hours at laboratory temperature. Beam specimens 38 cm long, 5 cm high, and 6.5 cm wide were then cut from the slabs.

Figure 2
Optical image of ultrathin abrasive layer.
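As a small illustration of how a gradation curve such as Table 4 is used, the sketch below interpolates the percent passing at an arbitrary sieve size on a logarithmic size scale; the helper and the query size are illustrative assumptions, while the data are the SMC-10 and AC-16 columns of Table 4.

```python
# Sketch: log-scale interpolation of percent passing from Table 4.
import numpy as np

SIEVES = np.array([19, 16, 13.2, 9.5, 4.75, 2.36, 1.18,
                   0.6, 0.3, 0.15, 0.075])              # mm, descending
PASSING = {
    "SMC-10": [None, None, 100, 95, 35, 31, 22, 16, 11, 9, 6],
    "AC-16":  [100, 95, 80, 71, 48, 31, 25, 15, 10, 8, 5],
}

def percent_passing(mix, size_mm):
    pairs = [(s, p) for s, p in zip(SIEVES, PASSING[mix]) if p is not None]
    sizes, passing = zip(*pairs)
    # np.interp needs ascending x, hence the reversal of the arrays.
    return float(np.interp(np.log10(size_mm),
                           np.log10(np.array(sizes[::-1])),
                           np.array(passing[::-1])))

print(f"SMC-10 passing at 2 mm ~ {percent_passing('SMC-10', 2.0):.0f}%")
```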
### 2.2. Test Methods
The fatigue behavior was tested using a multifunctional dynamic testing system for road materials (UTM-100) developed by the Italian company CONTROLS. A loading frequency of 10 Hz, commonly adopted in asphalt mixture fatigue tests, was used; it corresponds to a vehicle speed of 60–65 km/h and is therefore close to the loading state of a road under real traffic. The fatigue life also depends on the loading waveform. A strain-controlled loading mode with four strain levels (400, 500, 600, and 700 με) was adopted, and the test temperatures were set at 10°C and 15°C. All tests were terminated when the bending stiffness modulus fell to 30% of the initial stiffness modulus (S0).
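To make the termination criterion concrete, the following minimal Python sketch (hypothetical data and helper names, not the authors' test software) locates the cycle at which a recorded stiffness history first falls to 30% of S0, with S0 read at the 100th cycle as in this study.

```python
# Minimal sketch of the stopping rule used in the tests (assumed data).
import numpy as np

def termination_cycle(cycles, stiffness, s0_cycle=100, stop_ratio=0.30):
    """Return (N_stop, S0): first cycle with S <= stop_ratio * S0."""
    s0 = float(np.interp(s0_cycle, cycles, stiffness))  # S0 at cycle 100
    below = np.nonzero(stiffness <= stop_ratio * s0)[0]
    n_stop = int(cycles[below[0]]) if below.size else None
    return n_stop, s0

# Hypothetical exponential decay of stiffness with cycles, for illustration.
cycles = np.arange(1, 200_001, 100)
stiffness = 7000.0 * np.exp(-cycles / 120_000.0)        # MPa
n_stop, s0 = termination_cycle(cycles, stiffness)
print(f"S0 = {s0:.0f} MPa; test terminated at N = {n_stop}")
```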
## 3. Results and Discussion
### 3.1. Studies on the Characteristic Factors
#### 3.1.1. Bending Stiffness Modulus (S) and Time Product Normalized Stiffness (NM)
Generally, the fatigue life is defined as the number of cycles at which the stiffness modulus attenuates to 50% of its initial value, a definition with the advantages of a quick test and convenient data analysis [23]. However, the data collected in this way are insufficient and can easily lead to unconvincing experimental conclusions. Herein, all tests were therefore continued until the bending stiffness modulus reached 30% of the initial stiffness modulus. Meanwhile, following ASTM D7460, the fatigue life was taken as the number of cycles at which the normalized stiffness-time product (normalized modulus × cycles, NM) reaches its maximum [12]. Figure 3 shows the bending stiffness modulus versus the number of cycles, and the initial bending stiffness moduli are listed in Table 5.

Figure 3
Bending stiffness modulus of SMC warm mixing modified ultrathin abrasive layer mixture and normalized coefficient.Table 5
Initial stiffness modulus under different test conditions.
| Test temperature (°C) | Strain level (με) | Initial stiffness modulus (MPa) |
| --- | --- | --- |
| 10 | 400 | 7080.28 |
| 10 | 500 | 6882.92 |
| 10 | 600 | 5671.21 |
| 15 | 500 | 3619.96 |
| 15 | 600 | 3339.83 |
| 15 | 700 | 3108.26 |

As shown in Figure 3, the curves of bending stiffness modulus (S) of the SMC warm-mix modified ultrathin abrasive layer versus the number of cycles (N) can be divided into three stages. In the first stage, the bending stiffness modulus attenuated rapidly as the number of cycles increased. In the second stage, the attenuation stabilized, at a rate clearly lower than in the first stage. In the third stage, once the number of cycles reached a certain value, the attenuation rate increased apparently compared with the second stage. A distinct transition point (Nf) was found between the second and third stages, which is the characteristic sign of fatigue cracking for the SMC warm-mix modified ultrathin abrasive layer. For tests under the same mode, the number of cycles at the transition point (Nf) was almost the same, which means that the bending stiffness modulus (S) and the normalized stiffness-time product (NM) can be regarded as criteria of fatigue life.

The initial stiffness modulus S0 in this study was taken as the bending stiffness modulus at the 100th loading cycle. As Table 5 shows, the initial stiffness modulus of the SMC warm-mix modified ultrathin abrasive layer increased as the strain level decreased at the same test temperature, and decreased as the test temperature increased at the same strain level. The initial stiffness modulus (S0) at 10°C was about twice that at 15°C.
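As a compact illustration of the two stiffness-based indices above, the hedged sketch below computes the residual stiffness ratio Sr = S/S0 (used again in Section 3.1.2) and the normalized stiffness-time product NM = Sr × N, reading the fatigue life Nf off the NM peak; the stiffness history is a hypothetical placeholder, not the measured data of Figure 3.

```python
# Sketch of the NM-based fatigue-life estimate (assumed decay history).
import numpy as np

def fatigue_life_from_nm(cycles, stiffness, s0):
    sr = stiffness / s0                # residual stiffness modulus ratio
    nm = sr * cycles                   # normalized modulus x cycles (NM)
    return sr, nm, int(cycles[np.argmax(nm)])

cycles = np.arange(1, 300_001, 100)
stiffness = 7000.0 * np.exp(-cycles / 120_000.0)        # MPa, illustrative
sr, nm, nf = fatigue_life_from_nm(cycles, stiffness, s0=7000.0)
print(f"estimated fatigue life Nf = {nf} cycles (peak of NM)")
```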
#### 3.1.2. Modulus Ratio of Residual Stiffness (Sr)
In this study, the residual stiffness modulus ratio is defined as Sr = S/S0, that is, the stiffness modulus at a given number of loading cycles divided by the initial stiffness modulus (S0). The variation of the residual stiffness modulus ratio with the number of cycles is shown in Figure 4.

Figure 4

Variation of the residual stiffness modulus ratio of the SMC warm-mix modified ultrathin abrasive layer.

As shown in Figure 4, Sr showed a tendency similar to S: it decreased rapidly in the first stage, slowly in the second stage, and drastically in the third stage. Thus, Sr can also be regarded as a criterion of fatigue life.
#### 3.1.3. Phase Angle (θ)
As shown in Figure 5, the phase angle (θ) fluctuated with the number of cycles (N) while maintaining an overall growth tendency. Consistent with the other quantities (S, S0, Sr), three stages with the same indication were also found in the curves of phase angle (θ) versus the number of cycles, and the number of cycles at the turning point (Nf) was taken as the fatigue life of the specimen. Furthermore, the phase angle at 15°C was larger than that at 10°C under the same strain condition, indicating that the softer material more easily generates larger recoverable deformation.

Figure 5
Phase angle of SMC warm mixing modified ultrathin abrasive layer.
#### 3.1.4. Accumulative Dissipated Energy (Qd)
Dissipated energy is an essential factor that can reflect the cracking process of materials. As shown in Figure 6, the accumulative dissipated energy (Qd) increased steadily with the number of cycles (N). Moreover, Qd over the whole test was larger at lower strain levels. It should be noted that the three-stage behavior and the transition point (Nf) that appeared in the curves of S, S0, Sr, and θ versus the number of cycles were not found for Qd. Therefore, the failure time cannot be determined from the curves of Qd versus the number of cycles, and the accumulative dissipated energy Qd should not be regarded as a direct criterion of the fatigue life of specimens.

Figure 6
Variation law of accumulative dissipated energy of SMC warm mixing modified ultrathin abrasive layer.
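For readers who wish to reproduce a Qd curve, the sketch below assumes the common viscoelastic relation w_i = π σ_i ε_i sin(φ_i) for the energy dissipated per loading cycle and accumulates it; the stress, strain, and phase-angle histories are hypothetical placeholders, not the measured data of Figure 6.

```python
# Sketch of accumulating the dissipated energy Qd (assumed histories).
import numpy as np

def accumulated_dissipated_energy(sigma, eps, phi_deg):
    """Qd as the running sum of w_i = pi * sigma_i * eps_i * sin(phi_i)."""
    w = np.pi * sigma * eps * np.sin(np.radians(phi_deg))
    return np.cumsum(w)

n = 10_000
eps = np.full(n, 500e-6)                                    # 500 microstrain
stiffness = 3500.0 * np.exp(-np.arange(1, n + 1) / 8000.0)  # MPa, decaying
sigma = stiffness * eps                                     # MPa
phi = np.linspace(25.0, 45.0, n)                            # degrees, rising
qd = accumulated_dissipated_energy(sigma, eps, phi)
print(f"Qd after {n} cycles: {qd[-1]:.4f} MJ/m^3")
```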
### 3.2. Fatigue Behavior of Asphalt Mixture Analyzed by Energy Method
As indicated by many researchers, asphalt mixture is a viscoelastic material whose failure process is accompanied by energy dissipation [16, 17]. Although the accumulative dissipated energy Qd is not a direct criterion of specimen fatigue life, the change law of the dissipated energy is still of great significance for understanding the fatigue behavior of asphalt mixture. The relationship between energy dissipation and fatigue life is

$$W_f = A N_f^{z}, \tag{1}$$

where $W_f$ is the accumulative dissipated energy, $N_f$ is the fatigue life, and $A$ and $z$ are regression coefficients from the tests.

Strain lags stress in viscoelastic materials, producing a time-induced hysteresis (Figure 7). Carpenter et al. proposed the ratio of dissipated energy change (RDEC) to depict the fatigue properties of materials [18].

Figure 7

Lag time and lag curve.

The RDEC is calculated as

$$\mathrm{RDEC} = \frac{DE_j - DE_i}{DE_i\,(j - i)}, \tag{2}$$

where $DE_i$ and $DE_j$ are the dissipated energies of the $i$th and $j$th cycles, respectively, $j > i$, and the difference between $j$ and $i$ is determined by the fatigue life and the fatigue sampling of the instrument. The RDEC follows a three-stage variation law, and the number of cycles at the turning point between the second and third stages ($N_f$) is taken as the fatigue life. The fatigue equation in formula (3) is then obtained by using the plateau value PV, the average RDEC over the second stage [19, 20]:

$$PV = c N_f^{\,d}, \tag{3}$$

where $c$ and $d$ are fitting parameters.

Accordingly, the fatigue life of the SMC warm-mix modified asphalt mixture was predicted and analyzed with the energy method in this work; the RDEC values versus the number of cycles were calculated according to formula (2). The curves of RDEC versus the number of loading cycles N can be divided into three stages (Figures 8–13). In the first stage (RDEC 1), the specimen had a relatively high RDEC value, which gradually decreased with the number of cycles; it is assumed that the native energy dissipation plays an important role in resisting the initial cyclic loading. In the second stage (RDEC 2), a low RDEC value was maintained, implying a steady damage rate. Finally, the RDEC value increased gradually in the third stage (RDEC 3), resulting in accelerated fatigue damage.

Figure 8
Relation between RDEC and the number of cycles at 10°C, 400 με.

Figure 9

Relation between RDEC and the number of cycles at 10°C, 500 με.

Figure 10

Relation between RDEC and the number of cycles at 10°C, 600 με.

Figure 11

Relation between RDEC and the number of cycles at 15°C, 500 με.

Figure 12

Relation between RDEC and the number of cycles at 15°C, 600 με.

Figure 13

Relation between RDEC and the number of cycles at 15°C, 700 με.

If 50% of the initial stiffness had been adopted as the failure criterion, no obvious increase in the dissipated energy change ratio would yet be observed under either strain-control condition: the fatigue damage would still be in the steady state, in which the specimen retains enough residual energy to resist the external loading. Thus, the transition point between the second and third stages (Nf) is the appropriate point for determining the fatigue life of the specimen.

The plateau value PV, representing the relative change rate of energy dissipation, was calculated from the RDEC data in the second stage and is listed in Table 6; it increased with the strain level. The fatigue life of the SMC warm-mix modified ultrathin abrasive layer asphalt mixture was fitted with equation (3); taking the logarithm of both sides of (3) gives

$$\log PV = \log c + d \log N_f, \tag{4}$$

and the resulting relationship between log(PV) and fatigue life ($N_f$) is shown in Figure 14 (a numerical sketch follows Figure 14). PV and Nf displayed a strong correlation, with R² above 0.98, indicating that the RDEC value at the transition point Nf can be used as an appropriate criterion for predicting the fatigue cracking of the SMC warm-mix modified ultrathin abrasive layer asphalt mixture.

Table 6
Results of PV and Nf under different strain levels.
| Test temperature (°C) | Strain level (με) | PV | N (cycles) |
| --- | --- | --- | --- |
| 10 | 400 | 5.3E-5 | 1,712,340 |
| 10 | 500 | 7.5E-5 | 643,243 |
| 10 | 600 | 1.3E-4 | 203,435 |
| 15 | 500 | 1.25E-5 | 2,823,104 |
| 15 | 600 | 1.83E-5 | 1,123,433 |
| 15 | 700 | 4.51E-5 | 241,223 |

Figure 14
Curve diagram of relation between PV and fatigue life.
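The energy-method pipeline of equations (2)-(4) can be condensed into a few lines: compute the RDEC series from per-cycle dissipated energies, average the stable second stage to obtain PV, and fit log(PV) against log(Nf). In the sketch below the rdec helper and the sampling stride are illustrative assumptions, while the (PV, Nf) pairs are the measured values of Table 6, fitted separately for each test temperature.

```python
# Sketch of the RDEC/PV analysis of equations (2)-(4).
import numpy as np

def rdec(de, stride=100):
    """Equation (2): RDEC between cycles i and j = i + stride."""
    de = np.asarray(de, dtype=float)
    i = np.arange(de.size - stride)
    return (de[i + stride] - de[i]) / (de[i] * stride)

# PV would be the mean RDEC over the stable second stage of test data;
# here equation (4), log10(PV) = log10(c) + d*log10(Nf), is fitted to
# the Table 6 values, one fit per test temperature.
table6 = {
    "10 C": ([5.3e-5, 7.5e-5, 1.3e-4], [1_712_340, 643_243, 203_435]),
    "15 C": ([1.25e-5, 1.83e-5, 4.51e-5], [2_823_104, 1_123_433, 241_223]),
}
for label, (pv, nf) in table6.items():
    d, log_c = np.polyfit(np.log10(nf), np.log10(pv), 1)
    print(f"{label}: log10(PV) = {log_c:.2f} + {d:.2f} * log10(Nf)")
```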
## 4. Conclusions
The fatigue behavior of the SMC warm-mix modified ultrathin abrasive layer asphalt mixture was studied under the strain-controlled mode. All characteristic parameters (bending stiffness modulus, normalized stiffness-time product, and phase angle) showed a similar three-stage change with the number of loading cycles, except for the accumulative dissipated energy. It is unreasonable to use 50% of the initial stiffness as the fatigue failure standard for the SMC warm-mix modified ultrathin abrasive asphalt mixture. PV and Nf displayed a strong correlation, with R² above 0.98, indicating that the RDEC value at the transition point Nf, together with the other valid parameters for verification, can be used as an appropriate criterion for predicting the fatigue cracking of the SMC warm-mix modified ultrathin abrasive layer asphalt mixture.
---
*Source: 1017425-2022-08-26.xml* | 2022 |
# Latest Development on Membrane Fabrication for Natural Gas Purification: A Review
**Authors:** Dzeti Farhah Mohshim; Hilmi bin Mukhtar; Zakaria Man; Rizwan Nasir
**Journal:** Journal of Engineering
(2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101746
---
## Abstract
In the last few decades, membrane technology has received great attention in gas separation, especially for natural gas sweetening. The intrinsic character of membranes makes them fit for process escalation, and this versatility could be the significant factor driving membrane technology into most gas separation areas. Membranes have been synthesized from various materials depending on the application, and polymeric membrane fabrication has been one of the fastest growing fields of membrane technology. However, polymeric membranes could not meet the required separation performance, especially at high operating pressure, owing to inherent deficiencies. The chemistry and structure of support materials such as inorganic membranes have also been a focus area, since inorganic membranes showed some positive results for gas separation; however, these materials still fall short of the separation performance requirements. The mixed matrix membrane (MMM), which combines polymeric and inorganic materials, presents an interesting approach to enhancing the separation performance. Nevertheless, MMMs are yet to be commercialized, as the material combinations are still at the research stage. This paper highlights potential and promising areas of research in gas separation, taking into account material selection and the addition of a third component to the conventional MMM.
---
## Body
## 1. Introduction
Natural gas can be considered the most required fuel source after oil and coal [1]. Nowadays, the consumption of natural gas is not limited to industry; natural gas is also extensively consumed by the power generation and transportation sectors [2]. These phenomena support the move towards sustainability and green technology, as natural gas is claimed to generate less toxic gases such as carbon dioxide (CO2) and nitrogen oxides (NOx) upon combustion, as shown in Table 1 [3].

Table 1
Fossil fuel emission levels (pounds per billion Btu of energy input).
| Pollutant (pound/BTU) | Natural gas | Oil | Coal |
| --- | --- | --- | --- |
| Carbon dioxide | 117,000 | 164,000 | 208,000 |
| Carbon monoxide | 40 | 33 | 208 |
| Nitrogen oxides | 92 | 448 | 457 |
| Sulphur dioxide | 1 | 1,122 | 2,591 |
| Particulates | 7 | 84 | 2,744 |
| Mercury | 0.000 | 0.007 | 0.016 |

However, pure natural gas from the wellhead cannot be used directly, as it contains undesirable impurities such as carbon dioxide (CO2) and hydrogen sulphide (H2S) [4]. All of these unwanted substances must be removed, since these acidic gases can corrode the pipeline: CO2 is highly acidic in the presence of water. Furthermore, the presence of CO2 wastes pipeline capacity and reduces the energy content of the natural gas, which lowers its calorific value [5].

Conventionally, natural gas treatment has been dominated by methods such as absorption, adsorption, and cryogenic distillation, but these methods entail high treatment costs owing to the regeneration process, large equipment, and the broad area the equipment occupies [6]. With the advantages of lower capital cost, easy operation, and high CO2 removal percentage, membrane technology offers the best treatment for natural gas [6]. After treatment, natural gas is expected to contain less than 2 vol%, or even less than 2 ppm, of CO2 in order to meet pipeline and commercial specifications [7]. This specification is made to secure the lifetime of the pipeline and to avoid an excessive budget for pipeline replacement.

Membrane technology has received significant attention from various sectors, especially industry and academia, as it offers the most relevant impact in reducing environmental problems and costs. A membrane is defined as a thin layer that separates two phases and restricts the transport of various chemicals in a selective manner [8]; it restricts the penetration of molecules with larger kinetic diameters. The commercial value of a membrane is determined by its transport properties, namely, permeability and selectivity. A major gap of the existing technologies is that they are limited to low CO2 loading (<15 mol%). Ideally, a membrane with both high permeability and high selectivity is required; however, most membranes exhibit high selectivity at low permeability and vice versa, which constitutes the major trade-off of membranes, and none of these technologies can yet treat natural gas containing high CO2 (>80 mol%) [9].
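To illustrate the statement about energy content, a toy calculation suffices: CO2 is inert in combustion, so it dilutes the heating value of the gas roughly in proportion to its mole fraction. The methane heating value used below is an approximate textbook figure and should be treated as an assumption.

```python
# Sketch: dilution of heating value by inert CO2 (approximate figures).
HV_CH4 = 39.7  # MJ/m^3, approximate higher heating value of methane

def heating_value(co2_fraction, hv_fuel=HV_CH4):
    """Heating value of a CH4/CO2 mix, ignoring minor constituents."""
    return (1.0 - co2_fraction) * hv_fuel

for x in (0.02, 0.15, 0.40):
    print(f"{x:.0%} CO2 -> ~{heating_value(x):.1f} MJ/m^3")
```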
## 2. Membrane Technology Development
### 2.1. Early Membrane Development
Membrane technology started as early as 1850, when Graham introduced Graham's law of diffusion, and membrane gas separation was commercialized in the late 1900s: the Permea PRISM membrane, produced in 1980, was the first commercial gas separation membrane [2]. A summary of early membrane development is shown in Figure 1. This innovation led to further development of membrane gas separation, and many of the studies on gas separation focus on natural gas purification.

Figure 1

Membrane development timeline.

Development of membranes for CO2/CH4 separation started in the early 1990s, and in this early stage numerous membranes were fabricated from different kinds of materials. The material must be well suited to the desired separation, since the separation of gases works differently in different materials. An excellent gas separation membrane should combine high separation performance with reasonably high permeability; be chemically, thermally, and mechanically robust; and have a rational production cost [10, 11]. Two types of materials are practically used in gas separation, polymeric membranes and inorganic membranes; a comparison of the two is shown in Table 2.

Table 2
Comparison between polymeric and inorganic membranes.
| | Polymeric membranes | Inorganic membranes |
| --- | --- | --- |
| Materials | Present in either rubbery or glassy type, depending on the operating temperature [12]. | Made from inorganic-based materials such as glass, aluminium, and metal [13]. |
| Characteristics | (i) A polymer is more rigid and hard in the glassy state, while in the rubbery state it is softer and more flexible. (ii) Glassy polymeric membranes exhibit a higher glass transition temperature than rubbery membranes, and glassy types tend to have higher CO2/CH4 selectivity [14]. | (i) Able to withstand solvents and other chemicals and resistant to microbial attack. (ii) Offer significantly higher permeability and selectivity and are more resistant to high pressure and temperature, aggressive feeds, and fouling [15]. |
| Disadvantages | (i) May suffer plasticization when handling high CO2 contents. (ii) The presence of CO2 may reduce membrane performance above a certain pressure. (iii) As the membrane is exposed to CO2, the polymer network swells and segmental mobility increases, which causes a rise in permeability for all gas components [16]. (iv) Components with low permeability experience a larger permeability increment, so the selectivity of the membrane decreases [17–19]. | (i) Inherent brittleness. (ii) Perform well under low pressure, which does not suit natural gas wells, where high pressure is required for exploitation. (iii) High production cost, which is impractical for large industrial applications [20]. |
| Examples | Polyethylene (PE), poly(dimethylsiloxane) (PDMS), polysulfone (PSU), polyethersulfone (PES), polyimide (PI) [21], polycarbonate [22], polyimide [23], polyethers [24], polypyrrolones [25, 26], polysulfones [27], and polyethersulfones [28]. | Aminosilicate membranes [29], carbon-silicalite composite membranes [30], MFI membranes [31], and microporous silica membranes [32]. |

Gas separation using polymeric membranes reached its first commercial scale in the late 1970s, after the demonstration of rubbery membranes back in the 1830s [33]. Literally, the permeability of a gas in a given mixture varies inversely with its separation factor: the tighter the molecular spacing, the higher the separation characteristic of the polymer; however, as the operating pressure increases, the permeability decreases owing to lower diffusion coefficients [34]. Polymeric membranes commercially available for CO2/CH4 separation include polysulfone (PSU), polyethersulfone (PES), polyimide (PI), and many more. Generally, as the permeability of the gas increases, the permselectivity tends to decrease in most polymeric membranes [23].

Inorganic membranes such as SAPO-34 can give higher separation performance than polymeric membranes, but that performance is inversely proportional to the pressure loaded, which may create problems in high-pressure natural gas wells. The performance of both organic and inorganic membranes is summarized in the Robeson plot in Figure 2 [35].

Figure 2
Zeolite (SAPO-34) membrane performance in Robeson’s plot.
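As a minimal numerical companion to the Robeson plot, the sketch below forms trade-off points from permeability pairs: the ideal CO2/CH4 selectivity is simply the permeability ratio. All permeability values are hypothetical placeholders, not data from the cited works.

```python
# Sketch: forming (permeability, selectivity) trade-off points.
membranes = {                                   # permeabilities in Barrer
    "glassy polymer (hypothetical)":  {"P_CO2": 8.0,    "P_CH4": 0.25},
    "rubbery polymer (hypothetical)": {"P_CO2": 3200.0, "P_CH4": 950.0},
    "zeolite MMM (hypothetical)":     {"P_CO2": 550.0,  "P_CH4": 11.0},
}
for name, p in membranes.items():
    alpha = p["P_CO2"] / p["P_CH4"]             # ideal CO2/CH4 selectivity
    print(f"{name}: P_CO2 = {p['P_CO2']:g} Barrer, alpha = {alpha:.1f}")
```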
### 2.2. Conventional Mixed Matrix Membrane
Much research has been done to satisfy gas separation requirements through both polymeric and inorganic membranes. The deficiencies of these membranes have driven researchers to develop an alternative membrane material that is more mechanically stable and economically viable and, most importantly, has high separation performance. The combination of organic and inorganic materials, known as the mixed matrix membrane (MMM), was proposed to obtain better membrane gas separation performance at a reasonable price [36]. MMM fabrication is a promising technology, as the composite material has improved mechanical and electrical properties [37] and combines the exceptional separation ability and stability of molecular sieves with the better processability of organic membranes [38]. The MMM is made by dispersing the inorganic material in a continuous polymeric phase, which can be almost any polymeric material, such as polysulfone, polyimide, or polyethersulfone [39, 40]. Membrane materials can be selected based on the process requirements, and the selected materials can be "tailor-made" to meet a specific separation purpose in a wide range of applications [39]. Attempts to develop polymer-inorganic membranes started a few decades ago.

As Table 3 shows, the selection of materials is important and depends on the system requirements. The higher intrinsic diffusion selectivity of glassy polymers makes them better than rubbery polymers [56]. Although MMMs have proven to enhance selectivity, most MMMs suffer from poor adhesion between the organic matrix and the inorganic particles [55]. Even though MMM fabrication has its disadvantages, research on MMMs with different materials is worth pursuing, since it has proven its ability to deliver high separation performance.

Table 3
A few studies of mixed matrix membranes.

| Year | Organic | Inorganic | Observations | Ref. |
| --- | --- | --- | --- | --- |
| 1973 | Silicone rubber | Molecular sieves | Poor adhesion between the selected organic and inorganic phases leads to poor separation performance; the poor interaction may leave nonselective voids at the interface, which causes insufficient membrane performance [41–43]. | [44] |
| 1992 | Polydimethylsiloxane (PDMS); propylene diene rubber (EPDM) | Silicalite-1, 13X, KY, and zeolite-5A | Zeolites such as silicalite-1, 13X, and KY enhanced the separation performance of the poorly selective rubbery membrane for the carbon dioxide (CO2)/methane (CH4) mixture, whereas zeolite-5A showed no change in gas selectivity and decreased permeability, being impermeable to CO2. | [45] |
| 2000 | Cellulose acetate (CA) | Silicalite, NaX, and AgX | Silicalite did in fact reverse the selectivity of the CA membrane from H2 to CO2 for CO2/H2 separation. | [46] |
| 2000 | Polyvinyl acetate | 4A | Formation of chemical bonds gave good adhesion, but there was still nonselective "leakage" through nanometric regions. | [47] |
| 2003 | Matrimid | Carbon molecular sieves | Selectivity of the CO2/CH4 mixture increased by up to 45%. Zeolite loading affects both gas permeability and gas mixture selectivity; several studies recorded permeability increasing while selectivity decreased as zeolite loading increased [48, 49], and vice versa [42]. | [50] |
| 2006 | Polyethersulfone (PES) | Zeolite 4A | The low mobility of the polymer chains in glassy polymers prevents them from completely covering the zeolite surface, which results in interfacial voids [51, 52]. | [53] |
| 2001 | Polyimide (PI) | Zeolite 13X | | [54] |
| 2008 | Polycarbonate | Zeolite 4A | | [55] |
### 2.3. Recent Development of Membrane Gas Separation
#### 2.3.1. Ionic Liquid-Supported Membrane (ILSM)
In recent years, much research has evaluated ionic liquid-supported membranes (ILSMs) for gas separation, since ionic liquids are materials known to dissolve CO2 and to be stable over high temperature ranges [57]. To be specific, ionic liquids are molten salts that are liquid at room temperature [58]. Furthermore, ionic liquids are of particular interest for membrane gas separation because they are nonflammable, have negligible vapour pressure, and are nonvolatile, which is why they are also known as "green" solvents [58–60]. Extensive research has been carried out to develop room-temperature ionic liquid (RTIL)-based solvents for CO2 separation with various types of ionic liquids, such as pyridinium- and imidazolium-based. Among the RTILs tested, imidazolium-based RTILs were chosen as the most feasible solvents for CO2 separation, as they are commercially viable and easily tuned by tailoring the cation and anion to the system requirements [60].

ILSMs have been proven to offer an increase in permeability that outperforms many neat polymer membranes. ILSMs synthesized from poly(vinylidene fluoride) (PVDF) and 1-butyl-3-methylimidazolium tetrafluoroborate (BMImBF4) showed high CO2 permeation performance and were mechanically stable under high-pressure operation [63]. The consumption of RTILs has increased, especially for 1-R-3-methylimidazolium (Rmim)-based RTILs, which are preferred for being less viscous than other RTILs; in addition, gases like CO2, nitrogen (N2), and other hydrocarbons show high solubility in Rmim-based RTILs [64, 65]. Besides, with Rmim-based RTILs the latent permeability and selectivity for a given gas mixture can be estimated from the molar volume of the RTIL [60] (a small illustrative calculation follows Table 4). RTILs can be functionalized and set up according to the system requirements and application, and the studies summarized in Table 4 are a good benchmark for designing functionalized RTILs efficiently.

Table 4
Effects of ionic liquid functionalization.
| Functionalization | Effects |
| --- | --- |
| Nitrile and alkyne groups | (i) Gas solubility and separation performance were tailored. (ii) The functionalized RTIL solvents displayed decreased CO2, N2, and CH4 solubility; however, the CO2/N2 and CO2/CH4 selectivities increased compared with the nonfunctionalized RTIL [61]. |
| Temperature | (i) As the temperature increases, the CO2 solubility decreases while the CH4 solubility remains unchanged. (ii) The ideal solubility selectivity of mixed gases for CO2/N2, CO2/CH4, and CO2/H2 increased as the temperature decreased [62]. |
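To connect the solubility-selectivity entries of Table 4 to numbers, the sketch below uses Henry's law (a lower Henry's constant means a more soluble gas), so the ideal CO2/gas solubility selectivity is the ratio of Henry's constants. The constants are hypothetical round numbers for an imidazolium-based RTIL, not measured values.

```python
# Sketch: ideal solubility selectivity from assumed Henry's constants.
H = {"CO2": 35.0, "CH4": 420.0, "N2": 1200.0}   # bar, hypothetical RTIL

def solubility_selectivity(gas, henry=H):
    """CO2/gas ideal solubility selectivity, H_gas / H_CO2."""
    return henry[gas] / henry["CO2"]

for gas in ("CH4", "N2"):
    print(f"ideal CO2/{gas} solubility selectivity ~ "
          f"{solubility_selectivity(gas):.0f}")
```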
#### 2.3.2. Polymerized Room Temperature Ionic Liquid Membrane (Poly(RTIL))
Comparatively, RTILs, especially imidazolium-based ones, can also be polymerized into solid, dense, thin-film membranes owing to their modular nature [66–68]. It was a successful breakthrough when researchers found that a polymer made from an ionic liquid monomer had a higher CO2 absorption capacity, with faster absorption and desorption rates, than the neat RTIL [69]. Moreover, poly(RTIL)s also possess higher mechanical strength [66]. These characteristics prove that polymerized ionic liquids (poly(RTIL)s) are also promising materials for membrane gas separation. Polymerizing RTIL monomers while varying the n-alkyl chain length also gave pleasing results: the permeabilities of gases like CO2, N2, and methane (CH4) increased as the n-alkyl group was lengthened [68]. Additionally, a poly(RTIL) can absorb about twice as much CO2 as its liquid analogue, which makes it much better than the molten RTIL [68]. Apparently, the performance of a poly(RTIL) also depends on the substituents attached to it: in a study on the inclusion of a polar oligo(ethylene glycol) unit on the cation of an imidazolium-based RTIL, the separation selectivity increased [70].

As discussed earlier, the mixed matrix membrane is composed of a compatible organic-inorganic pair and demonstrates good separation properties provided there is no interfacial adhesion problem. An improvement of separation performance is therefore expected in an MMM comprising a poly(RTIL) matrix and a zeolite (inorganic) phase. In very recent work, the benefit of MMMs inspired researchers in the ionic liquid membrane field: Hudiono and his coworkers introduced a three-component mixed matrix membrane utilizing a poly(RTIL), an RTIL, and a zeolite [71]. Their research also built on a positive finding by Bara and his coworkers, who showed that the addition of RTIL to a poly(RTIL) increases gas permeability, because gas diffusion becomes more rapid as the free volume of the membrane increases with added RTIL [72].

Hudiono used the RTIL both to increase the membrane permeability and to aid the interaction between the poly(RTIL) and the zeolite (SAPO-34). The result was promising, as the permeabilities of CO2, N2, and CH4 all increased; however, the selectivity slightly decreased, which the authors attributed to the RTIL used, emim[Tf2N], not being selective towards CO2/CH4 separation [71]. Nonetheless, the result proved that the addition of RTIL can improve polymer-zeolite adhesion in an MMM, as the RTIL also acts as a wetting agent for the zeolite.

Hudiono then repeated the experiment, fabricating a three-component mixed matrix membrane while varying the amounts of RTIL and zeolite added in order to determine the optimum composition. The CO2 permeability rose with increasing amounts of RTIL, and the CO2/CH4 selectivity of the MMM improved with the presence of SAPO-34 compared with the neat poly(RTIL)-RTIL membrane, as long as a sufficient amount of RTIL served as the wetting agent. The team also investigated the separation performance of a vinyl-based poly(RTIL), for which the addition of RTIL is not essential, as the two are structurally similar [73].

In contrast, Oral and his coworkers fabricated a ternary MMM from different materials, studying the effect of two RTIL loadings, emim[Tf2N] and emim[CF3SO3], on an MMM composed of polyimide and zeolite (SAPO-34). The addition of emim[Tf2N] performed as expected, increasing the CO2 permeability, while the incorporation of emim[CF3SO3] increased the CO2/CH4 selectivity, since emim[CF3SO3] is selective towards CO2/CH4 [74].
## 2.1. Early Membrane Development
Membrane technology has been started as early as in 1850 when Graham introduced the Graham’s Law of Diffusion. Then, gas separation utilization in membrane technology has been commercialized in late 1900’s. Permea PRISM membrane was the first commercialized gas separation membrane produced in 1980 [2]. Summary of early development of membranes is shown in Figure 1. This innovation has led to the further membrane gas separation development. A lot of studies done by the researchers for various gas separation mostly focus on the natural gas purification.Figure 1
Membrane development timeline.

The development of membranes for CO2/CH4 separation has been under way since the early 1990s. In the early stage of membrane gas separation, a number of membranes were fabricated from different kinds of materials. The material selected must be well suited to the separation at hand, since the same gas pair separates differently in different materials. An excellent gas separation membrane should combine high separation performance with reasonably high permeability, robustness, good chemical, thermal, and mechanical stability, and a rational production cost [10, 11]. Two types of materials are used in practice for gas separation, polymeric and inorganic membranes; the two are compared in Table 2.

Table 2
Comparison between polymeric and inorganic membranes.

| | Polymeric membranes | Inorganic membranes |
| --- | --- | --- |
| Materials | Present in either rubbery or glassy form, depending on the operating temperature [12]. | Made from inorganic-based materials such as glass, aluminium, and metal [13]. |
| Characteristics | (i) A polymer is more rigid and hard in the glassy state, while in the rubbery state it is softer and more flexible. (ii) Glassy polymeric membranes exhibit a higher glass transition temperature than rubbery membranes, and glassy types tend to have higher CO2/CH4 selectivity [14]. | (i) Able to withstand solvents and other chemicals, although susceptible to microbial attack. (ii) Offer significantly higher permeability and selectivity, and are more resistant to high pressure and temperature, aggressive feeds, and fouling [15]. |
| Disadvantages | (i) May suffer plasticization when handling high CO2 content. (ii) The presence of CO2 may reduce membrane performance above a certain pressure. (iii) On exposure to CO2, the polymer network swells and segmental mobility increases, which raises the permeability of all gas components [16]. (iv) The components with low permeability experience the larger permeability increase, so the selectivity of the membrane decreases [17–19]. | (i) Inherently brittle. (ii) Perform well only at low pressure, which does not suit natural gas wells, whose exploitation requires high pressure. (iii) High production cost, which is impractical for large industrial applications [20]. |
| Examples | Polyethylene (PE), poly(dimethylsiloxane) (PDMS), polysulfone (PSU), polyethersulfone (PES), polyimide (PI) [21], polycarbonate [22], polyimide [23], polyethers [24], polypyrrolones [25, 26], polysulfones [27], and polyethersulfones [28]. | Aminosilicate membranes [29], carbon-silicalite composite membranes [30], MFI membranes [31], and microporous silica membranes [32]. |

Gas separation using polymeric membranes reached its first commercial scale in the late 1970s, after the demonstration of rubbery membranes as far back as the 1830s [33]. In general, the permeability of a gas in a given mixture varies inversely with its separation factor: the tighter the molecular spacing of a polymer, the higher its separation characteristic. However, as the operating pressure increases, the permeability decreases because of lower diffusion coefficients [34]. Polymeric membranes commercially available for CO2/CH4 separation include polysulfone (PSU), polyethersulfone (PES), polyimide (PI), and many more. Generally, as the permeability of a gas increases, the permselectivity tends to decrease in most polymeric membranes [23].

An inorganic membrane such as SAPO-34 can give higher separation performance than a polymeric membrane, but its separation performance is inversely proportional to the applied pressure. This can create problems when dealing with a high-pressure natural gas well. The performance of both organic and inorganic membranes is summarized in the Robeson plot in Figure 2 [35].

Figure 2
Zeolite (SAPO-34) membrane performance in Robeson’s plot.
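For orientation, the two figures of merit that a Robeson plot compares can be stated compactly. These are standard solution-diffusion relations, quoted here as background rather than taken from this review: the permeability of a gas is the product of its diffusivity and solubility, and the ideal selectivity for a gas pair is the ratio of the pure-gas permeabilities:

$$P_i = D_i \, S_i, \qquad \alpha^{*}_{\mathrm{CO_2/CH_4}} = \frac{P_{\mathrm{CO_2}}}{P_{\mathrm{CH_4}}}$$

The empirical upper bound traced in such plots is commonly fitted as $\alpha_{A/B} = \beta \, P_A^{-\lambda}$, with $\beta$ and $\lambda$ specific to the gas pair, which is precisely the inverse permeability-selectivity tradeoff described above.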
## 2.2. Conventional Mixed Matrix Membrane
A great deal of research has been done to meet gas separation requirements with both polymeric and inorganic membranes. The deficiencies of these membranes have driven researchers to develop an alternative membrane material that is more mechanically stable and economically viable and, most importantly, offers high separation performance. The combination of organic and inorganic materials, known as a mixed matrix membrane (MMM), was proposed to obtain better gas separation performance at a reasonable price [36]. MMM fabrication is a promising technology because the composite material has improved mechanical and electrical properties [37], and it combines the exceptional separation ability and stability of molecular sieves with the better processability of organic membranes [38]. An MMM is formed by dispersing the inorganic material in a continuous polymeric phase, which can be almost any polymeric material, such as polysulfone, polyimide, or polyethersulfone [39, 40].

Various membrane materials can be selected based on the process requirements. The selected materials can be “tailor-made” to meet a specific separation purpose in a wide range of applications [39]. Many attempts to develop polymer-inorganic membranes began a few decades ago; Table 3 lists several of them.

Table 3 shows that the selection of materials is important and depends on the system requirements. The higher intrinsic diffusion selectivity of glassy polymers makes them better suited than rubbery polymers [56]. Although MMMs have demonstrably enhanced selectivity, most MMMs suffer from poor adhesion between the organic matrix and the inorganic particles [55]. Even though MMM fabrication has its disadvantages, research on MMMs with different materials is worth pursuing, since they have proven their ability to deliver high separation performance.

Table 3
Selected studies of mixed matrix membranes.

| Year | Organic | Inorganic | Observations | Ref. |
| --- | --- | --- | --- | --- |
| 1973 | Silicone rubber | Molecular sieves | Poor adhesion between the chosen organic and inorganic phases leads to poor separation performance. This poor interaction may leave nonselective voids at the interface, which in turn causes insufficient membrane performance [41–43]. | [44] |
| 1992 | Polydimethylsiloxane (PDMS) | Silicalite-1, 13X, KY, and zeolite-5A | Zeolites such as silicalite-1, 13X, and KY enhanced the separation performance of the poorly selective rubbery membrane for the carbon dioxide (CO2)/methane (CH4) mixture. | [45] |
| | Ethylene propylene diene rubber (EPDM) | Zeolite-5A | Zeolite-5A showed no change in gas selectivity, with decreased permeability, owing to its impermeability towards CO2. | |
| 2000 | Cellulose acetate (CA) | Silicalite, NaX, and AgX | Silicalite in fact reversed the selectivity of the CA membrane from H2 to CO2 for CO2/H2 separation. | [46] |
| 2000 | Polyvinyl acetate | Zeolite 4A | Formation of chemical bonds gave good adhesion, but nonselective “leakage” remained because of a nanometric interfacial region. | [47] |
| 2003 | Matrimid | Carbon molecular sieves | Selectivity for the CO2/CH4 mixture increased by up to 45%. Zeolite loading also affects both gas permeability and mixture selectivity; several studies reported permeability increasing while selectivity decreased as zeolite loading was raised [48, 49], and vice versa [42]. | [50] |
| 2006 | Polyethersulfone (PES) | Zeolite 4A | The low mobility of the polymer chains in a glassy polymer prevents them from completely covering the zeolite surface, resulting in interfacial voids [51, 52]. | [53] |
| 2001 | Polyimide (PI) | Zeolite 13X | | [54] |
| 2008 | Polycarbonate | Zeolite 4A | | [55] |
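As context for the loading effects collected in Table 3, the effective permeability of an ideal, void-free MMM is often estimated with the Maxwell model. The model is quoted here as standard background (the review does not apply it), and the permeability values in the sketch are hypothetical placeholders:

```python
def maxwell_permeability(p_c: float, p_d: float, phi: float) -> float:
    """Maxwell-model estimate of the effective permeability of an ideal
    mixed matrix membrane (no interfacial voids, dilute filler).

    p_c : permeability of the continuous polymer phase (e.g., in Barrer)
    p_d : permeability of the dispersed filler phase (same units)
    phi : volume fraction of filler (the model is reliable only for
          dilute loadings, roughly phi < 0.2)
    """
    num = p_d + 2.0 * p_c - 2.0 * phi * (p_c - p_d)
    den = p_d + 2.0 * p_c + phi * (p_c - p_d)
    return p_c * num / den

# Hypothetical placeholder inputs (not data from the review): a polymer
# with P(CO2) = 10 and P(CH4) = 0.4, filled with 15 vol% of a molecular
# sieve with P(CO2) = 100 and P(CH4) = 1.
p_co2 = maxwell_permeability(10.0, 100.0, 0.15)
p_ch4 = maxwell_permeability(0.4, 1.0, 0.15)
print(f"P(CO2) = {p_co2:.1f}, P(CH4) = {p_ch4:.2f}, "
      f"ideal CO2/CH4 selectivity = {p_co2 / p_ch4:.1f}")
```

With these placeholder inputs the model predicts a simultaneous gain in CO2 permeability and CO2/CH4 selectivity over the neat polymer, which is the qualitative target of the well-adhered MMMs in Table 3; the interfacial voids discussed above violate the model's assumptions and erase this gain.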
## 2.3. Recent Development of Membrane Gas Separation
### 2.3.1. Ionic Liquid-Supported Membrane (ILSM)
In recent years, many studies have evaluated ionic liquid-supported membranes (ILSMs) for gas separation, since ionic liquids are known to dissolve CO2 and to be stable over a wide temperature range [57]. Specifically, ionic liquids are molten salts that are liquid at room temperature [58]. They are of particular interest for membrane gas separation because they are nonflammable and nonvolatile with negligible vapour pressure, which is why they are also known as “green” solvents [58–60]. Extensive research has been carried out to develop room-temperature ionic liquid (RTIL)-based solvents for CO2 separation using various types of ionic liquids, such as pyridinium- and imidazolium-based ones. Among the RTILs tested, imidazolium-based RTILs were chosen as the most feasible solvents for CO2 separation, as they are commercially viable and easily tuned by tailoring the cation and anion to the system requirements [60].

ILSMs have been shown to offer an increase in permeability that outperforms many neat polymer membranes. ILSMs synthesized from poly(vinylidene fluoride) (PVDF) and 1-butyl-3-methylimidazolium tetrafluoroborate (BMImBF4) showed high CO2 permeation performance and remained mechanically stable under high-pressure operation [63]. The use of RTILs has increased, especially 1-R-3-methylimidazolium (Rmim)-based RTILs, which are preferred because they are less viscous than other RTILs. In addition, gases such as CO2, nitrogen (N2), and other hydrocarbons show high solubility in Rmim-based RTILs [64, 65]. Moreover, for Rmim-based RTILs the latent permeability and selectivity for a given gas mixture can be estimated from the molar volume of the RTIL [60]. An RTIL can be functionalized and tuned to the system requirements and application, and the studies summarized in Table 4 provide a good benchmark for designing functionalized RTILs efficiently.

Table 4
Effects of ionic liquid functionalization.

| Functionalization | Effects |
| --- | --- |
| Nitrile and alkyne groups | (i) Gas solubility and separation performance can be tailored. (ii) Functionalized RTIL solvents displayed decreased CO2, N2, and CH4 solubility, but the CO2/N2 and CO2/CH4 selectivities increased compared with the nonfunctionalized RTIL [61]. |
| Temperature | (i) As the temperature increases, the CO2 solubility decreases while the CH4 solubility remains unchanged. (ii) The ideal solubility selectivity of the gas pairs CO2/N2, CO2/CH4, and CO2/H2 increased as the temperature decreased [62]. |
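The temperature trend in Table 4 is the behaviour expected from a van't Hoff treatment of Henry's-law solubility. The sketch below is illustrative only: the review reports just the qualitative trend, and the Henry's constants and dissolution enthalpy used here are hypothetical placeholders (CO2 dissolution in RTILs is exothermic, so its `dh_sol` is negative, while CH4 is treated as nearly athermal, matching the "unchanged" CH4 solubility in Table 4):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def henry_constant(h0: float, t0: float, t: float, dh_sol: float) -> float:
    """van't Hoff extrapolation of a Henry's-law constant H (bar).

    A larger H means lower solubility. With dh_sol < 0 (exothermic
    dissolution), H grows and solubility falls as temperature rises.
    """
    return h0 * math.exp(dh_sol / R * (1.0 / t - 1.0 / t0))

# Hypothetical placeholders: H(CO2) = 40 bar at 300 K with
# dH_sol = -12 kJ/mol; H(CH4) = 600 bar, roughly temperature-independent.
for t in (300.0, 320.0, 340.0):
    h_co2 = henry_constant(40.0, 300.0, t, -12_000.0)
    h_ch4 = henry_constant(600.0, 300.0, t, 0.0)
    print(f"T = {t:.0f} K: H(CO2) = {h_co2:.0f} bar, "
          f"solubility selectivity CO2/CH4 = {h_ch4 / h_co2:.1f}")
```

The computed solubility selectivity falls as temperature rises, reproducing the direction of the effect reported in Table 4 [62].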
### 2.3.2. Polymerized Room Temperature Ionic Liquid Membrane (Poly(RTIL))
Comparatively, RTILs, especially imidazolium-based ones, can also be polymerized into solid, dense, thin-film membranes because of their modular nature [66–68]. It was a notable breakthrough when researchers found that a polymer made from an ionic liquid monomer had a higher CO2 absorption capacity, with faster absorption and desorption rates, than the neat RTIL [69]. Moreover, poly(RTIL) also offers higher mechanical strength [66]. These characteristics show that polymerized ionic liquids (poly(RTIL)s) are likewise promising materials for membrane gas separation. Polymerizing RTIL monomers of varying n-alkyl chain length also gave pleasing results: the permeability of gases such as CO2, N2, and methane (CH4) increased as the n-alkyl group was lengthened [68]. Additionally, poly(RTIL) can absorb about twice as much CO2 as its liquid analogue, which makes it much better than molten RTIL [68]. The performance of poly(RTIL) also depends on the substituents attached to it: in a study that included a polar oligo(ethylene glycol) unit on the cation of an imidazolium-based RTIL, the separation selectivity increased [70].

As discussed earlier, a mixed matrix membrane is composed of a compatible organic-inorganic pair and shows good separation properties provided there is no interfacial adhesion problem. An improvement in separation performance is therefore expected in an MMM comprising poly(RTIL) as the polymer matrix and a zeolite as the inorganic filler. In very recent work, the benefits of MMMs have inspired researchers in the ionic liquid membrane field: Hudiono and coworkers introduced a three-component mixed matrix membrane combining poly(RTIL), RTIL, and zeolite [71]. Their work built on a positive finding by Bara and coworkers, who showed that adding RTIL to poly(RTIL) increases gas permeability, because gas diffusion becomes more rapid as the added RTIL increases the free volume of the membrane [72] (a standard free-volume relation illustrating this is sketched at the end of this subsection).

Hudiono used the RTIL both to increase the membrane permeability and to promote better interaction between the poly(RTIL) and the zeolite (SAPO-34). The result was promising, as the permeabilities of CO2, N2, and CH4 all increased. However, the selectivity decreased slightly, which the authors attributed to the RTIL used, emim[Tf2N], not being selective towards CO2/CH4 separation [71]. Nonetheless, the result showed that adding RTIL can improve polymer-zeolite adhesion in an MMM, since the RTIL also acts as a wetting agent for the zeolite.

Hudiono then repeated the experiment, fabricating a three-component mixed matrix membrane while varying the amounts of RTIL and zeolite in order to determine the optimum composition. The CO2 permeability rose with increasing RTIL content. The CO2/CH4 selectivity of the MMM also improved in the presence of SAPO-34, compared with the neat poly(RTIL)-RTIL membrane, as long as there was sufficient RTIL to act as the wetting agent. The team also investigated the separation performance of a vinyl-based poly(RTIL), for which the addition of RTIL is not essential because the two are structurally similar [73].

In contrast, Oral and coworkers fabricated a ternary MMM from different materials. Their project studied the effect of two different RTIL loadings, emim[Tf2N] and emim[CF3SO3], on an MMM composed of polyimide and zeolite (SAPO-34). The addition of emim[Tf2N] performed as expected, increasing the CO2 permeability, while the incorporation of emim[CF3SO3] increased the CO2/CH4 selectivity, since emim[CF3SO3] is selective towards CO2/CH4 [74].
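The free-volume argument above is usually made quantitative with a Cohen-Turnbull/Fujita-type expression; this is standard polymer-physics background rather than a relation used in the cited works:

$$D \approx A \exp\!\left(-\frac{B}{f}\right)$$

where $f$ is the fractional free volume of the membrane and $A$, $B$ are penetrant-dependent constants. Because $f$ sits in the exponent, even a modest increase in free volume from free RTIL can raise gas diffusivity, and hence permeability, substantially.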
## 3. Conclusion
The escalating research on membrane fabrication for gas separation applications signifies that membrane technology is growing and becoming a major focus for industrial gas separation processes. The latest research on mixed matrix membranes combines flexibility and low capital cost with improved selectivity, permeability, and chemical, thermal, and mechanical strength. Material selection and the method of preparation are the most important parts of fabricating a membrane, so future research must choose the materials for gas separation, and the methods applied in the fabrication stage, very carefully. Even though the synthesized MMMs have been tested only on a small scale, MMM research is worth exploring further, since MMMs have shown better separation performance than polymeric and inorganic membranes.
---
*Source: 101746-2012-12-04.xml*
## Abstract
In the last few decades, membrane technology has been a great attention for gas separation technology especially for natural gas sweetening. The intrinsic character of membranes makes them fit for process escalation, and this versatility could be the significant factor to induce membrane technology in most gas separation areas. Membranes were synthesized with various materials which depended on the applications. The fabrication of polymeric membrane was one of the fastest growing fields of membrane technology. However, polymeric membranes could not meet the separation performances required especially in high operating pressure due to deficiencies problem. The chemistry and structure of support materials like inorganic membranes were also one of the focus areas when inorganic membranes showed some positive results towards gas separation. However, the materials are somewhat lacking to meet the separation performance requirement. Mixed matrix membrane (MMM) which is comprising polymeric and inorganic membranes presents an interesting approach for enhancing the separation performance. Nevertheless, MMM is yet to be commercialized as the material combinations are still in the research stage. This paper highlights the potential promising areas of research in gas separation by taking into account the material selections and the addition of a third component for conventional MMM.
---
## Body
## 1. Introduction
Natural gas can be considered as the largest fuel source required after the oil and coal [1]. Nowadays, the consumption of natural gas is not only limited to the industry, but natural gas is also extensively consumed by the power generation and transportation sector [2]. These phenomena supported the idea of going towards sustainability and green technology as the natural gas is claimed to generate less-toxic gases like carbon dioxide (CO2) and nitrogen oxides (NOx) upon combustion as shown in Table 1 [3].Table 1
Fossil fuel emission levels (pounds per billion Btu of energy input).
Fuel sources/pollutant(pound/BTU)
Natural gas
Oil
Coal
Carbon dioxide
117,000
164,000
208,000
Carbon monoxide
40
33
208
Nitrogen oxides
92
448
457
Sulphur dioxide
1
1,122
2,591
Particulates
7
84
2,744
Mercury
0.000
0.007
0.016However, pure natural gas from the wellhead cannot directly be used as it contains undesirable impurities such as carbon dioxide (CO2) and hydrogen sulphide (H2S) [4]. All of these unwanted substances must be removed as these toxic gases could corrode the pipeline since CO2 is highly acidic in the presence of water. Furthermore, the existence of CO2 may waste the pipeline capacity and reduce the energy content of natural gas which eventually lowers the calorific value of natural gas [5].Conventionally, natural gas treatment was predominated with some methods such as absorption, adsorption, and cryogenic distillation. But these methods require high treatment cost due to regeneration process, large equipments, and broad area for the big equipments [6]. With the advantages of lower capital cost, easy operation process, and high CO2 removal percentage, membrane technology offers the best treatment for natural gas [6]. Natural gas is expected to contain less than 2 vol% or less than 2 ppm of CO2 after the natural gas treatment in order to meet the pipeline and commercial specification [7]. This specification is made to secure the lifetime of the pipeline and to avoid an excessive budget for pipeline replacement.Membrane technology has received significant attention from various sectors especially industries and academics in their research as it gives the most relevant impact in reducing the environmental problem and costs. Membrane is defined as a thin layer, which separates two phases and restricts transport of various chemicals in a selective manner [8]. Membrane restricts the penetration of some molecules that have bigger kinetic diameter. The commercial value of membrane is determined by the membrane’s transport properties which are permeability and selectivity. Major gap of the existing technologies is limited to low CO2 loading (<15 mol%). Ideally, we required high permeability and high selectivity of membrane, but, however, most membranes exhibit high selectivity in low permeability and vice versa which make this is as a major tradeoff of membranes, and none of these technologies are yet to treat natural gas containing high CO2 (>80 mol%) [9].
## 2. Membrane Technology Development
### 2.1. Early Membrane Development
Membrane technology has been started as early as in 1850 when Graham introduced the Graham’s Law of Diffusion. Then, gas separation utilization in membrane technology has been commercialized in late 1900’s. Permea PRISM membrane was the first commercialized gas separation membrane produced in 1980 [2]. Summary of early development of membranes is shown in Figure 1. This innovation has led to the further membrane gas separation development. A lot of studies done by the researchers for various gas separation mostly focus on the natural gas purification.Figure 1
Membrane development timeline.Development of membrane for CO2/CH4 separation has been started since early 1990’s. Numbers of membranes were fabricated using different kind of materials in the early stage of this membrane gas separation. The desirable material selected must be well suited to the separation performance by which mean separation of gases works contrarily in different materials. Excellent gas membranes separation should have the characteristic of high separation performance with reasonable high permeability, high robustness, chemically, thermally, and mechanically good and rational production cost [10, 11]. Two types of materials are practically used in gas separation: polymeric membrane and inorganic membrane and the comparison of both polymeric and inorganic membranes is showed in Table 2.Table 2
Comparison between polymeric and inorganic membranes.
Polymeric membranes
Inorganic membranes
Materials
Present in either rubbery or glassy type which depends on the operating temperature [12].
Made from inorganic-based material like glass, aluminium, and metal [13].
Characteristics
(i) Polymer is more rigid and hard in glassy state while in rubbery state it is more soft and flexible.(ii) Glassy polymeric membranes exhibit higher glass transition temperature compared to rubbery membranes, and glassy types tend to have higher CO2/CH4 selectivity [14].
(i) Able to withstand with solvent and other chemicals and also susceptible to microbial attack.(ii) Comprise significantly higher permeability and selectivity, but they are also more resistant towards higher pressure and temperature, aggressive feeds, and fouling effects [15].
Disadvantages
(i) May have plasticization problem when handling high CO2. (ii) Presence of CO2 may result in membrane performance reduction at certain elevated pressure.(iii) As the membranes expose to CO2, polymer network in the membrane will swell, and segmental mobility will also increase which consequently cause a rise in permeability for all gas components [16].(iv) The components with low permeability characteristic will experience more permeability increment; thus, the selectivity of the membrane will definitely decrease [17–19].
(i) Inherent brittleness characteristic.(ii) Performed well under low pressure which does not suit the natural gas well which required high pressure for the exploration.(iii) High production cost which seems not practical for large industrial applications [20].
Examples
Polyethylene (PE), poly(dimethylsiloxane) (PDMS), polysulfone (PSU), polyethersulfone (PES), polyimide (PI) [21], polycarbonate [22], polyimide [23], polyethers [24], polypyrrolones [25, 26], polysulfones [27], and polyethersulfones [28].
Aminoslicate membrane [29], carbon-silicalite composite membrane [30], MFI membranes [31], and microporous silica membranes [32].Gas separation using polymeric membranes has taken its first commercial scale in late 1970’s after the demonstration of rubbery membranes back in 1830’s [33]. Literally, the permeability of gas in a specific gas mixture varies inversely with its separation factor. The tighter of molecular spacing it has, the higher the separation characteristic of the polymer, but, however, as the operating pressure increases, the permeability is decreasing due to experiencing lower diffusion coefficients [34]. Polymeric membranes that are commercially available for CO2/CH4 separation include polysulfone (PSU), polyetehrsulfone (PES), polyamide (PI) and many more. Generally, as the permeability of the gas increases, the permselectivity was attended to decrease in most cases of polymeric membranes [23].Inorganic membrane like SAPO-34 could give higher separation performance compared to the polymeric membrane, but the separation performance is inversely proportional to the pressure loaded. This observation may create problem when we deal with high pressure natural gas well. The performance of both organic and inorganic membrane is summarized in Robeson’s plot as in Figure2 [35].Figure 2
Zeolite (SAPO-34) membrane performance in Robeson’s plot.
### 2.2. Conventional Mixed Matrix Membrane
A lot of researches have been done to satisfy the needs of gas separation requirement through both polymeric and inorganic membranes. The deficiencies of these membranes have driven the researchers to develop an alternative material for membrane which is more mechanically stable and economic viable, and most important is having high separation performance. The combination of organic and inorganic material which is known as mixed matrix membrane (MMM) was then proposed in idea to get a better membrane gas separation performance at reasonable price [36]. The fabrication of MMM was a promising technology as this composite material has improved its mechanical and electrical properties [37], and it combines the exceptional separation ability and pleasant stability of molecular sieves with better processability of organic membrane [38]. The MMM is characterized by dispersing the inorganic material into the continuous phase of polymeric material which can be almost any polymeric material such as polysulfone, polyimide, and polyethersulfones [39, 40].Various membrane materials can be selected based on the process requirement. Selected materials can be “tailored-made” in order to meet the specific separation purpose in a wide range of application [39]. There were many attempts of developing polymer-inorganic membrane that started few decades back then.Based on Table3, this was observed that the selection of materials is important, and it depends on the system requirement. Higher intrinsic diffusion selectivity characteristic of glassy polymer makes this material better than rubbery polymer [56]. Although MMM has proven an enhancement of selectivity, it was noticed that most MMMs were endured with poor adhesion between the organic matrix and inorganic particles [55]. Even MMM fabrication does have its disadvantages, but the research of MMM with different materials is worth to work on since it has proven its ability to have high separation performance.Table 3
Few researches of mixed matrix membranes.
Year
Mixed matrix membrane (MMM)
Observations
Ref.
Organic
Inorganic
1973
Silicon rubber
Molecular sieves
Poor adhesion of organic and inorganic selected leads to poor separation performance.This poor interaction of both materials may result in nonselective voids present at the interface which consequently causes insufficient membrane performance [41–43].
[44]
1992
Polydimethylsiloxane (PDMS)
Silicalite-1, 13X, KY, and zeolite-5A
Zeolite like silicalite-1, 13X, and KY have enhanced the separation performance of poorly selective rubbery membrane for the carbon dioxide (CO2) and methane (CH4) mixture.
[45]
Propylene diene rubber (EPDM)
Zeolite-5A showed no change in gas selectivity with decrease permeability due to impermeable characteristic towards CO2.
2000
Cellulose acetate (CA)
Silicalite, NaX, and AgX
Silicalite did in fact reverse the selectivity of CA membrane from H2to CO2for CO2/H2separation.
[46]
2000
Polyvinyl acetate
4A
Formation of chemical bonds gave good adhesion, but there is still nonselective “leakage” from the existence of nanometric region.
[47]
2003
Matrimid
Carbon molecular sieves
Selectivity of CO2/CH4 mixture has increased up to 45%.Zeolites loading also affects both gas permeability and gas mixture selectivity. There were also a number of records where permeability increased with selectivity decreased as the zeolites loading was increased [48, 49] and vice versa [42].
[50]
2006
Polyethersulfone (PES)
Zeolite 4A
Due to low mobility of the polymer chain in glassy polymer such as to prevent them to completely cover the zeolites surface which resulted in void interface [51, 52].
[53]
2001
Polyimide (PI)
Zeolite 13X
[54]
2008
Polycarbonate
Zeolite 4A
[55]
### 2.3. Recent Development of Membrane Gas Separation
#### 2.3.1. Ionic Liquid-Supported Membrane (ILSM)
In recent years, many researches have been evaluated on the ionic liquid supported membrane (ILSM) for gas separation membrane since ionic liquids are known materials that could dissolve CO2 and stable at high temperature ranges [57]. To be specific, ionic liquids are molten salt that are liquid at room temperature [58]. Furthermore, ionic liquids are of particular interest for membrane gas separation application as they are inflammable, negligible vapour pressure, and nonvolatile which make them also known as “green” solvents [58–60]. Extensive researches have been carried out to develop room temperature ionic liquid (RTIL)-based solvents for CO2 separation with various types of ionic liquids such as pyridinium and imidazolium based. Among RTILs tested, imidazolium-based RTIL was chosen as the most feasible solvent for CO2 separation as they are commercially viable and easily tunable by tailoring the cation and anion to meet the system requirements [60].ILSMs have been proven that they offered an increase in permeability that outperforms many neat polymer membranes. ILSMs synthesized from poly(vinylidene fluoride) (PVDF) and 1-butyl-3-methylimidazolium tetrafluororate (BMImBF4) showed high permeation performance of CO2 and mechanically stable while operating at high pressure condition [63]. The consumption of RTILs showed an increment especially for 1-R-3-methylimidazolium (R-mim)-based RTILs as this type is preferable due to its properties of less viscous compared to other RTILs. In addition, gases like CO2, nitrogen (N2), and other hydrocarbons demonstrated high solubility in Rmim-based RTILs [64, 65]. Besides, the use of Rmim-based RTILs could calculate the latent permeability and selectivity of the mixture of given gases by using the molar volume of these RTILs [60]. RTIL can be functionalized and set up in according to the system requirement and application, and these researches could be good benchmark for designing the functionalized RTIL efficiently as showed in Table 4.Table 4
Effects of ionic liquid functionalization.
Functionalization
Effects
Nitrile and alkyne group
(i) Gas solubility and separation performance have been tailored.(ii) Functionalized RTIL solvents displayed a decreasing in CO2, N2, and CH4 solubility, but, however, the selectivity of CO2/N2 and CO2/CH4 increased when compared to the nonfunctionalized RTIL [61].
Temperature
(i) As the temperature increases, the CO2 solubility is decreasing while the CH4 solubility remains unchanged.(ii) The ideal solubility selectivity of mix gases for CO2/N2, CO2/CH4, and CO2/H2increased as the temperature decreased [62].
#### 2.3.2. Polymerized Room Temperature Ionic Liquid Membrane (Poly(RTIL))
Comparatively, RTIL especially imidazolium based can be also polymerized into a solid, dense, and thin film membrane due to their modular nature [66–68]. It was a successful breakthrough when the researcher found that polymer from ionic liquid monomer had higher CO2 absorption capacity with faster absorption and desorption rate compared to the neat RTIL [69]. Moreover, poly(RTIL) is also attributed with higher mechanical strength [66]. These characters have proven that polymerized ionic liquid (poly(RTIL)) is also a promising material for membrane gas separation. Polymerization of RTIL monomer by varying the n-alkyl length also showed a pleasant result when increase of permeability of given gases like CO2, N2, and methane (CH4) was observed as the n-alkyl group was lengthened [68]. Additionally, poly(RTIL) is also up to extend when it practically absorb about twice as much CO2 as their liquid analogue which makes it much better than molten RTIL [68]. Apparently, performance of poly(RTIL) also depends on the substituent attached to it. In a research done on the inclusion of a polar oligo(ethylene glycol) on the cation side of imidazolium-based RTIL, the separation selectivity has seemed to increase [70].As discussed earlier, mixed matrix membrane is a known membrane that composed of a compatible organic-inorganic pair which demonstrated having good separation properties subject to no interfacial adhesion problem. The improvement of separation performance is expected in an MMM comprising poly(RTIL) (polymer matrix) and zeolite (inorganic). In a very recent work, the benefit of MMM has become an idea to the researcher in ionic liquid membrane field. Hudiono and his coworkers have introduced a three-component mixed matrix membrane by utilizing the poly(RTIL), RTIL, and zeolite [71]. Their research was also based on a positive finding by Bara and his coworkers when they found that the addition of RTIL in poly(RTIL) has increased the gas permeability. This is due to that more rapid gas diffusion occurred as the free volume of membrane increased when RTIL was added [72].On the other hand, Hudiono has used the RTIL to increase the membrane permeability and also to act as an aid for better interaction between the poly(RTIL) and zeolite (SAPO-34). The result was promising as the permeability of given gases like CO2, N2, and CH4 increased accordingly. However, the selectivity was slightly decrease as they claimed that the RTIL used which is emim[Tf2N] was not selective towards CO2/CH4 separation [71]. Nonetheless, the result proved that the addition of RTIL could increase the polymer-zeolite adhesion in MMM as RTIL also acts as the wetting agent for the zeolite.Hudiono again repeated the same experiment fabricating a three-component mixed matrix membrane but by varying the composition of RTIL and zeolite added in order to determine the optimum condition for the membrane. The CO2 permeability seems to rise with the increasing amount of RTIL. The CO2/CH4 selectivity of the MMM also improved with the presence of SAPO-34 compared to neat poly(RTIL)-RTIL membrane as long as there is sufficient amount of RTIL as the wetting agent. Besides, the team also conducted an investigation of the separation performance by using the vinyl-based poly(RTIL). The addition of RTIL is not essential as they are structurally similar [73].In contrast, a ternary MMM has been fabricated by Oral and his coworkers by using different materials. 
The project study on the effect of different RTIL loadings which are emim[Tf2N] and emim[CF3SO3] towards MMM composed of polyimide-zeolite (SAPO-34). The addition of emim[Tf2N] has performed as expected when the permeability of CO2 increased while the incorporation of emim[CF3SO3] has increased the CO2/CH4 selectivity since emim[CF3SO3] is selective towards CO2/CH4 [74].
## 2.1. Early Membrane Development
Membrane technology has been started as early as in 1850 when Graham introduced the Graham’s Law of Diffusion. Then, gas separation utilization in membrane technology has been commercialized in late 1900’s. Permea PRISM membrane was the first commercialized gas separation membrane produced in 1980 [2]. Summary of early development of membranes is shown in Figure 1. This innovation has led to the further membrane gas separation development. A lot of studies done by the researchers for various gas separation mostly focus on the natural gas purification.Figure 1
Membrane development timeline.Development of membrane for CO2/CH4 separation has been started since early 1990’s. Numbers of membranes were fabricated using different kind of materials in the early stage of this membrane gas separation. The desirable material selected must be well suited to the separation performance by which mean separation of gases works contrarily in different materials. Excellent gas membranes separation should have the characteristic of high separation performance with reasonable high permeability, high robustness, chemically, thermally, and mechanically good and rational production cost [10, 11]. Two types of materials are practically used in gas separation: polymeric membrane and inorganic membrane and the comparison of both polymeric and inorganic membranes is showed in Table 2.Table 2
Comparison between polymeric and inorganic membranes.
Polymeric membranes
Inorganic membranes
Materials
Present in either rubbery or glassy type which depends on the operating temperature [12].
Made from inorganic-based material like glass, aluminium, and metal [13].
Characteristics
(i) Polymer is more rigid and hard in glassy state while in rubbery state it is more soft and flexible.(ii) Glassy polymeric membranes exhibit higher glass transition temperature compared to rubbery membranes, and glassy types tend to have higher CO2/CH4 selectivity [14].
(i) Able to withstand with solvent and other chemicals and also susceptible to microbial attack.(ii) Comprise significantly higher permeability and selectivity, but they are also more resistant towards higher pressure and temperature, aggressive feeds, and fouling effects [15].
Disadvantages
(i) May have plasticization problem when handling high CO2. (ii) Presence of CO2 may result in membrane performance reduction at certain elevated pressure.(iii) As the membranes expose to CO2, polymer network in the membrane will swell, and segmental mobility will also increase which consequently cause a rise in permeability for all gas components [16].(iv) The components with low permeability characteristic will experience more permeability increment; thus, the selectivity of the membrane will definitely decrease [17–19].
(i) Inherent brittleness characteristic.(ii) Performed well under low pressure which does not suit the natural gas well which required high pressure for the exploration.(iii) High production cost which seems not practical for large industrial applications [20].
Examples
Polyethylene (PE), poly(dimethylsiloxane) (PDMS), polysulfone (PSU), polyethersulfone (PES), polyimide (PI) [21], polycarbonate [22], polyimide [23], polyethers [24], polypyrrolones [25, 26], polysulfones [27], and polyethersulfones [28].
Aminoslicate membrane [29], carbon-silicalite composite membrane [30], MFI membranes [31], and microporous silica membranes [32].Gas separation using polymeric membranes has taken its first commercial scale in late 1970’s after the demonstration of rubbery membranes back in 1830’s [33]. Literally, the permeability of gas in a specific gas mixture varies inversely with its separation factor. The tighter of molecular spacing it has, the higher the separation characteristic of the polymer, but, however, as the operating pressure increases, the permeability is decreasing due to experiencing lower diffusion coefficients [34]. Polymeric membranes that are commercially available for CO2/CH4 separation include polysulfone (PSU), polyetehrsulfone (PES), polyamide (PI) and many more. Generally, as the permeability of the gas increases, the permselectivity was attended to decrease in most cases of polymeric membranes [23].Inorganic membrane like SAPO-34 could give higher separation performance compared to the polymeric membrane, but the separation performance is inversely proportional to the pressure loaded. This observation may create problem when we deal with high pressure natural gas well. The performance of both organic and inorganic membrane is summarized in Robeson’s plot as in Figure2 [35].Figure 2
Zeolite (SAPO-34) membrane performance in Robeson’s plot.
## 2.2. Conventional Mixed Matrix Membrane
A lot of researches have been done to satisfy the needs of gas separation requirement through both polymeric and inorganic membranes. The deficiencies of these membranes have driven the researchers to develop an alternative material for membrane which is more mechanically stable and economic viable, and most important is having high separation performance. The combination of organic and inorganic material which is known as mixed matrix membrane (MMM) was then proposed in idea to get a better membrane gas separation performance at reasonable price [36]. The fabrication of MMM was a promising technology as this composite material has improved its mechanical and electrical properties [37], and it combines the exceptional separation ability and pleasant stability of molecular sieves with better processability of organic membrane [38]. The MMM is characterized by dispersing the inorganic material into the continuous phase of polymeric material which can be almost any polymeric material such as polysulfone, polyimide, and polyethersulfones [39, 40].Various membrane materials can be selected based on the process requirement. Selected materials can be “tailored-made” in order to meet the specific separation purpose in a wide range of application [39]. There were many attempts of developing polymer-inorganic membrane that started few decades back then.Based on Table3, this was observed that the selection of materials is important, and it depends on the system requirement. Higher intrinsic diffusion selectivity characteristic of glassy polymer makes this material better than rubbery polymer [56]. Although MMM has proven an enhancement of selectivity, it was noticed that most MMMs were endured with poor adhesion between the organic matrix and inorganic particles [55]. Even MMM fabrication does have its disadvantages, but the research of MMM with different materials is worth to work on since it has proven its ability to have high separation performance.Table 3
Few researches of mixed matrix membranes.
Year
Mixed matrix membrane (MMM)
Observations
Ref.
Organic
Inorganic
1973
Silicon rubber
Molecular sieves
Poor adhesion of organic and inorganic selected leads to poor separation performance.This poor interaction of both materials may result in nonselective voids present at the interface which consequently causes insufficient membrane performance [41–43].
[44]
1992
Polydimethylsiloxane (PDMS)
Silicalite-1, 13X, KY, and zeolite-5A
Zeolite like silicalite-1, 13X, and KY have enhanced the separation performance of poorly selective rubbery membrane for the carbon dioxide (CO2) and methane (CH4) mixture.
[45]
Propylene diene rubber (EPDM)
Zeolite-5A showed no change in gas selectivity with decrease permeability due to impermeable characteristic towards CO2.
2000
Cellulose acetate (CA)
Silicalite, NaX, and AgX
Silicalite did in fact reverse the selectivity of CA membrane from H2to CO2for CO2/H2separation.
[46]
2000
Polyvinyl acetate
4A
Formation of chemical bonds gave good adhesion, but there is still nonselective “leakage” from the existence of nanometric region.
[47]
2003
Matrimid
Carbon molecular sieves
Selectivity of CO2/CH4 mixture has increased up to 45%.Zeolites loading also affects both gas permeability and gas mixture selectivity. There were also a number of records where permeability increased with selectivity decreased as the zeolites loading was increased [48, 49] and vice versa [42].
[50]
2006
Polyethersulfone (PES)
Zeolite 4A
Due to low mobility of the polymer chain in glassy polymer such as to prevent them to completely cover the zeolites surface which resulted in void interface [51, 52].
[53]
2001
Polyimide (PI)
Zeolite 13X
[54]
2008
Polycarbonate
Zeolite 4A
[55]
## 2.3. Recent Development of Membrane Gas Separation
### 2.3.1. Ionic Liquid-Supported Membrane (ILSM)
In recent years, many researches have been evaluated on the ionic liquid supported membrane (ILSM) for gas separation membrane since ionic liquids are known materials that could dissolve CO2 and stable at high temperature ranges [57]. To be specific, ionic liquids are molten salt that are liquid at room temperature [58]. Furthermore, ionic liquids are of particular interest for membrane gas separation application as they are inflammable, negligible vapour pressure, and nonvolatile which make them also known as “green” solvents [58–60]. Extensive researches have been carried out to develop room temperature ionic liquid (RTIL)-based solvents for CO2 separation with various types of ionic liquids such as pyridinium and imidazolium based. Among RTILs tested, imidazolium-based RTIL was chosen as the most feasible solvent for CO2 separation as they are commercially viable and easily tunable by tailoring the cation and anion to meet the system requirements [60].ILSMs have been proven that they offered an increase in permeability that outperforms many neat polymer membranes. ILSMs synthesized from poly(vinylidene fluoride) (PVDF) and 1-butyl-3-methylimidazolium tetrafluororate (BMImBF4) showed high permeation performance of CO2 and mechanically stable while operating at high pressure condition [63]. The consumption of RTILs showed an increment especially for 1-R-3-methylimidazolium (R-mim)-based RTILs as this type is preferable due to its properties of less viscous compared to other RTILs. In addition, gases like CO2, nitrogen (N2), and other hydrocarbons demonstrated high solubility in Rmim-based RTILs [64, 65]. Besides, the use of Rmim-based RTILs could calculate the latent permeability and selectivity of the mixture of given gases by using the molar volume of these RTILs [60]. RTIL can be functionalized and set up in according to the system requirement and application, and these researches could be good benchmark for designing the functionalized RTIL efficiently as showed in Table 4.Table 4
Effects of ionic liquid functionalization.
Functionalization
Effects
Nitrile and alkyne group
(i) Gas solubility and separation performance have been tailored.(ii) Functionalized RTIL solvents displayed a decreasing in CO2, N2, and CH4 solubility, but, however, the selectivity of CO2/N2 and CO2/CH4 increased when compared to the nonfunctionalized RTIL [61].
Temperature
(i) As the temperature increases, the CO2 solubility is decreasing while the CH4 solubility remains unchanged.(ii) The ideal solubility selectivity of mix gases for CO2/N2, CO2/CH4, and CO2/H2increased as the temperature decreased [62].
### 2.3.2. Polymerized Room Temperature Ionic Liquid Membrane (Poly(RTIL))
Comparatively, RTIL especially imidazolium based can be also polymerized into a solid, dense, and thin film membrane due to their modular nature [66–68]. It was a successful breakthrough when the researcher found that polymer from ionic liquid monomer had higher CO2 absorption capacity with faster absorption and desorption rate compared to the neat RTIL [69]. Moreover, poly(RTIL) is also attributed with higher mechanical strength [66]. These characters have proven that polymerized ionic liquid (poly(RTIL)) is also a promising material for membrane gas separation. Polymerization of RTIL monomer by varying the n-alkyl length also showed a pleasant result when increase of permeability of given gases like CO2, N2, and methane (CH4) was observed as the n-alkyl group was lengthened [68]. Additionally, poly(RTIL) is also up to extend when it practically absorb about twice as much CO2 as their liquid analogue which makes it much better than molten RTIL [68]. Apparently, performance of poly(RTIL) also depends on the substituent attached to it. In a research done on the inclusion of a polar oligo(ethylene glycol) on the cation side of imidazolium-based RTIL, the separation selectivity has seemed to increase [70].As discussed earlier, mixed matrix membrane is a known membrane that composed of a compatible organic-inorganic pair which demonstrated having good separation properties subject to no interfacial adhesion problem. The improvement of separation performance is expected in an MMM comprising poly(RTIL) (polymer matrix) and zeolite (inorganic). In a very recent work, the benefit of MMM has become an idea to the researcher in ionic liquid membrane field. Hudiono and his coworkers have introduced a three-component mixed matrix membrane by utilizing the poly(RTIL), RTIL, and zeolite [71]. Their research was also based on a positive finding by Bara and his coworkers when they found that the addition of RTIL in poly(RTIL) has increased the gas permeability. This is due to that more rapid gas diffusion occurred as the free volume of membrane increased when RTIL was added [72].On the other hand, Hudiono has used the RTIL to increase the membrane permeability and also to act as an aid for better interaction between the poly(RTIL) and zeolite (SAPO-34). The result was promising as the permeability of given gases like CO2, N2, and CH4 increased accordingly. However, the selectivity was slightly decrease as they claimed that the RTIL used which is emim[Tf2N] was not selective towards CO2/CH4 separation [71]. Nonetheless, the result proved that the addition of RTIL could increase the polymer-zeolite adhesion in MMM as RTIL also acts as the wetting agent for the zeolite.Hudiono again repeated the same experiment fabricating a three-component mixed matrix membrane but by varying the composition of RTIL and zeolite added in order to determine the optimum condition for the membrane. The CO2 permeability seems to rise with the increasing amount of RTIL. The CO2/CH4 selectivity of the MMM also improved with the presence of SAPO-34 compared to neat poly(RTIL)-RTIL membrane as long as there is sufficient amount of RTIL as the wetting agent. Besides, the team also conducted an investigation of the separation performance by using the vinyl-based poly(RTIL). The addition of RTIL is not essential as they are structurally similar [73].In contrast, a ternary MMM has been fabricated by Oral and his coworkers by using different materials. 
The project study on the effect of different RTIL loadings which are emim[Tf2N] and emim[CF3SO3] towards MMM composed of polyimide-zeolite (SAPO-34). The addition of emim[Tf2N] has performed as expected when the permeability of CO2 increased while the incorporation of emim[CF3SO3] has increased the CO2/CH4 selectivity since emim[CF3SO3] is selective towards CO2/CH4 [74].
## 2.3.1. Ionic Liquid-Supported Membrane (ILSM)
In recent years, many researches have been evaluated on the ionic liquid supported membrane (ILSM) for gas separation membrane since ionic liquids are known materials that could dissolve CO2 and stable at high temperature ranges [57]. To be specific, ionic liquids are molten salt that are liquid at room temperature [58]. Furthermore, ionic liquids are of particular interest for membrane gas separation application as they are inflammable, negligible vapour pressure, and nonvolatile which make them also known as “green” solvents [58–60]. Extensive researches have been carried out to develop room temperature ionic liquid (RTIL)-based solvents for CO2 separation with various types of ionic liquids such as pyridinium and imidazolium based. Among RTILs tested, imidazolium-based RTIL was chosen as the most feasible solvent for CO2 separation as they are commercially viable and easily tunable by tailoring the cation and anion to meet the system requirements [60].ILSMs have been proven that they offered an increase in permeability that outperforms many neat polymer membranes. ILSMs synthesized from poly(vinylidene fluoride) (PVDF) and 1-butyl-3-methylimidazolium tetrafluororate (BMImBF4) showed high permeation performance of CO2 and mechanically stable while operating at high pressure condition [63]. The consumption of RTILs showed an increment especially for 1-R-3-methylimidazolium (R-mim)-based RTILs as this type is preferable due to its properties of less viscous compared to other RTILs. In addition, gases like CO2, nitrogen (N2), and other hydrocarbons demonstrated high solubility in Rmim-based RTILs [64, 65]. Besides, the use of Rmim-based RTILs could calculate the latent permeability and selectivity of the mixture of given gases by using the molar volume of these RTILs [60]. RTIL can be functionalized and set up in according to the system requirement and application, and these researches could be good benchmark for designing the functionalized RTIL efficiently as showed in Table 4.Table 4
Effects of ionic liquid functionalization.
Functionalization
Effects
Nitrile and alkyne group
(i) Gas solubility and separation performance have been tailored.(ii) Functionalized RTIL solvents displayed a decreasing in CO2, N2, and CH4 solubility, but, however, the selectivity of CO2/N2 and CO2/CH4 increased when compared to the nonfunctionalized RTIL [61].
Temperature
(i) As the temperature increases, the CO2 solubility is decreasing while the CH4 solubility remains unchanged.(ii) The ideal solubility selectivity of mix gases for CO2/N2, CO2/CH4, and CO2/H2increased as the temperature decreased [62].
## 2.3.2. Polymerized Room Temperature Ionic Liquid Membrane (Poly(RTIL))
Comparatively, RTIL especially imidazolium based can be also polymerized into a solid, dense, and thin film membrane due to their modular nature [66–68]. It was a successful breakthrough when the researcher found that polymer from ionic liquid monomer had higher CO2 absorption capacity with faster absorption and desorption rate compared to the neat RTIL [69]. Moreover, poly(RTIL) is also attributed with higher mechanical strength [66]. These characters have proven that polymerized ionic liquid (poly(RTIL)) is also a promising material for membrane gas separation. Polymerization of RTIL monomer by varying the n-alkyl length also showed a pleasant result when increase of permeability of given gases like CO2, N2, and methane (CH4) was observed as the n-alkyl group was lengthened [68]. Additionally, poly(RTIL) is also up to extend when it practically absorb about twice as much CO2 as their liquid analogue which makes it much better than molten RTIL [68]. Apparently, performance of poly(RTIL) also depends on the substituent attached to it. In a research done on the inclusion of a polar oligo(ethylene glycol) on the cation side of imidazolium-based RTIL, the separation selectivity has seemed to increase [70].As discussed earlier, mixed matrix membrane is a known membrane that composed of a compatible organic-inorganic pair which demonstrated having good separation properties subject to no interfacial adhesion problem. The improvement of separation performance is expected in an MMM comprising poly(RTIL) (polymer matrix) and zeolite (inorganic). In a very recent work, the benefit of MMM has become an idea to the researcher in ionic liquid membrane field. Hudiono and his coworkers have introduced a three-component mixed matrix membrane by utilizing the poly(RTIL), RTIL, and zeolite [71]. Their research was also based on a positive finding by Bara and his coworkers when they found that the addition of RTIL in poly(RTIL) has increased the gas permeability. This is due to that more rapid gas diffusion occurred as the free volume of membrane increased when RTIL was added [72].On the other hand, Hudiono has used the RTIL to increase the membrane permeability and also to act as an aid for better interaction between the poly(RTIL) and zeolite (SAPO-34). The result was promising as the permeability of given gases like CO2, N2, and CH4 increased accordingly. However, the selectivity was slightly decrease as they claimed that the RTIL used which is emim[Tf2N] was not selective towards CO2/CH4 separation [71]. Nonetheless, the result proved that the addition of RTIL could increase the polymer-zeolite adhesion in MMM as RTIL also acts as the wetting agent for the zeolite.Hudiono again repeated the same experiment fabricating a three-component mixed matrix membrane but by varying the composition of RTIL and zeolite added in order to determine the optimum condition for the membrane. The CO2 permeability seems to rise with the increasing amount of RTIL. The CO2/CH4 selectivity of the MMM also improved with the presence of SAPO-34 compared to neat poly(RTIL)-RTIL membrane as long as there is sufficient amount of RTIL as the wetting agent. Besides, the team also conducted an investigation of the separation performance by using the vinyl-based poly(RTIL). The addition of RTIL is not essential as they are structurally similar [73].In contrast, a ternary MMM has been fabricated by Oral and his coworkers by using different materials. 
Their project studied the effect of different RTIL loadings, namely emim[Tf2N] and emim[CF3SO3], on an MMM composed of polyimide and zeolite (SAPO-34). The addition of emim[Tf2N] performed as expected, increasing the CO2 permeability, while the incorporation of emim[CF3SO3] increased the CO2/CH4 selectivity, since emim[CF3SO3] is selective towards CO2/CH4 [74].
## 3. Conclusion
The escalating research on membrane fabrication for gas separation signifies that membrane technology is growing rapidly and becoming a major focus of industrial gas separation processes. The latest research on mixed matrix membranes combines flexibility and low capital cost with improved selectivity, permeability, and chemical, thermal, and mechanical strength. Material selection and the method of preparation are the most important parts of fabricating a membrane, so future research must choose the materials for gas separation and the fabrication methods with great care. Even though the synthesized MMMs have only been tested at a small scale, MMM research is worth exploring further, since MMMs have shown better separation performance than polymeric and inorganic membranes.
---
*Source: 101746-2012-12-04.xml* | 2013 |
# Structural and Optical Characterisation of an Erbium/Ytterbium Doped Hybrid Material Developed via a Nonhydrolytic Sol-Gel Route
**Authors:** M. Oubaha; R. Copperwhite; C. McDonagh; P. Etienne; B. D. MacCraith
**Journal:** SRX Materials Science
(2010)
**Publisher:** Scholarly Research Exchange
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.3814/2010/101747
---
## Abstract
This paper describes the development and structural characterisation of an Er3+/Yb3+ doped hybrid organic-inorganic material synthesised by a nonhydrolytic sol-gel process. Using a pumping laser diode at 980 nm, typical Er3+ luminescence was recorded in the near infrared region (1.53–1.55 μm). However, the detected fluorescence was particularly weak compared to that generally observed in pure mineral materials, suggesting strong quenching due to multiphonon relaxation processes.
To understand this behaviour, structural characterisation of both the matrix and the local environment of the Er3+ ions was conducted employing infrared spectroscopy, nuclear magnetic resonance, electron paramagnetic resonance, and neutron scattering. These studies showed that the major phenomenon competing with the Er3+ fluorescence is intimately associated with the strong vibrational modes of the organic species, which drive multiphonon relaxation processes and dissipate energy within the host matrix.
---
## Body
## 1. Introduction
Since the early 1990s, hybrid organic-inorganic materials have been very popular for the development of novel materials with tuneable properties and morphologies. In particular, Ormosils (organically modified silicates, R′xSi(OR)4-x) synthesised by the sol-gel process [1] have been widely studied for the preparation of hybrid organic-inorganic materials for different applications such as separation [2, 3], sensing [4, 5], surface protection [6, 7], and optics [8–12].

In the optics field, an exciting challenge has been the development of integrated optical devices that allow the integration of several functions on one chip, which has become possible, for example, by combining a photolithographic process with photocurable hybrid sol-gel materials [12, 13]. However, to our knowledge, the fabrication of active integrated optical circuits employing photocurable hybrid sol-gel materials has not been reported previously, despite their great potential to enable the use of large-bandwidth lasers. One of the major challenges in the development of Er3+ doped hybrid materials for optical amplification around 1550 nm is the avoidance of OH groups in the material. Unfortunately, these groups are inherent to the hydrolytic sol-gel process and are well known to compete with the Er3+ luminescence through nonradiative decay processes [14, 15].

In this paper, we report the development of a novel OH-free Er3+/Yb3+ codoped photocurable material employing a nonhydrolytic sol-gel route. The spectroscopic behaviour of the active dopants is correlated to the structure of the host matrix by means of near infrared spectroscopy (NIR), nuclear magnetic resonance (NMR), electron paramagnetic resonance (EPR), and neutron scattering.
## 2. Experimental
### 2.1. Material
In order to avoid any relaxation due to the OH groups inherent to the hydrolytic sol-gel route, the goal of the material synthesis is to obtain an OH-free condensed organosilane. To achieve this, the sol-gel synthesis was conducted via a nonhydrolytic process, as sketched in Figure 1.

Figure 1: Nonhydrolytic sol-gel synthesis of a photocurable organosilane material.

Compared to the hydrolytic routes, which can be processed at ambient temperature, nonhydrolytic processes typically require higher temperature and a catalyst to allow the condensation reaction between precursors of different reactivity. Generally, to condense an alkoxysilane with a chlorosilane, a Lewis acid catalyst such as zirconium tetrachloride (ZrCl4) is employed [16]. The sol preparation consisted of mixing a photocurable organically modified silane, methacryloxypropyltrimethoxysilane (MAPTMS, assay ~99%), together with silicon tetrachloride (SiCl4, assay ~99.99%) in the presence of zirconium tetrachloride (ZrCl4, assay ~99.99%). The mixture was refluxed at 120°C for 24 hours before the addition of the doping elements ErCl3 and YbCl3. The obtained sol was then filtered through a 0.2 μm membrane to remove any contaminating particles. NIR and neutron scattering spectra were recorded on bulk samples of 2 to 5 mm thickness. For EPR, the solid gels were crushed and the resulting powders sealed under vacuum in glass tubes.
### 2.2. Experimental Techniques
The evolution of the siloxane hybrid network was followed by 29Si-NMR spectroscopy, employing a 400 MHz Bruker spectrometer. The various oxo bridges formed through either self-condensation or cocondensation reactions between the MAPTMS and the SiCl4 are easily identified during the synthesis. Spectra were recorded at room temperature from the liquid solution. Measurements showed that a 4-second recycle delay with an 8 μs pulse duration was sufficient for quantitative measurements. The chemical shifts were referenced with respect to tetramethylsilane, used as an external reference. The free induction decay processing used a 10 Hz line broadening. Each recorded spectrum is an average of all previously obtained spectra during the instrument acquisition time; 128 scans were accumulated for each spectrum.

NIR spectroscopy is commonly used to identify the absorption of the harmonic and combination bands of the fundamental vibrations as well as the absorption of rare earth ions [17]. In the present case, this technique was employed to identify the optimum excitation wavelength of Er3+ and to quantify the effect of Yb3+ codoping. NIR spectra were recorded using a NIR spectrometer. This optical spectrum analyser included an infrared source, a second-order blocking filter, and a scanning monochromator with both Ge (600–1900 nm) and PbSe (1500–3000 nm) detectors. The resolution can be selected between 0.25 and 20 nm. The scanning monochromator used a continuously rotating diffraction grating driven by an electronically controlled DC motor.

EPR spectroscopy characterises the environment of the paramagnetic species (in this case Er3+). In this work, the technique was used to investigate the homogeneity of the sample as a function of Er3+ concentration. EPR spectra were recorded at room temperature employing an ER200D (X band) spectrometer operating at 9 GHz.

Compared to light scattering, which yields information on particles of >100 nm, neutron scattering allows structural characterisation on a smaller scale, typically from 10 to 500 Å. Neutron scattering experiments were performed on the small angle neutron scattering spectrometer at the Orphée 14 MW reactor of the French Atomic Energy Commission (CEA Saclay, France), using a neutron wavelength of 50 Å.

Fluorescence emission measurements were performed at room temperature by optically pumping the samples with a Ti:sapphire laser tuned to 980 nm. The light emitted at 90° from the monolith was analysed spectrally with a Jobin-Yvon U1000 double monochromator fitted with a grating of 600 grooves/mm. The fluorescence was detected by a North-Coast liquid-nitrogen-cooled germanium detector. The spectral resolution ranged from 1 to 2 nm. The pump signal was mechanically chopped at 80 Hz and the signal from the Ge detector was preamplified and passed to a lock-in amplifier.
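The chopper-plus-lock-in arrangement described above recovers a weak fluorescence signal buried in noise by demodulating the detector output at the chopping frequency. The following is a minimal numerical sketch of that principle; the 80 Hz frequency comes from the text, while the signal amplitude and noise level are hypothetical illustration values.

```python
import numpy as np

# Minimal sketch of lock-in detection: a weak signal chopped at 80 Hz is
# recovered from broadband noise by demodulating against a reference at the
# chopping frequency. Amplitude and noise level are hypothetical.
rng = np.random.default_rng(0)
fs, f_chop, t_total = 10_000.0, 80.0, 10.0   # sample rate (Hz), chop (Hz), s
t = np.arange(0.0, t_total, 1.0 / fs)

amplitude = 0.01                                       # weak signal (arb.)
reference = np.sign(np.sin(2 * np.pi * f_chop * t))    # idealised chopper wave
detector = amplitude * reference + rng.normal(scale=0.2, size=t.size)

# Demodulate: multiply by the reference, then low-pass filter (here, a mean).
estimate = float(np.mean(detector * reference))
print(f"true amplitude: {amplitude:.3e}, lock-in estimate: {estimate:.3e}")
```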
## 3. Results and Discussion
### 3.1. 29Si-NMR
The 29Si-NMR spectra of the pure MAPTMS and SiCl4 precursors (not shown here) exhibit a single peak at -42.8 and -19.2 ppm, respectively. This indicates the high purity of the employed precursors and the absence of any hydrolysed species.

Figure 2 shows the 29Si-NMR spectra after 1 hour and 24 hours of reaction. After 1 hour of reaction, in addition to the precursor peaks, two resonances located at -36.5 and -27 ppm are observed, with respective contributions of 27.0 and 30.3% of the total silicon nuclei, as summarised in Table 1. After 24 hours of reaction, all peaks observed after 1 hour of reaction disappear, and three new peaks appear, located at -49.1, -58, and -59 ppm, together with a large band between -65 and -69 ppm. This band is in fact composed of two main peaks centred at -66 and -67.8 ppm.

Table 1

| Groups | % | Chemical shift (ppm) |
| --- | --- | --- |
| RSiOMe3 | 33.7 | -42.8 |
| RSiOMe2Cl | 27 | -36.5 |
| RSiOMeCl2 | 30.3 | -27 |
| SiCl4 | 9 | -19.2 |

Figure 2: 29Si-NMR spectrum of the sample after 1 hour and 24 hours of reaction.

Noncondensed silicon nuclei (T0 groups) are generally observed at chemical shifts lower than -45 ppm [18]. In a similar structure, progressive condensation provoked an increase of 10 ppm [19, 20]. Based on the literature, it is possible to confirm that the peaks observed in the sample recorded after 1 hour of reaction are not attributable to any condensed silica species. As these peaks lie at chemical shifts between those of the pure precursors, it is clear that an exchange of the methoxy and chloride groups between the two precursors has occurred, resulting in the formation of a mixture of organochlorosilanes, as shown in Table 1. Indeed, chloride is well known to be an electron-withdrawing group by inductive effect, which decreases the electronic density around the Si nuclei and explains the displacement of the resonances of the hybrid precursor toward lower chemical shifts.

After 24 hours of reaction, the observed resonances lie in the region of condensed siloxanes, previously identified between -50 and -100 ppm [20]. Furthermore, the progressive shift by ~10 ppm suggests a progressive increase in the degree of condensation, as summarised in Table 2. At this stage, it is important to note the absence in both spectra of any band centred at ~-40 ppm, confirming the absence of any hydrolysed species. Indeed, in a previous study [20] on a similar material, we highlighted the resonances of silanol groups (Si-OH) at ~-40 ppm. The present result confirms the success of our nonhydrolytic sol-gel synthesis in obtaining OH-free materials and eliminates any further implication of these groups in the degradation of the fluorescence via quenching.

Table 2

| Groups | % | Chemical shift (ppm) |
| --- | --- | --- |
| RSiOSi (T1) | 28.7 | -49.1 |
| RSi(OSi)2 (T2) | 55.3 | -58 |
| RSi(OSi)3 (T3) | 16 | -67 |

Furthermore, examination of these results suggests that the chemical reactions involved in this nonhydrolytic sol-gel synthesis can be divided into two steps. The first implies the exchange of the chemical groups between the two silanes:

(1) SiCl4 + RSi(OCH3) → SiCl(OCH3) + RSiCl(OCH3)

The second step involves the ZrCl4 catalyst in the formation of siloxane bonds. The most plausible reaction involves the formation of an intermediate, electronically deficient silicon nucleus that would be balanced by a nucleophilic species such as the oxygen contained in the methoxide group. This results in the formation of fully stable siloxane bonds and the removal of volatile methyl chloride (CH3Cl), as sketched in the following:

(2) SiCl(OCH3) + RSiCl(OCH3) → Si-O-Si + CH3-Cl (ZrCl4 catalyst)
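From the T-group populations in Table 2, the overall degree of condensation of the siloxane network can be estimated. The formula below is the standard definition for trifunctional (T-type) silanes, stated here as an assumption rather than a calculation reported in the paper; under that assumption the network is roughly 62% condensed after 24 hours.

```python
# Degree of condensation for a trifunctional (T-type) siloxane network:
# DC = (1*T1 + 2*T2 + 3*T3) / (3 * (T0 + T1 + T2 + T3)).
# Populations (%) taken from Table 2; T0 is absent after 24 h of reaction.
t0, t1, t2, t3 = 0.0, 28.7, 55.3, 16.0

dc = (1 * t1 + 2 * t2 + 3 * t3) / (3 * (t0 + t1 + t2 + t3))
print(f"estimated degree of condensation: {dc:.1%}")   # ~62.4%
```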
### 3.2. Near-Infrared Spectroscopy
Near infrared (NIR) spectra were recorded on Er3+ doped and Er3+/Yb3+ codoped samples. Figure 3 shows the NIR absorption of the Er3+ doped samples (0.5, 1, and 2% at). Compared to the undoped sample [20], in which the absorption bands in this spectral range were ascribed to the first and second overtones of the CH groups, the main difference is the appearance of two bands located at 980 and 1540 nm.

Figure 3: Near-infrared spectra of Er3+-doped samples (0.5, 1, and 2% at).

The bands observed at 980 and 1540 nm are assigned to erbium transitions from the fundamental energy level to the excited levels 4I11/2 and 4I13/2, respectively, in agreement with the typical erbium absorption [21, 22]. 980 nm has been found to be the most favourable pumping wavelength, as erbium-doped optical amplifiers pumped there display a quasi-quantum-limited 3 dB noise level as well as improved gain efficiency [23]. In our material, the absorption at 980 nm is particularly weak, at around 20% of the absorption measured at 1540 nm, even for the sample containing the highest erbium concentration. To increase this absorption, Yb3+ ions can be employed as a codoping agent, as they are well known to exhibit a strong absorption at 980 nm [24]. When pumped at 980 nm, Yb3+ ions are excited to the first energy level (2F5/2) and then relax to the ground level by nonradiative transition. In the presence of Er3+ ions, however, an energy transfer process from the excited Yb3+ ions toward the 4I13/2 level of the Er3+ ions can take place, increasing the population of excited Er3+ ions in this level, as sketched in Figure 4. Indeed, because of the proximity of the ions' energy levels, the energy transfer is very efficient, as it is quasi-resonant. Since the population of the 4I13/2 level is increased by this mechanism, the luminescence lifetime of the Er3+ ions in this level should also be increased and the resulting fluorescence intensity improved.

Figure 4: Energy transfer between Yb3+ and Er3+ ions.

Figure 5 shows the NIR absorption spectra of the Er3+/Yb3+ codoped monoliths (6% at Yb3+). Compared to Figure 3, two observations are evident. Firstly, the absorption at 980 nm is increased by a factor of around 200. Secondly, the strong erbium absorption around 1540 nm is reduced by ~80% in all samples, revealing interactions at the atomic level between the Er3+ and Yb3+ ions. The only explanation we found for this behaviour involves an up-conversion process from the 4I13/2 level to the 4I11/2 level by absorption of a second photon at 1540 nm. From this excited level, the Er3+ ions can either undergo a nonradiative relaxation to the 4I13/2 state followed by a radiative transition to the ground level, or undergo a nonradiative transfer to neighbouring Yb3+ ions (2F5/2 excited level). Comparison of Figures 3 and 5 clearly suggests the involvement of Yb3+ ions in the up-conversion process of Er3+ ions. However, to our knowledge, no study has previously reported the implication of Yb3+ ions in the up-conversion process of Er3+ ions excited at 1540 nm. This physical process, highlighted for the first time in this paper, will be further investigated by the authors and further clarification will be proposed in a future study.

Figure 5: Near-infrared spectra of Er3+:Yb3+-codoped samples (0.5:6, 1:6, and 2:6% at).
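The role of Yb3+ sketched in Figure 4 can be summarised by a textbook sensitiser-activator rate model, in which pumped Yb3+ feeds the Er3+ 4I13/2 level by energy transfer. The minimal steady-state sketch below uses entirely hypothetical rate constants; it illustrates the mechanism only and is not a fit to the data reported here.

```python
# Minimal steady-state sketch of Yb3+-sensitised Er3+ population:
#   dN_yb/dt = r_pump - n_yb/tau_yb - w_et * n_yb
#   dN_er/dt = w_et * n_yb - n_er/tau_er
# All parameter values below are hypothetical illustration numbers.
r_pump = 1e20     # Yb3+ excitation rate (ions / cm^3 / s)
tau_yb = 1e-3     # Yb3+ 2F5/2 lifetime (s)
tau_er = 8e-3     # Er3+ 4I13/2 lifetime (s)
w_et = 2e3        # Yb -> Er energy-transfer rate (1/s)

n_yb = r_pump / (1 / tau_yb + w_et)   # steady-state excited Yb3+ density
n_er = w_et * n_yb * tau_er           # steady-state excited Er3+ density
print(f"excited Yb3+: {n_yb:.2e} cm^-3, excited Er3+: {n_er:.2e} cm^-3")
```

The model makes the qualitative point of the text explicit: the steady-state 4I13/2 population scales with both the energy-transfer rate and the Er3+ lifetime in that level.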
### 3.3. Fluorescence
Figure 6 shows the emission spectra in the near infrared region of the Er3+/Yb3+ codoped bulk samples. Er3+ and Yb3+ are pumped to the 4I11/2 and 2F5/2 excited levels, respectively, employing a Ti:sapphire laser tuned to 980 nm. Er3+ ions first decay nonradiatively to the 4I13/2 level, followed by a final decay to the ground state 4I15/2 with the emission of a photon at 1530 nm (Figure 4).

Figure 6: Room-temperature luminescence spectra in the near infrared region of Er3+/Yb3+-codoped samples ((a) 1/6 and (b) 2/6% at).
Both spectra show a main photoluminescence peak at 1530 nm and two shoulders around 1500 and 1550 nm, with wide tails extending from roughly 1450 to 1650 nm. The peak shape is attributed to the Stark splitting of the excited state 4I13/2 and ground state 4I15/2 of the Er3+ ions. Furthermore, the full width at half maximum is abnormally high, at around 30 to 50 nm, which is on average 30% higher than the values measured in pure mineral materials [25, 26]. In the first instance, this can be explained by the amorphous structure of the matrix and the codoping with Yb3+ ions. Unlike crystalline materials, in which the position of rare-earth ions within the host matrix is precisely defined, the amorphous nature of our glass-like material favours a random distribution, resulting in a broadening of the luminescence signal.

However, the measured photoluminescence intensity is very low for both samples, which we estimate at 1% of that generally obtained with pure silica matrices. Insofar as the material does not contain any OH groups, quenching by these groups is eliminated. Two hypotheses then remain: firstly, autoquenching of the fluorescence due to agglomeration of the Er3+ ions; secondly, energy dissipation in the matrix caused by a multiphonon relaxation process. To understand the mechanism responsible for the relatively low level of measured fluorescence, we studied the Er3+ structure within the hybrid material by electron paramagnetic resonance and neutron scattering.
### 3.4. Electron Paramagnetic Resonance
EPR spectra of the samples doped with Er3+ at 0.5, 1, and 2% at are shown in Figure 7. A single signal was detected for magnetic fields varying from 0 to 4000 mT, exhibiting a constant peak-to-peak width (~8 mT) and a progressive increase in intensity with increasing Er3+ concentration.

Figure 7: Electron paramagnetic resonance spectra of Er3+ doped samples (0.5, 1, and 2% at).

To our knowledge, no literature has reported EPR results on Er3+ ions incorporated in either organic polymers or hybrid materials that would allow a direct comparison with similar amorphous structures. However, compared to Er3+ ions incorporated in crystalline structures [27–29], the EPR signals obtained in our material require a magnetic field of approximately 4 times greater magnitude [30]. This suggests that the Er3+ ions are strongly linked to the hybrid matrix. As the paramagnetic character is preserved, the formation of covalent bonds between Er3+ and the matrix is unlikely. However, physical interactions with the strongly nucleophilic groups of the matrix can explain the observed behaviour. Indeed, the material contains oxygen atoms in the carboxylic functions that can potentially act as complexing agents by electron donor effect, thereby reducing the paramagnetic character of the active ions in the material.

Compared to crystalline structures, which precisely define the position of the rare earth ion [30], the amorphous structure of the hybrid glass could in principle allow a random distribution of Er3+ ions within the matrix. However, the EPR data show a single peak, indicating that the environments of the different Er3+ ions in our material are very similar. The EPR technique does not, however, give any information about the effect of concentration on the Er-Er interatomic distances. This is investigated using neutron scattering characterisation.
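The comparison of resonance fields above follows from the standard EPR resonance condition h·ν = g·μB·B0: at fixed microwave frequency, a fourfold higher resonance field corresponds to a fourfold smaller effective g-factor. The quick check below uses the 9 GHz frequency from Section 2.2; the resonance field value is a hypothetical placeholder for illustration, not a value read from Figure 7.

```python
import scipy.constants as const

# Standard EPR resonance condition: h * nu = g * mu_B * B0,
# so the effective g-factor is g = h * nu / (mu_B * B0).
nu = 9e9          # X-band microwave frequency (Hz), from Section 2.2
b0 = 0.32         # hypothetical resonance field (T), illustration only

mu_b = const.physical_constants["Bohr magneton"][0]
g_eff = const.h * nu / (mu_b * b0)
print(f"effective g-factor at B0 = {b0} T: {g_eff:.2f}")   # ~2.0 here
```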
### 3.5. Neutron Scattering
Rare earth agglomerates with dimensions ranging from a few angstroms to several nanometres have been reported [31]. Neutron scattering experiments were therefore conducted to reveal any structural change associated with the possible formation of Er3+ clusters. Figure 8 shows the neutron scattering spectra of the undoped and Er3+ doped samples (0.5, 1, and 2% at). All spectra exhibit a similar behaviour, with a single band whose maximum is detected at a wavevector of 1.29×10-2 and whose intensity progressively decreases as the dopant concentration increases.

Figure 8: Neutron scattering spectra of undoped and Er3+ doped samples (0.5, 1, and 2% at).

The stability of the position of the band maximum confirms that the characteristic size structure of the material is invariant and excludes the formation of any new phase, whatever the dopant concentration. This indicates that fluorescence autoquenching by energy transfer between neighbouring Er3+ ions is not the prevalent relaxation process. Moreover, the decrease in signal intensity is explained by the increase in Er3+ scattering. Consequently, the weak measured fluorescence intensity is not due to nonradiative relaxation involving either erbium clustering or OH groups. We propose that it is associated with energy dissipation within the matrix by a multiphonon nonradiative relaxation process. In our case, the CH groups (ν(C-H) ~2800–3100 cm-1) contained in the organic part of the hybrid are the only species capable of competing with the radiative emission of the Er3+ ions around 1550 nm, via a two- or three-phonon relaxation process.
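The wavevector of the band maximum maps onto a real-space correlation length through the usual small-angle scattering estimate d ≈ 2π/q. Assuming the quoted wavevector is in reciprocal angstroms (the unit is not stated in the text), a quick check places the probed length scale near the upper end of the 10–500 Å window quoted for the instrument in Section 2.2.

```python
import math

# Usual small-angle scattering estimate of a real-space length scale:
# d = 2 * pi / q. Assumes the band maximum q = 1.29e-2 is in 1/angstrom
# (the unit is not stated in the text).
q_max = 1.29e-2   # wavevector of the band maximum (assumed 1/angstrom)

d = 2 * math.pi / q_max
print(f"characteristic length scale: {d:.0f} angstroms")   # ~487 angstroms
```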
## 4. Conclusion
The approach developed in this paper aimed at establishing the material requirements for the development of future photocurable hybrid materials as rare earth host matrices for optical amplification applications.

An erbium doped organic-inorganic sol-gel material was synthesised via a nonhydrolytic process, leading to an OH-free material. The emission characterisation revealed a low photoluminescence intensity, estimated at about 0.1% of that of similar Er3+ doped silica fibres used in the telecommunications industry.

Electron paramagnetic resonance and neutron scattering demonstrated a homogeneous distribution of Er3+ ions within the hybrid structure, which excluded autoquenching by energy transfer between neighbouring Er3+ ions as an explanation for the low fluorescence. The low fluorescence was instead attributed to a multiphonon relaxation process involving the CH groups in the organic part of the hybrid material. The development of future active materials for optical amplification requires taking two critical parameters into consideration. Firstly, the active rare earth ion must not be linked to any complexing agent that could drastically decrease its paramagnetic character and thereby strongly affect the population inversion toward its excited levels. Secondly, the structure of the host matrix should be designed to minimise any energy dissipation by multiphonon relaxation processes. These conditions can potentially be fulfilled by the development of fluorinated organic hybrid materials, whose vibrations are theoretically unable to compete with the desired radiative emission of the Er3+ ions, so that gain by optical amplification can ultimately be obtained. Starting from a classical hybrid organic-inorganic material, this approach thus establishes the material requirements for future high-performance materials applied to the development of integrated optical devices.
---
*Source: 101747-2010-01-03.xml* | 2010 |
Electron paramagnetic resonance spectra of Er3+ doped samples (0.5, 1, and 2% at).To our knowledge, no literature reported any EPR results on Er3+ ions incorporated in either organic polymers or hybrid materials to allow a direct comparison with similar amorphous structures. However, compared to Er3+ ions incorporated in crystalline structures [27–29], the EPR signals obtained in our material require a magnetic field of approximately 4 times greater magnitude [30]. This suggests that the Er3+ ions are strongly linked to the hybrid matrix.As the paramagnetic character is preserved, the formation of covalent bonds between Er3+ and the matrix is unlikely. However, physical interactions with the strong nucleophilic groups of the matrix can explain the observed behaviour. Indeed, the material contains oxygen atoms in the carboxylic functions that can potentially act as complexing agent by electron donor effect, then reducing the paramagnetic character of the active ions in the material.Compared to crystalline structures that precisely define the position of the rare earth ion [30], the amorphous structure of the hybrid glass can potentially allow a random distribution of Er3+ ions within the matrix. However, according to the EPR data, a single peak was observed indicating that the environment of the different Er3+ ions in our material is very similar. However, the EPR technique does not give any information about the concentration effect on the evolution of the Er-Er interatomics distances. This is investigated using neutron scattering characterisation.
### 3.5. Neutron Scattering
Rare earth agglomerates have been reported with dimensions between a few angstroms to several nanometers [31]. Neutron scattering experiments have been conducted to yield information about any structural change associated with the possible formation of Er3+ clusters.Figure8 shows the neutron scattering spectra for the undoped and Er3+ doped samples with 0.5, 1 and 2% at All spectra exhibit a similar behaviour, with a single band the maximum of which was detected at a wavevector of 1.29.10-2, which progressively decreases with the increase of the dopant concentration.Figure 8
Neutron scattering spectra of un-doped and Er3+ doped samples (0.5, 1, and 2% at).The stability of the maximum band position confirms that the size structure of the material is invariant, and excludes the formation of any novel phase, whatever the dopant concentration is. This indicates that fluorescence autoquenching by energy transfer between neighbouring Er3+ is not the prevalent relaxation process. Moreover, the decrease of the signal intensity is explained by the increase of the Er3+ scattering.Consequently, the weak fluorescence intensity measured is not due to nonradiative relaxation by either the erbium structure or OH groups. We propose that it is associated with the energy dissipation within the matrix by multiphonon nonradiative relaxation process. In our case, CH groups (ν(C-H) ~2800–3100 cm-1) contained in the organic part of the hybrid, are the only species capable of competing with the radiative emission of Er3+ ions around 1550 nm, by a two- or three-phonon relaxation process.
## 3.1. 29Si-NMR
The 29Si-NMR spectra of the pure MAPTMS and SiCl4 precursors (not shown here) exhibit single peaks at -42.8 and -19.2 ppm, respectively. This indicates the high purity of the employed precursors and the absence of any hydrolysed species. Figure 2 shows the 29Si-NMR spectra after 1 hour and 24 hours of reaction. After 1 hour of reaction, in addition to the precursor peaks, two resonances located at -36.5 and -27 ppm are observed, with respective contributions of 27.0 and 30.3% of the total silicon nuclei, as summarised in Table 1. After 24 hours of reaction, all peaks observed after 1 hour of reaction disappear, with the appearance of 3 new peaks located at -49.1, -58, and -59 ppm and a large band between -65 and -69 ppm. This band is in fact composed of two main peaks centred at -66 and -67.8 ppm.

Table 1

| Group | % | Chemical shift (ppm) |
| --- | --- | --- |
| RSiOMe3 | 33.7 | -42.8 |
| RSiOMe2Cl | 27 | -36.5 |
| RSiOMeCl2 | 30.3 | -27 |
| SiCl4 | 9 | -19.2 |

Figure 2
29Si-NMR spectrum of the sample after 1 hour and 24 hours of reaction.

Noncondensed silicon nuclei (T0 groups) are generally observed at chemical shifts lower than -45 ppm [18]. In a similar structure, progressive condensation provoked an increase of 10 ppm [19, 20]. Based on the literature, it is therefore possible to confirm that the peaks observed after 1 hour of reaction are not attributable to any condensed silica species. As these peaks lie at chemical shifts between those of the pure precursors, an exchange of the methoxy and chloride groups between the two precursors has evidently occurred, resulting in the formation of a mixture of organochlorosilanes, as shown in Table 1. Indeed, chloride is well known to be an electron-withdrawing group by inductive effect, which decreases the electronic density around the Si nuclei and explains the displacement of the resonances of the hybrid precursor toward lower chemical shifts.

After 24 hours of reaction, the observed resonances are located in the region of condensed siloxanes, previously identified between -50 and -100 ppm [20]. Furthermore, the progressive shift by ~10 ppm suggests a progressive increase of the degree of condensation, as summarised in Table 2. At this stage, it is important to note the absence in both spectra of any band centred at ~-40 ppm, confirming the absence of any hydrolysed species. Indeed, in a previous study [20] on a similar material, we highlighted the resonances of silanol groups (Si-OH) at ~-40 ppm. The present result confirms the success of our nonhydrolytic sol-gel synthesis in obtaining OH-free materials and rules out any further implication of these groups in the degradation of fluorescence via quenching.

Table 2

| Group | % | Chemical shift (ppm) |
| --- | --- | --- |
| RSiOSi (T1) | 28.7 | -49.1 |
| RSi(OSi)2 (T2) | 55.3 | -58 |
| RSi(OSi)3 (T3) | 16 | -67 |

Furthermore, examination of these results suggests that the chemical reactions involved in this nonhydrolytic sol-gel synthesis can be divided into two steps. The first implies the exchange of chemical groups between the two silanes:

(1) SiCl4 + RSi(OCH3)3 → SiCl3(OCH3) + RSi(OCH3)2Cl

The second step involves the ZrCl4 catalyst in the formation of siloxane bonds. The most plausible reaction could involve the formation of an intermediate, electronically deficient silicon nucleus that would be balanced by a nucleophilic species such as the oxygen contained in the methoxide group. This results in the formation of fully stable siloxane bonds and in the removal of volatile methyl chloride (CH3Cl), as sketched in the following:
(2) SiCl3(OCH3) + RSi(OCH3)2Cl -(ZrCl4)→ ≡Si-O-Si≡ + CH3Cl
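As a side note, the T-group populations in Table 2 allow a quick estimate of the overall degree of condensation of the siloxane network. The sketch below is an illustrative calculation (not part of the original analysis) using the standard relation for trifunctional (T) silicon units.

```python
# Estimate the degree of condensation of a trifunctional (T-type) siloxane
# network from 29Si-NMR peak areas, using the standard relation
#   c = (1*T1 + 2*T2 + 3*T3) / (3 * (T0 + T1 + T2 + T3))
# Peak areas are taken from Table 2 (no T0 remains after 24 h of reaction).

def degree_of_condensation(t_populations):
    """t_populations maps n (number of Si-O-Si bridges per Si) to % area."""
    total = sum(t_populations.values())
    bridged = sum(n * pct for n, pct in t_populations.items())
    return bridged / (3 * total)

table2 = {1: 28.7, 2: 55.3, 3: 16.0}  # T1, T2, T3 areas in % from Table 2
print(f"Degree of condensation after 24 h: {degree_of_condensation(table2):.1%}")
```

This yields roughly 62%, consistent with a well-advanced but not fully condensed network dominated by T2 units.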
## 3.2. Near-Infrared Spectroscopy
Near-infrared (NIR) spectra were recorded on Er3+-doped and Er3+/Yb3+-codoped samples. Figure 3 shows the NIR absorption of the Er3+-doped samples (0.5, 1, and 2% at). Compared to the undoped sample [20], in which the absorption bands in this spectral range were ascribed to the first and second overtones of the CH groups, the main difference is the appearance of two bands located at 980 and 1540 nm.

Figure 3
Near-infrared spectra of Er3+-doped samples (0.5, 1, and 2% at).

The bands observed at 980 and 1540 nm are assigned to erbium transitions from the fundamental energy level to the excited levels 4I11/2 and 4I13/2, respectively. This is in agreement with the typical erbium absorption [21, 22]. A pump wavelength of 980 nm has been found to be the most favourable, as erbium-doped optical amplifiers then display a quasi-quantum-limited 3 dB noise level as well as improved gain efficiency [23]. In our material, the absorption at 980 nm is particularly weak, at around 20% of the absorption measured at 1540 nm, even for the sample containing the highest erbium concentration. To increase this absorption, Yb3+ ions can be employed as a codoping agent, as they are well known to exhibit a strong absorption at 980 nm [24]. When pumped at 980 nm, Yb3+ ions are excited to the first energy level (2F5/2) and then relax to the ground level by nonradiative transition. However, in the presence of Er3+ ions, an energy transfer process from the excited Yb3+ ions toward the 4I13/2 level of the Er3+ ions can take place, increasing the population of the excited Er3+ ions in this level, as sketched in Figure 4. Indeed, because of the proximity of the ions' energy levels, this energy transfer is very efficient, as it is quasi-resonant. Since the population of the 4I13/2 level is increased by this mechanism, the luminescence lifetime of the Er3+ ions in this level should also be increased and the resulting fluorescence intensity improved.

Figure 4
Energy transfer between Yb3+ and Er3+ ions.

Figure 5 shows the NIR absorption spectra of the Er3+/Yb3+-codoped monoliths (6% at Yb3+). Compared to Figure 3, two observations are evident. Firstly, the absorption at 980 nm is increased by a factor of around 200. Secondly, the high erbium absorption around 1540 nm is strongly reduced, by ~80%, in all samples, revealing interactions at the atomic level between the Er3+ and Yb3+ ions. The only explanation we found for this behaviour involves an up-conversion process from the 4I13/2 level to the 4I11/2 level by absorption of a second photon at 1540 nm. From this excited level, the Er3+ ions can either undergo a nonradiative relaxation to the 4I13/2 state followed by a radiative transition to the ground level or undergo a nonradiative transfer to neighbouring Yb3+ ions (2F5/2 excited level). Comparison of Figures 3 and 5 clearly suggests the involvement of Yb3+ ions in the up-conversion process of Er3+ ions. However, to our knowledge, no study has previously reported the implication of Yb3+ ions in the up-conversion process of Er3+ ions excited at 1540 nm. This physical process, highlighted for the first time in this paper, will be further investigated by the authors and clarified in a future study.

Figure 5
Near-infrared spectra of Er3+/Yb3+-codoped samples (0.5:6, 1:6, and 2:6% at).
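To relate the wavelengths quoted in this section to the level scheme of Figure 4, the short sketch below converts vacuum wavelengths to wavenumbers and photon energies. It is an illustrative reader's aid (not part of the original analysis); the near coincidence of the energies reached at 980 nm by both ions is what makes the Yb3+ → Er3+ transfer quasi-resonant.

```python
# Convert the wavelengths discussed in the text into wavenumbers and photon
# energies, to relate them to the Er3+/Yb3+ level scheme of Figure 4.

H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per electron-volt

def photon(lambda_nm):
    """Return (wavenumber in cm^-1, energy in eV) for a vacuum wavelength."""
    wavenumber_cm = 1.0 / (lambda_nm * 1e-7)        # 1 nm = 1e-7 cm
    energy_ev = H * C / (lambda_nm * 1e-9) / EV
    return wavenumber_cm, energy_ev

transitions = [
    ("Er 4I15/2 -> 4I11/2 and Yb 2F7/2 -> 2F5/2 (pump)", 980),
    ("Er 4I15/2 -> 4I13/2 (absorption)", 1540),
    ("Er 4I13/2 -> 4I15/2 (emission)", 1530),
]
for label, lam in transitions:
    wn, ev = photon(lam)
    print(f"{label}: {lam} nm = {wn:,.0f} cm^-1 = {ev:.3f} eV")
# 980 nm corresponds to ~10,200 cm^-1, an energy shared by both ions, which
# is why the Yb3+ -> Er3+ transfer sketched in Figure 4 is quasi-resonant.
```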
## 3.3. Fluorescence
Figure 6 shows the emission spectra in the near-infrared region of the Er3+/Yb3+-codoped bulk samples. Er3+ and Yb3+ are, respectively, pumped to the 4I11/2 and 2F5/2 excited levels using a Ti:sapphire laser tuned to 980 nm. Er3+ ions first decay nonradiatively to the 4I13/2 level, followed by a final decay to the ground state 4I15/2 by emitting a photon at 1530 nm (Figure 4).

Figure 6

Room-temperature luminescence spectra in the near-infrared region of Er3+/Yb3+-codoped samples ((a) 1/6 and (b) 2/6% at).
Both spectra show a main photoluminescence peak at 1530 nm and two shoulders around 1500 and 1550 nm, with wide tails extending from roughly 1450 to 1650 nm. The peak shape is attributed to the Stark splitting of the excited state 4I13/2 and ground state 4I15/2 of the Er3+ ions. Furthermore, the full width at half maximum is abnormally high, at around 30 to 50 nm, which is on average 30% higher than the values measured in pure mineral materials [25, 26]. In the first instance, this can be explained by the amorphous structure of the matrix and the codoping with Yb3+ ions. Unlike crystalline materials, in which the position of rare-earth ions within the host matrix is precisely defined, the amorphous nature of our glass-like material favours a random distribution, resulting in a broadening of the luminescence signal.

However, the measured photoluminescence intensity is very low for both samples, estimated at 1% of that generally obtained in pure silica matrices. Since the material does not contain any OH groups, quenching by these groups can be ruled out. From these results, two hypotheses remain possible: firstly, autoquenching of the fluorescence due to agglomeration of the Er3+ ions; secondly, energy dissipation in the matrix caused by a multiphonon relaxation process. To understand the mechanism responsible for the relatively low level of measured fluorescence, we have studied the Er3+ structure within the hybrid material by electron paramagnetic resonance and neutron scattering.
## 3.4. Electron Paramagnetic Resonance
EPR spectra of the samples doped with Er3+ at 0.5, 1, and 2% at are shown in Figure 7. A single signal was detected over the 0-4000 mT range, exhibiting a constant peak-to-peak width (~8 mT) and a progressive increase in intensity with increasing Er3+ concentration.

Figure 7
Electron paramagnetic resonance spectra of Er3+-doped samples (0.5, 1, and 2% at).

To our knowledge, no published EPR results exist for Er3+ ions incorporated in either organic polymers or hybrid materials that would allow a direct comparison with similar amorphous structures. However, compared to Er3+ ions incorporated in crystalline structures [27–29], the EPR signals obtained in our material require a magnetic field of approximately 4 times greater magnitude [30]. This suggests that the Er3+ ions are strongly linked to the hybrid matrix. As the paramagnetic character is preserved, the formation of covalent bonds between Er3+ and the matrix is unlikely. However, physical interactions with the strongly nucleophilic groups of the matrix can explain the observed behaviour. Indeed, the material contains oxygen atoms in the carboxylic functions that can potentially act as complexing agents by an electron-donor effect, thereby reducing the paramagnetic character of the active ions in the material.

Compared to crystalline structures, which precisely define the position of the rare-earth ion [30], the amorphous structure of the hybrid glass can potentially allow a random distribution of Er3+ ions within the matrix. Nevertheless, a single EPR peak was observed, indicating that the environments of the different Er3+ ions in our material are very similar. However, the EPR technique does not give any information about the effect of concentration on the evolution of the Er-Er interatomic distances. This is investigated using neutron scattering characterisation.
## 3.5. Neutron Scattering
Rare-earth agglomerates with dimensions ranging from a few angstroms to several nanometers have been reported [31]. Neutron scattering experiments were conducted to yield information about any structural change associated with the possible formation of Er3+ clusters. Figure 8 shows the neutron scattering spectra for the undoped and Er3+-doped samples (0.5, 1, and 2% at). All spectra exhibit a similar behaviour, with a single band whose maximum is detected at a wavevector of 1.29 × 10^-2 and whose intensity progressively decreases with increasing dopant concentration.

Figure 8
Neutron scattering spectra of undoped and Er3+-doped samples (0.5, 1, and 2% at).

The stability of the band maximum position confirms that the characteristic length scale of the material is invariant and excludes the formation of any new phase, whatever the dopant concentration. This indicates that fluorescence autoquenching by energy transfer between neighbouring Er3+ ions is not the prevalent relaxation process. Moreover, the decrease of the signal intensity is explained by the increased scattering from the Er3+ ions.

Consequently, the weak fluorescence intensity measured is due to nonradiative relaxation by neither the erbium structure nor OH groups. We propose that it is associated with energy dissipation within the matrix by a multiphonon nonradiative relaxation process. In our case, the CH groups (ν(C-H) ~2800–3100 cm-1) contained in the organic part of the hybrid are the only species capable of competing with the radiative emission of the Er3+ ions around 1550 nm, via a two- or three-phonon relaxation process.
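The two- or three-phonon figure can be checked with simple arithmetic: dividing the Er3+ emission gap by the C-H stretching energy gives the number of vibrational quanta needed to bridge it. The sketch below is a back-of-the-envelope reader's aid, not part of the original analysis.

```python
# Back-of-the-envelope check of the multiphonon quenching argument:
# how many C-H stretching quanta (~2800-3100 cm^-1, as quoted in the text)
# are needed to bridge the Er3+ 4I13/2 -> 4I15/2 gap (~1550 nm emission)?

def nm_to_wavenumber(lambda_nm):
    """Vacuum wavelength (nm) -> wavenumber (cm^-1)."""
    return 1.0 / (lambda_nm * 1e-7)

er_gap = nm_to_wavenumber(1550)          # ~6450 cm^-1
for ch_stretch in (2800, 3100):          # range quoted for nu(C-H)
    print(f"nu(C-H) = {ch_stretch} cm^-1 -> "
          f"{er_gap / ch_stretch:.1f} quanta to bridge {er_gap:.0f} cm^-1")
```

With roughly 2.1 to 2.3 quanta required, a two- or three-phonon process suffices, which is why the C-H oscillators compete so effectively with the radiative emission.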
## 4. Conclusion
The approach developed in this paper aimed at establishing the material requirements for the development of future photocurable hybrid materials as rare-earth host matrices for optical amplification applications. An erbium-doped organic-inorganic sol-gel material has been synthesised via a nonhydrolytic process leading to an OH-free material. The emission characterisation revealed a low photoluminescence intensity, estimated at about 0.1% of that of similar Er3+-doped silica fibres used in the telecommunications industry.

Electron paramagnetic resonance and neutron scattering were used to demonstrate a homogeneous distribution of Er3+ ions within the hybrid structure, which excluded autoquenching by energy transfer between neighbouring Er3+ ions as an explanation for the low fluorescence. The low fluorescence was instead attributed to a multiphonon relaxation process involving the CH groups in the organic part of the hybrid material.

The development of future active materials for optical amplification requires taking two critical parameters into consideration. Firstly, the active rare-earth ion must not be linked to any complexing agent that could drastically decrease its paramagnetic character and thereby strongly affect the inversion of population toward its excited levels. Secondly, the structure of the host matrix should be designed to minimise any energetic dissipation by multiphonon relaxation processes. These conditions can potentially be fulfilled by the development of fluorinated organic hybrid materials, whose vibrations are theoretically unable to compete with the desired radiative emission of the Er3+ ions, allowing a gain to be obtained by optical amplification. Starting from an example of a classical hybrid organic-inorganic material, the approach developed in this paper thus establishes the material requirements for future high-performance materials applied to the development of optical integrated devices.
---
*Source: 101747-2010-01-03.xml*
# Cardiovascular Risk Reduction with Renin-Angiotensin Aldosterone System Blockade
**Authors:** Nancy Houston Miller
**Journal:** Nursing Research and Practice
(2010)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2010/101749
---
## Abstract
This paper examines the evidence supporting treatments within the renin-angiotensin aldosterone system (RAS), the role cardioprotection plays within the management of hypertension, considerations around medication adherence, and the role of the nurse or nurse practitioner in guiding patients to achieve higher hypertension control rates. A large body of data now exists to support the use of angiotensin receptor blockers (ARBs) and angiotensin-converting enzyme inhibitors (ACEIs), which act on the RAS, in the management of hypertension, and their effect on cardiovascular risk reduction. Current evidence suggests that inhibition of the RAS is an important target for cardioprotection. RAS inhibition controls blood pressure and also reduces target-organ damage. This is especially important in populations at high risk for damage, including patients with diabetes and those with chronic kidney disease. Both ARBs and ACEIs target the RAS, offering important reductions in both BP and target-organ damage.
---
## Body
## 1. Introduction
Nurse practitioners and nurses play a key role in the prevention and management of chronic conditions such as cardiovascular disease (CVD), diabetes mellitus, and kidney disease. Despite strides made in its treatment and prevention, CVD remains the leading cause of death worldwide [1]. Myocardial infarction (MI), stroke, and renal failure are its most common complications. In 2005, CVD was the underlying cause of 17.5 million deaths, or 30% of all deaths globally—nearly equal to the entire population of the state of Florida. MI accounted for 7.6 million of those deaths and stroke for 5.7 million [1]. In the United States, 631,636 people died from heart disease, the number one cause of death, whereas 137,119 deaths occurred as the result of stroke and 45,344 as the result of kidney disease [2]. Stroke and kidney diseases are the third and ninth leading causes of death, respectively.

The morbidity associated with CVD is high as well. Currently, approximately 24.1 million Americans have been diagnosed with heart disease, and this condition resulted in 2.4 million hospital discharges in 2005. Approximately 5.6 million Americans have at one time or another had a stroke, and in 2005, stroke accounted for 1 million hospital discharges. About 3.3 million Americans have been diagnosed with kidney disease [2]. The costs in terms of death, disability, reduced productivity or loss of income, and healthcare expense are enormous. US healthcare costs for CVD total more than $149 billion annually, or 17% of all medical expenditures [3].

Risks associated with CVD include increasing age, male gender, heredity, hypertension, smoking, high blood cholesterol, lack of physical activity, diabetes, and obesity [4]. Clearly, age, gender, and heredity cannot be altered. Other risk factors are modifiable, and actions such as smoking cessation, eating a healthier diet, and getting adequate exercise can reduce an individual's risk of developing CVD. Hypertension is the leading preventable risk factor. It has shown a continuous, consistent, and independent association with the risk of developing CVD [5]. However, control of hypertension remains less than optimal. Currently, only 1 in 3 patients with hypertension has achieved optimal blood pressure (BP) control [5].
## 2. Background
The renin-angiotensin aldosterone system (RAS; Figure 1) is essential to the regulation of salt and water in the body [6, 7]. It is the RAS that maintains BP and vascular tone, primarily through signals from the kidney that are generated in response to changes in salt and water intake [6–8]. Although most of the RAS is based in the kidneys, there is tissue RAS as well [6, 7, 9]. The kidney or endocrine RAS is responsible for short-term volume and pressure adjustments, whereas the tissue RAS appears to affect long-term changes in the circulatory system [9, 10].

Figure 1
Renin-angiotensin aldosterone system. Reprinted with permission from Ibrahim [8].
## 3. The RAS Cycle
The RAS cycle begins when angiotensinogen is produced in the liver and excreted. It is converted to angiotensin I by the enzyme renin, which is produced in the juxtaglomerular cells of the kidney. Angiotensin-converting enzyme (ACE) then converts angiotensin I to angiotensin II. Circulating angiotensin II activates AT1 receptors in a variety of target tissues, which results in increased water and sodium reabsorption, cell proliferation, and changes in vascular tone [7]. The consequences of these effects are an increase in blood volume and systemic vasoconstriction and a subsequent rise in BP [7, 8]. It is important to note that angiotensin II can be generated directly from angiotensinogen through non-ACE pathways, including cathepsin G, chymase, and ACE-2-dependent pathways [6, 8, 10]. These alternative pathways are responsible for persistent production of angiotensin II during ACE inhibition.

Angiotensin II binds to both AT1 and AT2 receptors. AT1 upregulates the sympathetic nervous system, increasing vasoconstriction, aldosterone release, and sodium retention [6, 8, 10, 11]. Angiotensin II also promotes the production of free radicals, stimulates plasminogen activator inhibitor-1 release, and increases tissue factor and vascular cell adhesion molecule expression [6]. Additionally, angiotensin II has proatherogenic effects through promotion of vascular smooth muscle cell proliferation and leukocyte adhesion, thus playing an important role in the development of CVD [6, 8]. Angiotensin II also reduces the beneficial vasodilatory effects of nitric oxide through inhibition of nitric oxide synthase [10]. However, in binding to the AT2 receptor, angiotensin II mediates apparent beneficial effects that counterbalance AT1 receptor stimulation [10].
## 4. The RAS in Hypertension and CVD
Chronic elevation of RAS activity, with subsequent exposure of tissues to high levels of angiotensin II, results in hypertension, CVD, and target-organ damage. Hypertension creates stress on the blood vessel walls, giving rise to endothelial injury and thrombotic and inflammatory complications [12]. The vascular endothelium regulates blood fluidity and coagulation, vascular growth, inflammation, and vascular tone. These processes are primarily under the control of the renin-angiotensin and kallikrein-kinin systems [12, 13]. Bradykinin, a potent vasodilator, is degraded by ACE. In combination with the conversion of angiotensin I to angiotensin II, the reduction in bradykinin levels by ACE leads to enhanced vasoconstriction and inhibition of fibrinolysis [12, 14, 15] (Figure 2).

Figure 2
Important effects of angiotensin II on mechanisms associated with atherosclerosis. Reprinted with permission from Schmieder et al. [15].

The risks of CVD presented by the disruption of vascular homeostasis in the face of hypertension are increased in patients with diabetes mellitus. More than 65% of individuals with diabetes die from heart disease or stroke, and their risk of death from heart disease is 2 to 4 times higher than that of nondiabetic adults, whereas the risk of death from stroke is 2.8 times higher [16]. Approximately 73% of adults with diabetes have hypertension, and diabetes accounts for 44% of new cases of kidney disease each year [16]. It is the most common reason for kidney transplantation [17].
## 5. The Role of Angiotensin Receptor Blockers and ACE Inhibitors: RAS Inhibition
ACEIs and ARBs block the activity of the RAS in different ways. Whereas ACEIs prevent the formation of angiotensin II by inhibiting ACE, ARBs block the angiotensin II type 1 receptor, thus preventing angiotensin II formed by ACE and non-ACE pathways from binding to the AT1 receptor. ARBs also stimulate AT2 receptors [15, 18]. Interestingly, the AT2 receptor antagonizes many of the effects of the AT1 receptor, such as cell proliferation, and stimulation of the AT2 receptor appears to provide protection for certain organs, such as the brain against ischemia [15].

Long-term use of ACEIs can lead to secondary increases in angiotensin II and aldosterone through the secondary (non-ACE) pathways, a phenomenon also known as “ACE escape” [18]. Of the non-ACE pathways, the most important for the formation of angiotensin II is the chymase pathway [19]. Of significant interest, recent data suggest that the chymase pathway is upregulated in diabetic and hypertensive nephropathy, and thus ACE escape may be more marked in patients with renal disease [18, 19]. Chymase also has been found to be upregulated in the coronary vascular and kidney tissue of patients with diabetes in general [15, 20].

Although the phenomenon of ACE escape represents a drawback for the ACEI drug class in the treatment of hypertension, the ARB class is not without its own shortcomings. Treatment with ARBs may result in rebound concentrations of renin and angiotensin II by disrupting the negative feedback loop within the RAS [18]. The renal RAS has been shown to be separate from the systemic RAS, and the doses of ARBs necessary to achieve adequate renal tissue concentrations to inhibit the intrarenal RAS and prevent rebound of angiotensin II exceed those necessary to attain maximal BP-lowering effects [18]. Thus, it has been suggested that combination therapy with ACEIs and ARBs may provide the best option for patients with kidney disease, because some of these patients continue to progress to end-stage renal disease despite treatment with one or the other class as monotherapy [18].

The kidneys are not the only target organs at risk in patients with hypertension. Hypertension and upregulation of the RAS affect the heart, brain, and vascular endothelium as well, and there is evidence that blockade of the RAS can reduce damage to these target organs [15]. RAS activation has been noted to contribute to left ventricular hypertrophy in patients with primary hypertension independently of and in addition to the BP load exerted on the left ventricle [15]. The RAS may also play a role in the development of atrial fibrillation. RAS blockade by ARBs in animals has been shown to slow conductivity and to prevent left atrial dilation and fibrosis, suggesting that RAS blockade may be effective as a preventive and therapeutic strategy for atrial fibrillation [15]. Stroke is another important CVD complication, and hypertension contributes substantially to its risk. Good BP control is the most effective method of reducing this risk. However, meta-analyses indicate that ARBs provide benefits in stroke risk reduction that go beyond BP control [15]. Cerebral AT2 receptors exert neuroprotective effects in response to ischemic neuronal damage. Therefore, stimulation of these receptors by ARBs may prove more effective in stroke management than therapy with ACEIs [15, 21]. Atherosclerosis contributes to risk of coronary and cerebrovascular events.
The binding of angiotensin II to the AT1 receptors appears to be central to the atherosclerotic cascade, implicating the RAS in endothelial dysfunction and the development of atherosclerosis. Evidence suggests that both ACEIs and ARBs improve endothelial function [15]. Finally, RAS blockade may reduce insulin resistance, which is characteristic of both the metabolic syndrome and type 2 diabetes mellitus. Data indicate that both ACEIs and ARBs may reduce the frequency of new-onset type 2 diabetes in hypertensive patients, in contrast to β-blockers and diuretics, which do not [15].
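Because the preceding paragraphs describe where ACEIs and ARBs act on the cascade, a toy sketch can make the contrast concrete. The Python fragment below is an illustrative model only (all rate numbers are arbitrary placeholders, not pharmacological data); it shows why ARBs remain effective against angiotensin II generated via non-ACE pathways, whereas ACEIs are subject to ACE escape.

```python
# Toy model of the RAS cascade described above, illustrating where ACEIs
# and ARBs intervene. All numbers are arbitrary placeholders chosen only to
# make the qualitative point; this is not a pharmacological model.

def at1_signal(ace_share, non_ace_share, acei=False, arb=False):
    """Relative AT1-receptor stimulation (1.0 = untreated baseline)."""
    if acei:
        ace_share *= 0.1   # ACEIs suppress ACE-dependent production only
    ang_ii = ace_share + non_ace_share   # total angiotensin II reaching AT1
    if arb:
        ang_ii *= 0.05     # ARBs block AT1 binding regardless of how the
                           # angiotensin II was generated
    return ang_ii

print(f"untreated: {at1_signal(0.8, 0.2):.2f}")             # baseline 1.00
print(f"ACEI:      {at1_signal(0.8, 0.2, acei=True):.2f}")  # non-ACE residue
print(f"ARB:       {at1_signal(0.8, 0.2, arb=True):.2f}")   # near-complete block

# With a larger non-ACE share (e.g., chymase upregulation in diabetic or
# hypertensive nephropathy), the residual signal under an ACEI grows (the
# "ACE escape" phenomenon), while ARB blockade is unaffected:
print(f"ACEI, chymase-upregulated: {at1_signal(0.6, 0.4, acei=True):.2f}")
```

Under these placeholder numbers, the ACEI leaves the non-ACE fraction of the signal intact (0.28, rising to 0.46 when the chymase share grows), while the ARB suppresses AT1 stimulation regardless of the angiotensin II source, mirroring the qualitative argument in the text.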
## 6. The Present Paper
A large body of data now exists to support the use of angiotensin receptor blockers (ARBs) and ACE inhibitors (ACEIs) in the management of hypertension [15, 17, 22, 23].
## 7. Trials of ACEIs
The Heart Outcomes Prevention Evaluation (HOPE) trial investigated the effects of the ACEI ramipril on cardiovascular (CV) events in 9,297 patients who had diabetes or evidence of CVD (coronary, cerebrovascular, or peripheral artery disease) and were therefore considered at high risk, but who did not have left ventricular dysfunction or heart failure (HF) [23]. Patients were randomly assigned to receive either ramipril 10 mg once daily or matching placebo for 5 years. The primary endpoint was a composite of MI, stroke, or CV-related death.

The primary endpoint was reached by 14.1% (n=651) of those receiving ramipril and 17.8% (n=826) of those in the placebo group (P<.001). The relative risk was 0.78 (95% confidence interval [CI], 0.70 to 0.86); the upper bound of the CI implies a relative risk reduction of at least 14% [23]. Statistically significant reductions were also found for death from CV causes (6.1% for ramipril, 8.1% for placebo; relative risk, 0.74; P<.001), MI (9.9% versus 12.3%, resp.; relative risk, 0.80; P<.001), stroke (3.4% versus 4.9%, resp.; relative risk, 0.68; P<.001), and death from any cause (10.4% versus 12.2%; relative risk, 0.84; P<.005) [23]. Complications related to diabetes were significantly reduced as well (6.4% versus 7.6%, resp.; relative risk, 0.84; P<.03).

The findings of HOPE provided evidence-based support that ramipril is beneficial in a broad range of patients considered to be at high risk for CV events. Ramipril lowered the combined primary endpoint in the total patient population by 22%. The magnitude of benefit with ramipril was at least as great as that achieved with agents such as β-blockers, aspirin, and lipid-lowering agents for secondary prevention over 4 years of treatment [23]. In the subgroup of patients with diabetes (38.5%; n=3,577), the risk of the combined primary endpoint was significantly reduced by 25% (95% CI, 12–36; P=.0004), and progression to overt nephropathy was reduced by 24% (95% CI, 3–40; P=.027) [23, 24].

EUROPA (European trial On Reduction Of cardiac events with Perindopril in patients with stable coronary Artery disease) examined the use of another ACEI, perindopril, in 13,655 patients with stable coronary artery disease, including 64% with a previous MI, 61% with angiographic evidence of coronary artery disease, 55% with coronary revascularization, and 5% whose only evidence of coronary artery disease was a positive stress test. After a preliminary run-in period of 5 weeks, during which all patients received perindopril, patients were randomized to perindopril 8 mg once daily (n=6,110) or matching placebo (n=6,108). The primary outcome measure was time to first occurrence of CV death, MI, or cardiac arrest [25]. Patients also received other agents known to reduce CV risk, including β-blockers, aspirin, and lipid-lowering agents [25].

The mean follow-up was 4.2 years. The primary endpoint was experienced by 8% of those receiving perindopril and 10% of those on placebo, for a 20% relative risk reduction in favor of perindopril (95% CI, 9–29; P=.0003). The investigators concluded that in patients with stable coronary heart disease and without apparent HF, 50 patients would need to be treated with perindopril for 4 years to prevent one major CV event [25].

The Prevention of Events with Angiotensin Converting Enzyme Inhibition (PEACE) trial investigated an ACEI, trandolapril, in 8,290 patients with stable coronary artery disease.
Patients were randomized to either trandolapril 4 mg per day or matching placebo; 72% of patients had previously undergone coronary revascularization, and 70% received lipid-lowering drugs during the trial period [26]. The primary endpoint was death from CV causes, MI, or coronary revascularization. Over 4.8 years, this outcome occurred in 21.9% of those receiving trandolapril and 22.5% of those receiving placebo (hazard ratio, 0.96; 95% CI, 0.88–1.06; P=.43). This study indicated that the addition of an ACEI provides no further benefit in terms of death from CV causes, MI, or coronary revascularization in this population [26].
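To make the trial arithmetic above concrete, the short sketch below recomputes two of the headline figures from the event rates quoted in the text: the relative risk for HOPE and the number needed to treat for EUROPA. It is an illustrative reader's aid; the published values are derived from the raw event counts, so small rounding differences are expected.

```python
# Recompute headline statistics from the event rates quoted in the text.
# Illustrative arithmetic only; confidence intervals are not reproduced.

def relative_risk(rate_treated, rate_control):
    return rate_treated / rate_control

def number_needed_to_treat(rate_treated, rate_control):
    """NNT = 1 / absolute risk reduction over the trial's follow-up."""
    return 1.0 / (rate_control - rate_treated)

# HOPE: primary endpoint in 14.1% on ramipril vs 17.8% on placebo.
rr = relative_risk(0.141, 0.178)
print(f"HOPE relative risk ~ {rr:.2f} "
      f"(~{(1 - rr) * 100:.0f}% relative risk reduction; paper reports 0.78)")

# EUROPA: primary endpoint in 8% on perindopril vs 10% on placebo (~4.2 y).
nnt = number_needed_to_treat(0.08, 0.10)
print(f"EUROPA NNT ~ {nnt:.0f} patients treated for ~4 years "
      f"to prevent one major CV event (paper reports 50)")
```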
## 8. Trials of ARBs
ARBs have also figured prominently in recent clinical trials. The Candesartan in Heart Failure: Assessment of Reduction in Mortality and Morbidity (CHARM)-Alternative study looked at candesartan therapy in patients with chronic HF and reduced left-ventricular systolic function who were intolerant to ACEIs. A total of 2,028 patients with symptomatic HF and a left-ventricular ejection fraction of 40% or less were randomized to receive a targeted dose of candesartan 32 mg once daily or matching placebo. The primary endpoint was the composite of CV death or hospital admission for chronic HF [27].

Over a mean follow-up of 33.7 months, 33% of the patients receiving candesartan and 40% of those receiving placebo experienced the primary endpoint (hazard ratio, 0.77; 95% CI, 0.67–0.89; P=.0004), resulting in a 23% relative risk reduction with candesartan. Importantly, permanent discontinuation of study drug was similar in the candesartan (30%) and placebo (29%) groups [27].

Valsartan also was investigated in patients with chronic HF. A total of 5,010 chronic HF patients already receiving pharmacologic therapy considered optimal by their physicians (93% were on an ACEI at baseline) were randomly assigned to valsartan 160 mg twice daily or matching placebo. The primary endpoints were mortality and the combined endpoint of mortality and morbidity, defined as cardiac arrest with resuscitation, hospitalization for HF, or receipt of inotropic or vasodilator therapy for 4 hours or more [28].

Although overall mortality was similar in both groups, the combined endpoint was 13.2% lower in the valsartan group than with placebo (relative risk, 0.87; 97.5% CI, 0.77–0.97; P=.009). This latter result was primarily driven by a lower incidence of patients hospitalized for HF in the valsartan group compared with placebo (13.8% versus 18.2%, resp.; P<.001). Treatment with valsartan was also associated with improvement in New York Heart Association class, ejection fraction, signs and symptoms of HF, and quality of life as compared with placebo (P<.01). Thus, valsartan proved to be valuable when added to prescribed therapy in patients with HF. However, a post hoc analysis of a subgroup of patients receiving a combination of valsartan, an ACEI, and a β-blocker showed an increase in mortality and morbidity, suggesting that not all combinations improve patient outcomes [28].

The Ongoing Telmisartan Alone and in Combination with Ramipril Global Endpoint Trial (ONTARGET) was conducted in patients with vascular disease or high-risk diabetes without HF to determine whether the ARB telmisartan would be as effective as the ACEI ramipril, and whether a combination of both agents would be superior to ramipril alone. Patients were randomized to ramipril 10 mg daily (n=8,576), telmisartan 80 mg daily (n=8,542), or a combination of both agents (n=8,502). The primary composite endpoint was death from CV causes, MI, stroke, or hospitalization for HF [27].

At a median follow-up of 56 months, the primary endpoint was reached by 16.5% of those in the ramipril group and 16.7% in the telmisartan group (relative risk, 1.01; 95% CI, 0.94–1.09). The telmisartan group had a lower incidence of cough and angioedema and a higher incidence of hypotensive symptoms associated with permanent discontinuation of study medication compared with the ramipril group. The investigators concluded that telmisartan was as effective as ramipril in reducing the risk of CV death/MI/stroke and hospitalization for HF in this high-risk patient population [27].
## 9. Trials of Combination Therapy with ACEIs and ARBs
Because ACEIs and ARBs inhibit the RAS in different and potentially complementary ways, it was thought that combination therapy with these 2 drug classes might prove beneficial in preventing or mitigating target-organ damage in patients with hypertension. The CHARM-Added study evaluated the efficacy of candesartan in patients with chronic HF and reduced left-ventricular systolic function. A total of 2,548 patients were randomized to either a targeted dose of 32 mg of candesartan once daily or placebo in addition to concurrent ACEI therapy. The primary outcome was the composite of CV death or admission to hospital for chronic HF [29].

Over a median follow-up of 41 months, 38% of patients receiving candesartan and 42% receiving placebo experienced a primary outcome event. The hazard ratio was 0.85 (95% CI, 0.75–0.96; P=.011), significantly favoring candesartan versus placebo. The annual event rates were 14.1% in the candesartan group and 16.6% in the placebo group [29]. CHARM-Added showed that in patients with chronic HF and a low left-ventricular ejection fraction, the addition of candesartan to an ACEI led to further reductions in the risk of CV-related mortality and hospital admission for chronic HF [29].

In the Valsartan in Acute Myocardial Infarction Trial (VALIANT), the efficacy of monotherapy with valsartan, captopril, or the combination of the two was explored in patients who had experienced an acute MI. Within 0.5 to 10 days after the event, patients were randomized to valsartan (4,909 patients), captopril (4,909 patients), or the combination (4,885 patients). The primary study outcome was death from any cause [30].

At a median follow-up of 24.7 months, 19.9% of patients receiving valsartan, 19.5% of patients receiving captopril, and 19.3% of patients receiving combination therapy had died. These differences were not significant, and valsartan was found to be noninferior to captopril (P=.004), but no benefit was found for the combination therapy for this endpoint. However, drug-related adverse effects were more common with the combination of valsartan and captopril than in either monotherapy group [30].

ONTARGET found that the combination of telmisartan plus ramipril was not superior to ramipril alone. The primary outcome of CV death, MI, stroke, or hospitalization for HF occurred in 16.3% of patients receiving combination therapy, as compared with 16.5% of those receiving ramipril (relative risk, 0.99; 95% CI, 0.92–1.07). The combination resulted in a significantly higher incidence of hypotensive symptoms, syncope, and renal dysfunction compared with ramipril alone [27].

The results of ONTARGET suggest that combining 2 distinct classes of agents that inhibit the RAS at different sites does not improve patient outcomes in a broad spectrum of high-risk subjects without HF. This corroborates the findings of VALIANT and contrasts with those of CHARM-Added. However, it must also be noted that the ONTARGET and VALIANT patient populations differ fundamentally, because the latter trial was conducted in patients who had experienced an acute MI and had signs of HF, radiographic evidence of left ventricular systolic dysfunction, or both.
## 10. Adherence with RAS Agents
While the use of RAS agents has shown significant benefit within controlled studies, many patients continue to struggle with taking their prescribed medication in a way that achieves maximum outcomes. There are no recent studies specific to nurses and their role in the management of RAS agents relative to a patient's adherence to medications; however, several relevant factors emerge from other studies. A recent study in hypertension has shown that medication adherence was significantly associated with systolic blood pressure (r=.253, P<.04), underscoring the need for strict adherence to prescribed regimens [31]. This adherence can be increased with a long-term intervention from health professionals, as seen in the VALIDATE study; however, when such an intervention stops, adherence declines as well [32]. This decline indicates that a healthcare provider's intervention is only one component in increasing adherence. In a 2005 HealthStyles survey of 1,432 individuals who received prescriptions for antihypertensive medications, 407 (28.4%) reported having difficulty taking their medication. “Not remembering” was the most common reason reported (32.4%), but cost (22.6%), having no insurance (22.4%), side effects (12.5%), and not thinking there is any need (9.3%) were also important factors. Additionally, younger age, lower income, mental function impairment, and having had a blood pressure check more than 6 months earlier were factors significantly associated with nonadherence. While utilizing the right medications to decrease the risk of cardiovascular disease is vital, alleviating barriers to medication adherence should be a major goal of management [33].
## 11. Discussion
ARBs are a proven option in patients with hypertension, particularly those who are at risk for target-organ damage, such as those with diabetes or evidence of CVD. Evidence from clinical trials demonstrates that these agents provide good control of hypertension and reduce CV risk. To this end, some ACEIs and ARBs have received FDA-approved indications to reduce CV risk. Adverse effects associated with the use of ACEIs include cough, rash, taste disturbance, and angioedema. Cough, angioedema, taste disturbance, and rash occur less frequently with ARBs than with ACEIs. However, hypotension is more common with ARBs than with ACEIs [30].

The side-effect profile of ARBs may lead to better adherence on the part of patients. Adherence is notoriously poor in hypertension. Nonadherence to hypertension therapy is influenced by misunderstanding of the condition or treatment, denial of illness because of lack of symptoms, lack of patient involvement in the care plan, or unexpected adverse effects of medications. Many of these factors contribute to the 34% of hypertension patients who are not adequately controlled with their current treatment regimen [5].

However, adherence is an area in which nurses and nurse practitioners can have a positive impact. Patients must be motivated to take their medication as prescribed, and to do so they must understand why doing so matters. Patient motivation is enhanced through education, positive experiences with the healthcare system, and trust in the nurses and nurse practitioners who oversee medical care. Empathy by all healthcare professionals is a powerful motivator [5]. Patients must agree on BP goals, and the cost of medications and the complexity of the regimen must be taken into account. Patients must also be clear about their responsibility to adhere to the regimen and must make sensible lifestyle changes [5].
## 12. Conclusions
Inhibition of the RAS is an important target for cardioprotection. RAS inhibition not only controls BP, but it also reduces target-organ damage. This is especially important in populations at high risk for such damage, such as diabetics and those with chronic kidney disease. Both ARBs and ACEIs effectively control the RAS, offering important reductions in both BP and target-organ damage.
## 13. Relevance to Clinical Practice
Cardiovascular protection is a key element in the overall management of hypertension. Since the risk of CVD doubles with each 20/10 mmHg increment in BP, care should be given to selecting the right agent for an individual [5]. Given the overwhelmingly positive data on inhibition of the RAS, selection of an agent in these classes should be considered. However, other factors such as patient comorbidities, adherence, and risk for potential adverse drug events must also be considered when selecting an agent. Depending on the patient's needs, ARBs or ACEIs can be used to effectively inhibit the RAS, offering important reductions in both BP and target-organ damage. Before selecting which agent to use to manage hypertension, consideration should be given to agents in both classes with respect to their FDA-approved indications.
---
*Source: 101749-2010-08-12.xml* | 101749-2010-08-12_101749-2010-08-12.md | 27,880 | Cardiovascular Risk Reduction with Renin-Angiotensin Aldosterone System Blockade | Nancy Houston Miller | Nursing Research and Practice
(2010) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2010/101749 | 101749-2010-08-12.xml | ---
## Abstract
This paper examines the evidence supporting treatments within the renin-angiotensin aldosterone system (RAS), the role cardioprotection plays within the management of hypertension, considerations around medication adherence, and the role of the nurse or nurse practitioner in guiding patients to achieve higher hypertension control rates. A large body of data now exists to support the use of angiotensin receptor blockers (ARBs) and angiotensin-converting enzyme inhibitors (ACEIs) which act on RAS, in the management of hypertension and their effect on cardiovascular risk reduction. Current evidence suggests that inhibition of the RAS is an important target for cardioprotection. RAS inhibition controls blood pressure and also reduces target-organ damage. This is especially important in populations at high-risk for damage including patients with diabetes and those with chronic kidney disease. Both ARBs and ACEIs target the RAS offering important reductions in both BP and target organ damage.
---
## Body
## 1. Introduction
Nurse practitioners and nurses play a key role in the prevention and management of chronic conditions such as cardiovascular disease (CVD), diabetes mellitus, and kidney disease. Despite strides made in its treatment and prevention, CVD remains the leading cause of death worldwide [1]. Myocardial infarction (MI), stroke, and renal failure are its most common complications. In 2005, CVD was the underlying cause of 17.5 million deaths, or 30% of all deaths globally—nearly equal to the entire population of the state of Florida. MI accounted for 7.6 millions of those deaths and strokes for 5.7 millions [1]. In the United States, 631,636 died from heart disease, the number one cause of death, whereas 137,119 deaths occurred as the result of stroke and 45,344 as the result of kidney disease [2]. Stroke and kidney diseases are the third and ninth leading causes of death, respectively.The morbidity associated with CVD is high as well. Currently, approximately 24.1 million Americans have been diagnosed with heart disease, and this condition resulted in 2.4 million hospital discharges in 2005. Approximately 5.6 million Americans have at one time or another had a stroke, and in 2005, stroke accounted for 1 million hospital discharges. About 3.3 million Americans have been diagnosed with kidney disease [2]. The costs in terms of death, disability, reduced productivity or loss of income, and healthcare expense are enormous. US healthcare costs for CVD total more than $149 billions annually, or 17% of all medical expenditures [3].Risks associated with CVD include increasing age, male gender, heredity, hypertension, smoking, high blood cholesterol, lack of physical activity, diabetes, and obesity [4]. Clearly, age, gender, and heredity cannot be altered. Other risk factors are modifiable, and actions such as smoking cessation, eating a healthier diet, and getting adequate exercise can reduce an individual’s risk of developing CVD. Hypertension is the leading preventable risk factor. It has shown a continuous, consistent, and independent association with the risk of developing CVD [5]. However, control of hypertension remains less than optimal. Currently, only 1 in 3 patients with hypertension has achieved optimal blood pressure (BP) control [5].
## 2. Background
The renin-angiotensin aldosterone system (RAS; Figure1) is essential to the regulation of salt and water in the body [6, 7]. It is the RAS that maintains BP and vascular tone, primarily through signals from the kidney that are generated in response to changes in salt and water intake [6–8]. Although most of the RAS is based in the kidneys, there is tissue RAS as well [6, 7, 9]. The kidney or endocrine RAS is responsible for short-term volume and pressure adjustments, whereas the tissue RAS appears to affect long-term changes in the circulatory system [9, 10].Figure 1
Renin-angiotensin aldosterone system. Reprinted with permission from Ibrahim [8].
## 3. The RAS Cycle
The RAS cycle begins when angiotensinogen is produced in the liver and excreted. It is converted to angiotensin I by the enzyme renin, which is produced in the juxtaglomerular cells of the kidney. Angiotensin-converting enzyme (ACE) then converts angiotensin I to angiotensin II. Circulating angiotensin II activates AT1 receptors in a variety of target tissues, which results in increased water and sodium reabsorption, cell proliferation, and changes in vascular tone [7]. The consequences of these effects are an increase in blood volume and systemic vasoconstriction and a subsequent rise in BP [7, 8]. It is important to note that angiotensin II can be generated directly from angiotensinogen through non-ACE pathways, including cathepsin G, chymase, and ACE-2-dependent pathways [6, 8, 10]. These alternative pathways are responsible for persistent production of angiotensin II during ACE inhibition.Angiotensin II binds to both AT1 and AT2 receptors. AT1 upregulates the sympathetic nervous system, increasing vasoconstriction, aldosterone release, and sodium retention [6, 8, 10, 11]. Angiotensin II also promotes the production of free radicals, stimulates plasminogen activator inhibitor-1 release, and increases tissue factor and vascular cell adhesion molecule expression [6]. Additionally, angiotensin II has proatherogenic effects through promotion of vascular smooth muscle cell proliferation and leukocyte adhesion, thus playing an important role in the development of CVD [6, 8]. Angiotensin II also reduces the beneficial vasodilatory effects of nitric oxide through inhibition of nitric oxide synthase [10]. However, in binding to the AT2 receptor, angiotensin II mediates apparent beneficial effects that counterbalance AT1 receptor stimulation [10].
## 4. The RAS in Hypertension and CVD
Chronic elevation of RAS with subsequent exposure of tissues to high levels of angiotensin II results in hypertension, CVD, and target-organ damage. Hypertension creates stress on the blood vessel walls, giving rise to endothelial injury and thrombotic and inflammatory complications [12]. The vascular endothelium regulates blood fluidity and coagulation, vascular growth, inflammation, and vascular tone. These processes are primarily under the control of the renin-angiotensin and kallikrein-kinin systems [12, 13]. Bradykinin, a potent vasodilator, is degraded by ACE. In combination with the conversion of angiotensin I to angiotensin II, the reduction in bradykinin levels by ACE leads to enhanced vasoconstriction and inhibition of fibrinolysis [12, 14, 15] (Figure 2).Figure 2
Important effects of angiotensin II on mechanisms associated with atherosclerosis. Reprinted with permission from Schmieder et al. [15]The risks of CVD presented by the disruption of vascular homeostasis in the face of hypertension are increased in patients with diabetes mellitus. More than 65% of individuals with diabetes die from heart disease or stroke, and their risk of death from heart disease is 2 to 4 times higher than that of nondiabetic adults, whereas the risk of death from stroke is 2.8 times higher [16]. Approximately 73% of adults with diabetes have hypertension, and diabetes accounts for 44% of new cases of kidney disease each year [16]. It is the most common reason for kidney transplantation [17].
## 5. The Role of Angiotensin Receptor Blockers and ACE Inhibitors: RAS Inhibition
ACEIs and ARBs block the activity of the RAS in different ways. Whereas ACEIs prevent the formation of angiotensin II by inhibiting ACE, ARBs block the angiotensin II type 1 receptor, thus preventing angiotensin II formed by both ACE and non-ACE pathways from binding to the AT1 receptor. ARBs also stimulate AT2 receptors [15, 18]. Interestingly, the AT2 receptor antagonizes many of the effects of the AT1 receptor, such as cell proliferation, and stimulation of the AT2 receptor appears to provide protection for certain organs, such as the brain against ischemia [15].

Long-term use of ACEIs can lead to secondary increases in angiotensin II and aldosterone through the secondary (non-ACE) pathways, a phenomenon also known as “ACE escape” [18]. Of the non-ACE pathways, the most important for the formation of angiotensin II is the chymase pathway [19]. Of significant interest, recent data suggest that the chymase pathway is upregulated in diabetic and hypertensive nephropathy, and thus ACE escape may be more marked in patients with renal disease [18, 19]. Chymase has also been found to be upregulated in the coronary vascular and kidney tissue of patients with diabetes in general [15, 20].

Although the phenomenon of ACE escape represents a drawback for the ACEI drug class in the treatment of hypertension, the ARB class is not without its own shortcomings. Treatment with ARBs may result in rebound concentrations of renin and angiotensin II by disrupting the negative feedback loop within the RAS [18]. The renal RAS has been shown to be separate from the systemic RAS, and the doses of ARBs necessary to achieve adequate renal tissue concentrations to inhibit the intrarenal RAS and prevent rebound of angiotensin II exceed those necessary to attain maximal BP-lowering effects [18]. Thus, it has been suggested that combination therapy with ACEIs and ARBs may provide the best option for patients with kidney disease, because some of these patients continue to progress to end-stage renal disease despite treatment with one or the other class as monotherapy [18].

The kidneys are not the only target organs at risk in patients with hypertension. Hypertension and upregulation of the RAS affect the heart, brain, and vascular endothelium as well, and there is evidence that blockade of the RAS can reduce damage to these target organs [15]. RAS activation has been noted to contribute to left ventricular hypertrophy in patients with primary hypertension, independently of and in addition to the BP load exerted on the left ventricle [15]. The RAS may also play a role in the development of atrial fibrillation. RAS blockade by ARBs in animals has been shown to slow conductivity and to prevent left atrial dilation and fibrosis, suggesting that RAS blockade may be effective as a preventive and therapeutic strategy for atrial fibrillation [15]. Stroke is another important CVD complication, and hypertension contributes substantially to its risk. Good BP control is the most effective method of reducing this risk. However, meta-analyses indicate that ARBs provide a benefit in stroke risk reduction that goes beyond BP control [15]. Cerebral AT2 receptors exert neuroprotective effects in response to ischemic neuronal damage. Therefore, stimulation of these receptors by ARBs may prove more effective in stroke management than therapy with ACEIs [15, 21]. Atherosclerosis contributes to risk of coronary and cerebrovascular events.
The binding of angiotensin II to the AT1 receptors appears to be central to the atherosclerotic cascade, implicating the RAS in endothelial dysfunction and the development of atherosclerosis. Evidence suggests that both ACEIs and ARBs improve endothelial function [15]. Finally, RAS blockade may reduce insulin resistance, which is characteristic of both the metabolic syndrome and type 2 diabetes mellitus. Data indicate that both ACEIs and ARBs may reduce the frequency of new-onset type 2 diabetes in hypertensive patients, in contrast to β-blockers and diuretics, which do not [15].
## 6. The Present Paper
A large body of data now exists to support the use of angiotensin receptor blockers (ARBs) and ACE inhibitors (ACEIs) in the management of hypertension [15, 17, 22, 23].
## 7. Trials of ACEIs
The Heart Outcomes Prevention Evaluation (HOPE) trial investigated the effects of the ACEI ramipril on cardiovascular (CV) events in 9,297 patients who had diabetes or evidence of CVD (coronary, cerebrovascular, or peripheral artery disease) and were therefore considered at high risk, but who did not have left ventricular dysfunction or heart failure (HF) [23]. Patients were randomly assigned to receive either ramipril 10 mg once daily or matching placebo for 5 years. The primary endpoint was a composite of MI, stroke, or CV-related death.

The primary endpoint was reached by 14.1% (n=651) of those receiving ramipril and 17.8% (n=826) of those in the placebo group (P<.001). The relative risk was 0.78 (95% confidence interval [CI], 0.70 to 0.86); even the upper bound of this interval corresponds to a relative risk reduction of at least 14% [23]. Statistically significant reductions were also found for death from CV causes (6.1% for ramipril, 8.1% for placebo; relative risk, 0.74; P<.001), MI (9.9% versus 12.3%, resp.; relative risk, 0.80; P<.001), stroke (3.4% versus 4.9%, resp.; relative risk, 0.68; P<.001), and death from any cause (10.4% versus 12.2%; relative risk, 0.84; P<.005) [23]. Complications related to diabetes were significantly reduced as well (6.4% versus 7.6%, resp.; relative risk, 0.84; P<.03).

The findings of HOPE provided evidence-based support that ramipril is beneficial in a broad range of patients considered to be at high risk for CV events. Ramipril lowered the combined primary endpoint in the total patient population by 22%. The magnitude of benefit with ramipril was at least as great as that achieved with agents such as β-blockers, aspirin, and lipid-lowering agents for secondary prevention over 4 years of treatment [23]. In the subgroup of patients with diabetes (38.5%; n=3,577), the risk of the combined primary endpoint was significantly reduced by 25% (95% CI, 12–36; P=.0004), and progression to overt nephropathy was reduced by 24% (95% CI, 3–40; P=.027) [23, 24].

EUROPA (European trial on Reduction Of cardiac events with Perindopril in patients with stable coronary Artery disease) examined the use of another ACEI, perindopril, in 13,655 patients with stable coronary artery disease, including 64% with a previous MI, 61% with angiographic evidence of coronary artery disease, 55% with coronary revascularization, and 5% whose only evidence of coronary artery disease was a positive stress test. After a preliminary run-in period of 5 weeks, during which all patients received perindopril, patients were randomized to perindopril 8 mg once daily (n=6,110) or matching placebo (n=6,108). The primary outcome measure was time to first occurrence of CV death, MI, or cardiac arrest [25]. Patients also received other agents known to reduce CV risk, including β-blockers, aspirin, and lipid-lowering agents [25].

The mean follow-up was 4.2 years. The primary endpoint was experienced by 8% of those receiving perindopril and 10% of those on placebo, for a 20% relative risk reduction in favor of perindopril (95% CI, 9–29; P=.0003). The investigators concluded that in patients with stable coronary heart disease and without apparent HF, 50 patients would need to be treated with perindopril for 4 years to prevent one major CV event [25].

The Prevention of Events with Angiotensin Converting Enzyme Inhibition (PEACE) trial investigated an ACEI, trandolapril, in 8,290 patients with stable coronary artery disease.
Patients were randomized to either trandolapril 4 mg per day or matching placebo; 72% of patients had previously undergone coronary revascularization, and 70% received lipid-lowering drugs during the trial period [26]. The primary endpoint was death from CV causes, MI, or coronary revascularization. Over 4.8 years, this outcome occurred in 21.9% of those receiving trandolapril and 22.5% of those receiving placebo (hazard ratio, 0.96; 95% CI, 0.88–1.06; P=.43). This study indicated that the addition of an ACEI provides no further benefit in terms of death from CV causes, MI, or coronary revascularization in this population [26].
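As a worked check of the arithmetic behind two of the headline figures above (our illustration, using only the percentages reported in the text):

$$\text{HOPE:}\quad \mathrm{RR} \approx \frac{14.1\%}{17.8\%} \approx 0.79\ \text{(reported as 0.78 from the exact event counts)}$$

$$\text{EUROPA:}\quad \mathrm{ARR} = 10\% - 8\% = 2\%, \qquad \mathrm{NNT} = \frac{1}{0.02} = 50$$

The EUROPA calculation reproduces the investigators' conclusion that 50 patients would need to be treated for 4 years to prevent one major CV event.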
## 8. Trials of ARBs
ARBs have also figured prominently in recent clinical trials. The Candesartan in Heart Failure: Assessment of Reduction in Mortality and Morbidity (CHARM)-Alternative study looked at candesartan therapy in patients with chronic HF and reduced left-ventricular systolic function who were intolerant of ACEIs. A total of 2,028 patients with symptomatic HF and a left-ventricular ejection fraction of 40% or less were randomized to receive a targeted dose of candesartan 32 mg once daily or matching placebo. The primary endpoint was the composite of CV death or hospital admission for chronic HF [27].

Over a mean follow-up of 33.7 months, 33% of the patients receiving candesartan and 40% of those receiving placebo experienced the primary endpoint (hazard ratio, 0.77; 95% CI, 0.67–0.89; P=.0004), a 23% relative risk reduction with candesartan. Importantly, permanent discontinuation of the study drug was similar in the candesartan (30%) and placebo (29%) groups [27].

Valsartan also was investigated in patients with chronic HF. A total of 5,010 chronic HF patients already receiving pharmacologic therapy considered optimal by their physicians (93% were on an ACEI at baseline) were randomly assigned to valsartan 160 mg twice daily or matching placebo. The primary endpoints were mortality and the combined endpoint of mortality and morbidity, defined as cardiac arrest with resuscitation, hospitalization for HF, or receipt of inotropic or vasodilator therapy for 4 hours or more [28].

Although overall mortality was similar in both groups, the combined endpoint was 13.2% lower in the valsartan group than with placebo (relative risk, 0.87; 97.5% CI, 0.77–0.97; P=.009). This result was primarily driven by a lower incidence of hospitalization for HF in the valsartan group compared with placebo (13.8% versus 18.2%, resp.; P<.001). Treatment with valsartan was also associated with improvement in New York Heart Association class, ejection fraction, signs and symptoms of HF, and quality of life as compared with placebo (P<.01). Thus, valsartan proved to be valuable when added to prescribed therapy in patients with HF. However, in a post hoc analysis, a subgroup of patients receiving the combination of valsartan, an ACEI, and a β-blocker had an increase in mortality and morbidity, suggesting that not all combinations improve patient outcomes [28].

The Ongoing Telmisartan Alone and in Combination with Ramipril Global Endpoint Trial (ONTARGET) was conducted in patients with vascular disease or high-risk diabetes without HF to determine whether the ARB telmisartan would be as effective as the ACEI ramipril and whether a combination of both agents would be superior to ramipril alone. Patients were randomized to ramipril 10 mg daily (n=8,576), telmisartan 80 mg daily (n=8,542), or a combination of both agents (n=8,502). The primary composite endpoint was death from CV causes, MI, stroke, or hospitalization for HF [27].

At a median follow-up of 56 months, the primary endpoint was reached by 16.5% of those in the ramipril group and 16.7% in the telmisartan group (relative risk, 1.01; 95% CI, 0.94–1.09). The telmisartan group had a lower incidence of cough and angioedema and a higher incidence of hypotensive symptoms associated with permanent discontinuation of study medication compared with the ramipril group. The investigators concluded that telmisartan was as effective as ramipril in reducing the risk of CV death, MI, stroke, and hospitalization for HF in this high-risk patient population [27].
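The relative risk reductions quoted for these ARB trials follow directly from the reported hazard and risk ratios (again, our worked example from the figures in the text):

$$\text{CHARM-Alternative:}\quad \mathrm{RRR} = 1 - \mathrm{HR} = 1 - 0.77 = 23\%$$

$$\text{Valsartan HF trial:}\quad 1 - 0.87 = 13\%,\ \text{consistent with the reported 13.2\% reduction in the combined endpoint}$$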
## 9. Trials of Combination Therapy with ACEIs and ARBs
Because ACEIs and ARBs inhibit the RAS in different and potentially complementary ways, it was thought that combination therapy with these 2 drug classes might prove beneficial in preventing or mitigating target-organ damage in patients with hypertension. The CHARM-Added study evaluated the efficacy of candesartan in patients with chronic HF and reduced left-ventricular systolic function. A total of 2,548 patients were randomized to either a targeted dose of 32 mg of candesartan once daily or placebo, in addition to concurrent ACEI therapy. The primary outcome was the composite of CV death or admission to hospital for chronic HF [29].

Over a median follow-up of 41 months, 38% of patients receiving candesartan and 42% receiving placebo experienced a primary outcome event. The hazard ratio was 0.85 (95% CI, 0.75–0.96; P=.011), significantly favoring candesartan versus placebo. The annual event rates were 14.1% in the candesartan group and 16.6% in the placebo group [29]. CHARM-Added showed that in patients with chronic HF and a low left-ventricular ejection fraction, the addition of candesartan to an ACEI led to further reductions in the risk of CV-related mortality and hospital admission for chronic HF [29].

In the Valsartan in Acute Myocardial Infarction Trial (VALIANT), the efficacy of monotherapy with valsartan, monotherapy with captopril, or the combination of the 2 was explored in patients who had experienced an acute MI. Within 0.5 to 10 days after the event, patients were randomized to valsartan (4,909 patients), captopril (4,909 patients), or the combination (4,885 patients). The primary study outcome was death from any cause [30].

At a median follow-up of 24.7 months, 19.9% of patients receiving valsartan, 19.5% of patients receiving captopril, and 19.3% of patients receiving combination therapy had died. These differences were not significant, and valsartan was found to be noninferior to captopril (P=.004), but no benefit was found for combination therapy for this endpoint. Moreover, drug-related adverse effects were more common with the combination of valsartan and captopril than in either monotherapy group [30].

ONTARGET found that the combination of telmisartan plus ramipril was not superior to ramipril alone. The primary outcome of CV death, MI, stroke, or hospitalization for HF occurred in 16.3% of patients receiving combination therapy, as compared with 16.5% of those receiving ramipril (relative risk, 0.99; 95% CI, 0.92–1.07). The combination resulted in a significantly higher incidence of hypotensive symptoms, syncope, and renal dysfunction compared with ramipril alone [27].

The results of ONTARGET suggest that combining 2 distinct classes of agents that inhibit the RAS at different sites does not improve patient outcomes in a broad spectrum of high-risk subjects without HF. This corroborates the findings of VALIANT and contrasts with the findings of CHARM-Added. However, it must also be noted that the ONTARGET and VALIANT patient populations differ fundamentally, because the latter trial was conducted in patients who had experienced an acute MI and had signs of HF, radiographic evidence of left ventricular systolic dysfunction, or both.
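Similarly, the CHARM-Added annual event rates can be converted into an approximate number needed to treat (our illustration; the trial itself reports only rates and hazard ratios):

$$\mathrm{ARR} \approx 16.6\% - 14.1\% = 2.5\ \text{percentage points per year}, \qquad \mathrm{NNT} \approx \frac{1}{0.025} = 40$$

that is, roughly 40 patients treated for one year to prevent one CV death or HF admission.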
## 10. Adherence with RAS Agents
While the use of RAS agents has shown significant benefit in controlled studies, many patients still struggle to take their prescribed medication consistently enough to achieve maximum outcomes. There are no recent studies specific to nurses and their role in the management of RAS agents relative to a patient’s adherence to medications; however, several relevant factors can be seen in other studies. A recent study in hypertension showed that medication adherence was significantly associated with systolic blood pressure (r=.253, P<.04), prompting the need for strict adherence to prescribed regimens [31]. Adherence can be increased with a long-term intervention from health professionals, as seen in the VALIDATE study; however, when such an intervention stops, adherence declines as well [32]. This decline indicates that a healthcare provider’s intervention is only one component in increasing adherence. In a 2005 HealthStyles survey of 1,432 individuals who received prescriptions for antihypertensive medications, 407 (28.4%) reported having difficulty taking their medication. “Not remembering” was the most common reason reported (32.4%), but cost (22.6%), having no insurance (22.4%), side effects (12.5%), and not thinking there was any need (9.3%) were also important factors. Additionally, younger age, lower income, mental function impairment, and having had a blood pressure check more than 6 months earlier were significantly associated with nonadherence. While utilizing the right medications to decrease the risk of cardiovascular disease is vital, alleviating barriers to medication adherence should be a major goal of management [33].
## 11. Discussion
ARBs are a proven option in patients with hypertension, particularly those who are at risk for target-organ damage, such as those with diabetes or evidence of CVD. Evidence from clinical trials demonstrates that these agents provide good control of hypertension and reduce CV risk. To this end, some ACEIs and ARBs have received FDA-approved indications to reduce CV risk. Adverse effects associated with the use of ACEIs include cough, rash, taste disturbance, and angioedema. Cough, angioedema, taste disturbance, and rash occur less frequently with ARBs than with ACEIs; however, hypotension is more common with ARBs than with ACEIs [30].

The side-effect profile of ARBs may lead to better adherence on the part of patients. Adherence is notoriously poor in hypertension. Nonadherence to hypertension therapy is influenced by misunderstanding of the condition or treatment, denial of illness because of lack of symptoms, lack of patient involvement in the care plan, and unexpected adverse effects of medications. Many of these factors contribute to the 34% of hypertension patients who are not adequately controlled on their current treatment regimen [5].

However, adherence is an area in which nurses and nurse practitioners can have a positive impact. Patients must be motivated to take their medication as prescribed, and to do so, they must understand the importance of doing so. Patient motivation is enhanced through education, positive experiences with the healthcare system, and trust in the nurses and nurse practitioners who oversee medical care. Empathy by all healthcare professionals is a powerful motivator [5]. Patients must agree on BP goals, and the cost of medications and the complexity of the regimen must be taken into account. Patients must also be clear about their responsibility to adhere to the regimen and must make sensible lifestyle changes [5].
## 12. Conclusions
Inhibition of the RAS is an important strategy for cardioprotection. RAS inhibition not only controls BP but also reduces target-organ damage. This is especially important in populations at high risk for such damage, such as patients with diabetes and those with chronic kidney disease. Both ARBs and ACEIs effectively control the RAS, offering important reductions in both BP and target-organ damage.
## 13. Relevance to Clinical Practice
Cardiovascular protection is a key element in the overall management of hypertension. Since the risk of CVD doubles with each increment of 20/10 mmHg in BP, care should be given to selecting the right agent for an individual [5]. Given the overwhelmingly positive data on inhibition of the RAS, selection of an agent from these classes should be considered. However, other factors, such as patient comorbidities, adherence, and risk for potential adverse drug events, must also be considered when selecting an agent. Depending on the patient’s needs, ARBs or ACEIs can be used to effectively inhibit the RAS, offering important reductions in both BP and target-organ damage. Before selecting which agent to use to manage hypertension, consideration should be given to agents of both classes with respect to their FDA-approved indications.
---
*Source: 101749-2010-08-12.xml* | 2010 |
# Unusual Presentation of Gianotti-Crosti Syndrome due to Epstein-Barr Virus Infection
**Authors:** Hind Saif Al Dhaheri; Amani Al Kaabi; Yasmin Kara Hamo; Aysha Al Kaabi; Salwa Al Kaabi; Hossam Al Tatari
**Journal:** Case Reports in Dermatological Medicine
(2016)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2016/1017524
---
## Abstract
Gianotti-Crosti syndrome (GCS) is a viral exanthem of childhood. It typically presents with a symmetric erythematous papular and papulovesicular eruption. It has been classically associated with hepatitis B virus and, rarely, with Epstein-Barr virus (EBV). We report a case of GCS related to EBV infection without the classical systemic symptoms in a five-year-old male patient.
---
## Body
## 1. Introduction
Gianotti-Crosti syndrome (GCS) is a viral exanthem of childhood that appears most commonly in children between the ages of 1 and 6 years but has also been reported in children from 3 months to 15 years of age [1].

The most common causative agent is hepatitis B virus (HBV). Other agents include hepatitis C and hepatitis A, influenza, parainfluenza, adenovirus, cytomegalovirus (CMV), Epstein-Barr virus (EBV), and respiratory syncytial virus (RSV) [2]. GCS has also been associated with vaccine administration [3], including the oral polio vaccine, the pentavalent vaccine, the Diphtheria, Pertussis, and Tetanus (DPT) vaccine, the hepatitis B and Haemophilus influenzae type b vaccines, and the hepatitis A vaccine. The mean time to development of the rash ranges from 2 to 21 days [4].

This viral exanthem typically consists of symmetrical erythematous papular or vesiculopapular eruptions with an acral spread. It frequently starts from the buttocks and spreads to other areas of the body [5, 6]. GCS is a clinical diagnosis, and the treatment is supportive since it is a self-limited disease [1, 5].
## 2. Case Report
A previously healthy 5-year-old male was admitted to the paediatric medical ward with a 2-day history of blood-tinged diarrhoea and a 2-week history of abdominal pain with an itchy rash. No fever, joint pain, cough, nasal congestion, or urinary symptoms were reported at that time. He was fully vaccinated, and his family medical history was completely negative.

On physical examination, he had normal growth parameters and normal vital signs. The systems examination was unremarkable apart from a symmetrical, itchy, erythematous maculopapular rash over the extensor surfaces of the upper and lower limbs. The rash was mainly concentrated over the elbows and ankles. No lymphadenopathy, hepatosplenomegaly, or joint involvement was appreciated at that time.

The rash morphology and distribution were suggestive of GCS, and he was therefore investigated accordingly. Investigations revealed the following results: the complete blood count showed a normal WBC count with monocytosis (1.2 × 10^9/L); renal and liver function tests were within the normal range for age; and urinalysis was normal. Hepatitis B surface antigen (HBsAg) was negative. Despite the lack of a typical mononucleosis syndrome, EBV infection was suspected because of the significant monocytosis in the CBC. Therefore, EBV PCR was checked and turned out to be positive. He was provided with symptomatic care and was discharged after two days in good condition. The patient was followed up regularly in the outpatient clinic, and the rash was found to have faded completely by the end of the fifth week of illness.
## 3. Discussion
Gianotti-Crosti syndrome (GCS) is a viral exanthem of childhood that was first reported by Ferdinando Gianotti and Agostino Crosti in 1957 as a monomorphous erythematous rash of infants and children [1, 5].

The incidence of GCS is not well defined, although many believe that it is an underdiagnosed condition. It appears most commonly in children between the ages of 1 and 6 years, with a few case reports in children as young as 3 months and as old as 15 years of age. In the paediatric population there are no statistical differences by gender or race [1]. GCS has a higher incidence during spring and summer [2] and in patients with a personal or family history of atopy [1].

The pathogenesis of GCS is not clear; however, studies have proposed two main hypotheses. The first is an IgE-mediated response, supported by the fact that GCS is seen more often in patients with atopy. The second is a virus-induced delayed-type hypersensitivity reaction, suggested by the high CD4+ T-cell counts in the dermal infiltrate of affected patients [1, 5].

Case reports have clearly indicated the association between infectious triggers and the development of GCS. The most common causative agent is HBV; other agents include hepatitis C and hepatitis A (HAV), EBV, influenza, parainfluenza, adenovirus, CMV, and RSV [2]. Though rare, bacterial causes have been reported as well, including Bartonella henselae, β-hemolytic streptococci, Borrelia burgdorferi, and Mycoplasma pneumoniae [1, 5]. GCS has also been reported after vaccine administration, such as the influenza virus vaccine, HBV and HAV vaccines, oral polio vaccine, and measles-mumps-rubella vaccine; however, no causal correlation has been established [3, 5, 7]. In our case, GCS was confirmed to be caused by EBV.

The main clinical feature of GCS is a viral exanthem, which may be preceded by an upper respiratory infection, diarrhoea, or pharyngitis. The rash of GCS is characterized by a symmetrical erythematous papular or vesiculopapular eruption, frequently starting from the buttocks and spreading acrally [4, 5]. Other areas of the body (trunk, elbows, knees, palms, and soles) are usually not affected; however, their involvement does not exclude the diagnosis. The size of the lesions ranges between 1 and 10 millimetres. Most children have spontaneous resolution of the rash in 10 days to 6 months, although there are reports of the rash lasting for as short as 5 days and as long as 12 months. Mild to severe pruritus may be present and may last up to several weeks. In our case the rash was itchy and distributed over the extensor surfaces of the upper and lower limbs while being concentrated around the knees and elbows, which is typical for GCS.

Systemic manifestations associated with GCS include low-grade fever, malaise, diarrhoea, and lymphadenopathy (25–35% of patients). Hepatitis is rare and is mainly seen with HBV, EBV, and CMV infection in the form of anicteric hepatitis. Splenomegaly has rarely been reported [1, 5]. Our patient was unique in that he did not have any of the above manifestations despite having a positive blood EBV PCR. His blood-tinged diarrhoea could be explained by his chronic constipation and recent use of a Fleet enema.

GCS is a self-limited benign disease that can be puzzling to paediatricians and disturbing to many parents. Reaching the diagnosis and educating the parents about it will surely help in resolving parents’ anxiety. For pruritus, treatment depends on the severity and ranges from topical emollients to oral antihistamines [1, 5].
## 4. Conclusion
By reporting this unique case, we encourage paediatricians to consider GCS in their differential diagnosis of viral exanthems and to consider EBV as an underlying cause even in the absence of the typical features of EBV infection.
---
*Source: 1017524-2016-12-05.xml* | 2016 |
# Clinical Trials and Treatment of ATL
**Authors:** Kunihiro Tsukasaki; Kensei Tobinai
**Journal:** Leukemia Research and Treatment
(2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/101754
---
## Abstract
ATL is a distinct peripheral T-lymphocytic malignancy associated with human T-cell lymphotropic virus type I (HTLV-1). The diversity in clinical features and prognosis of patients with this disease has led to its subtype classification into four categories (acute, lymphoma, chronic, and smoldering types), defined by organ involvement and by LDH and calcium values. For the acute, lymphoma, or unfavorable chronic subtypes (aggressive ATL), intensive chemotherapy such as the LSG15 regimen (VCAP-AMP-VECP) is usually recommended outside of clinical trials, based on the results of a phase 3 trial. For favorable chronic or smoldering ATL (indolent ATL), watchful waiting until disease progression has been recommended, although the long-term prognosis is inferior to that of, for instance, chronic lymphoid leukemia. Retrospective analysis suggests that the combination of interferon alpha and zidovudine is promising for the treatment of ATL, especially for subtypes with leukemic manifestation. Allogeneic hematopoietic stem cell transplantation (allo-HSCT) is also promising for the treatment of aggressive ATL, possibly reflecting a graft-versus-ATL effect. Several new-agent trials for ATL are ongoing or in preparation, including a defucosylated humanized anti-CC chemokine receptor 4 monoclonal antibody, IL-2 fused with diphtheria toxin, histone deacetylase inhibitors, a purine nucleoside phosphorylase inhibitor, a proteasome inhibitor, and lenalidomide.
---
## Body
## 1. Introduction
Adult T-cell leukemia-lymphoma (ATL) was first described in 1977 by Uchiyama et al. as a distinct clinicopathological entity with a suspected viral etiology because of the clustering of the disease in the southwest region of Japan [1]. Subsequently, a novel RNA retrovirus, human T-cell leukemia/lymphotropic virus type I (HTLV-1), was isolated from a cell line established from the leukemic cells of an ATL patient, and the finding of a clear association with ATL led to its inclusion among human carcinogenic pathogens [2–5]. In the mid-1980s and 1990s, several inflammatory diseases were reported to be associated with HTLV-1 [6–10]. At the same time, endemic areas for the virus and its associated diseases were identified (reviewed in [11–13]). Diversity in ATL has been recognized, and a classification of clinical subtypes of the disease was proposed [14]. This article reviews the current understanding of ATL, focusing on treatment of the disease.
## 2. Clinical Features and Laboratory Findings of ATL
ATL patients show a variety of clinical manifestations because of various complications of organ involvement by ATL cells, opportunistic infections, and/or hypercalcemia [11–14]. These three often contribute to the extremely high mortality of the disease. Lymph node, liver, spleen, and skin lesions are frequently observed. Less frequently, the digestive tract, lungs, central nervous system, bone, and/or other organs may be involved. Large nodules, plaques, ulcers, and erythroderma are common skin lesions [15–17]. Immune suppression is common. Approximately 26% of 854 patients with ATL had active infections at diagnosis in a prior nationwide study in Japan [14]. The incidence was highest in the chronic and smoldering types (36%) and lower in the acute (27%) and lymphoma types (11%). The infections were bacterial in 43%, fungal in 31%, protozoal in 18%, and viral in 8% of patients. The immunodeficiency at presentation in ATL patients can be exacerbated by cytotoxic chemotherapy. Individuals with indolent ATL might have no manifestation of the disease and are identified only by health checkups and laboratory examinations.

ATL cells are usually detected quite easily in the blood of affected individuals, except in the smoldering type with mainly skin manifestations and in the lymphoma type [14]. These so-called “flower cells” have highly indented or lobulated nuclei with condensed chromatin, small or absent nucleoli, and an agranular, basophilic cytoplasm [18]. Histological analysis of aberrant cutaneous lesions or lymph nodes is essential for the diagnosis of the smoldering type with mainly skin manifestations and of the lymphoma type, respectively. Because ATL cells in the skin and lymph node can vary in size from small to large and in form from pleomorphic to anaplastic and Hodgkin-like, with no specific histological pattern of involvement, differentiating ATL from Sezary syndrome, other peripheral T-cell lymphomas, and Hodgkin lymphoma can at times be difficult without examination of HTLV-1 serotype/genotype [13, 19].

Hypercalcemia is the most distinctive laboratory abnormality in ATL as compared to other lymphoid malignancies and is observed in 31% of patients (50% in the acute type, 17% in the lymphoma type, and 0% in the other two types) at onset [14]. Individuals with hypercalcemia do not usually have osteolytic bone lesions. Parathyroid hormone-related protein or receptor activator of nuclear factor kappa B ligand (RANKL) produced by ATL cells is considered the main factor causing hypercalcemia [20, 21].

Similar to serum LDH, β2-microglobulin, and serum thymidine kinase levels, which reflect disease bulk/activity, the level of the soluble form of the interleukin (IL)-2 receptor alpha-chain is elevated in the order of acute/lymphoma-type ATL, smoldering/chronic-type ATL, and HTLV-1 carriers as compared with normal individuals, perhaps with better accuracy than the other markers [22–24]. These serum markers are useful for detecting the acute transformation of indolent ATL as well as early relapse of ATL after responses to therapy.

Prototypical ATL cells have a mature alpha-beta T-cell phenotype: they are terminal deoxynucleotidyl transferase (TdT)-negative, cluster of differentiation (CD) 1a-negative, T-cell receptor alpha-beta-positive, and positive for CD2, CD5, CD45RO, and CD29, and they frequently do not express CD7 and CD26. A decline in the CD3 level with the appearance of CD25 indicates that the ATL cells are in an activated state.
Most ATL cells are CD52-positive, but some are negative, and this may correlate with the coexpression of CD30. About 90% of cases are CD4-positive and CD8-negative; in rare cases the cells either coexpress CD4 and CD8, are negative for both markers, or are only CD8-positive [25]. CC chemokine receptor 4 (CCR4) is expressed in more than 90% of cases and is associated with a poor prognosis. Recent studies have suggested that the cells of some ATL cases may be the equivalent of regulatory T-cells because of the high frequency of expression of CD25 and CCR4 and, in about half of cases, FoxP3 [26–28].
## 3. Diagnosis of ATL
The diagnosis of typical ATL is not difficult and is based on clinical features, ATL cell morphology, mature helper-T-cell phenotype, and anti-HTLV-1 antibody in most cases [13]. Those rare cases, which might be difficult to diagnose, can be shown to have the monoclonal integration of HTLV-1 proviral DNA in the malignant cells as determined by Southern blotting. However, the monoclonal integration of HTLV-1 is also detected in some HAM/TSP patients and HTLV-1 carriers [29, 30]. After the diagnosis of ATL, subtype classification of the disease is necessary for the selection of appropriate treatment [14, 31].
## 4. Definition, Prognostic Factors, and Subtype Classification of ATL
ATL is a distinct peripheral T-lymphocytic malignancy associated with a retrovirus designated human T-cell leukemia virus type I or human T-cell lymphotropic virus type I (HTLV-1) [1, 11–14, 31].

Major prognostic indicators for ATL, elucidated by multivariate analysis in 854 patients with ATL in Japan by the Lymphoma Study Group (LSG) of the Japan Clinical Oncology Group (JCOG), were advanced performance status (PS), high lactic dehydrogenase (LDH) level, age of 40 years or more, more than 3 involved lesions, and hypercalcemia [32]. A classification of clinical subtypes into acute, lymphoma, chronic, and smoldering types was also proposed based on prognostic factors and clinical features of the disease [14]. The leukemic subtypes include all of the chronic type and most of the acute and smoldering types. The acute type has a rapid course, usually with leukemic manifestation (≥2% ATL cells), with or without lymphocytosis (>4 × 10^9/L) including ATL cells, and most of the characteristic features of ATL: generalized lymphadenopathy, hepatosplenomegaly, skin involvement, other organ involvement, a high LDH value, and hypercalcemia. The symptoms and signs include abdominal pain, diarrhea, ascites, jaundice, unconsciousness, dyspnea, pleural effusion, cough, sputum, and chest X-ray abnormalities because of organ involvement, hypercalcemia, and/or opportunistic infections. The smoldering type shows an indolent course with 5% or more leukemic cells in the peripheral blood without lymphocytosis, but may include skin/lung involvement; the calcium level is below the upper limit of normal, and the LDH level is less than 1.5 times the upper limit. The chronic type, with absolute lymphocytosis (≥4 × 10^9/L) whose cells show flower-cell morphology less frequently than in the acute type, is frequently associated with skin involvement and occasionally with lymphadenopathy, and usually shows a relatively indolent course; the calcium level is below the upper limit of normal, and the LDH level is less than double the upper limit. The lymphoma type presents with the manifestations of a nodal lymphoma without leukemic cells, frequently with high LDH/Ca levels, a rapid course, and symptoms and signs similar to the acute type. In ATL, clinical subtype is more important than Ann Arbor stage for predicting prognosis and deciding treatment, because leukemic manifestation, defined as stage IV, is frequent.

Additional factors associated with a poor prognosis include thrombocytopenia, eosinophilia, bone marrow involvement, a high interleukin (IL)-5 serum level, C-C chemokine receptor 4 (CCR4) expression, lung resistance-related protein (LRP), p53 mutation, and p16 deletion by multivariate analysis [26, 27, 33–37]. Specific to the chronic type of ATL, high LDH, high blood urea nitrogen (BUN), and low albumin levels were identified as factors for a poor prognosis by multivariate analysis [11]. The primary cutaneous tumoral type, although generally included among smoldering ATL, had a poor prognosis in univariate analysis [15].
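The subtype definitions above amount to a small decision procedure over a handful of laboratory and clinical parameters. The sketch below restates that logic in code purely as an illustration; it is a deliberate simplification of the published criteria (it omits ATL-cell percentage checks and organ-involvement details), and the function name, parameter names, and boolean inputs are ours:

```python
# A deliberately simplified sketch of the subtype logic summarized above.
# It is NOT the full classification, which also weighs histology, organ
# involvement, and ATL-cell percentages; thresholds here only mirror the
# parameters mentioned in the text (lymphocytes in 10^9/L, LDH as a
# multiple of the upper limit of normal, calcium vs. upper limit).

def atl_subtype(lymphocytes_e9_per_l, ldh_x_uln, calcium_above_uln,
                nodal_lymphoma_without_leukemic_cells, indolent_course):
    """Return a rough ATL subtype label from a few lab/clinical parameters."""
    if nodal_lymphoma_without_leukemic_cells:
        return "lymphoma"
    if indolent_course and lymphocytes_e9_per_l < 4:
        # Smoldering: no lymphocytosis, normal Ca, LDH < 1.5 x ULN
        if not calcium_above_uln and ldh_x_uln < 1.5:
            return "smoldering"
    if indolent_course and lymphocytes_e9_per_l >= 4:
        # Chronic: lymphocytosis, normal Ca, LDH < 2 x ULN
        if not calcium_above_uln and ldh_x_uln < 2.0:
            return "chronic"
    return "acute"

print(atl_subtype(6.2, 1.2, False, False, True))   # -> "chronic"
print(atl_subtype(8.0, 3.5, True, False, False))   # -> "acute"
```

A real classifier would additionally split the chronic type into favorable and unfavorable forms using the albumin, BUN, and LDH factors noted above.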
## 5. Clinical Course, Treatment, and Response Criteria of ATL
Treatment decisions should be based on the ATL subtype classification and on prognostic factors at onset, including those related to ATL and comorbidity [31]. As mentioned above, the subtype classification of this disease has been proposed based on prognosis and clinical manifestations. Without treatment, most patients with acute- or lymphoma-type ATL die of the disease or of infections within weeks or months. More than half of patients with smoldering ATL survive for more than 5 years without chemotherapy or transformation to aggressive ATL. Chronic ATL has the most diverse prognosis among the subtypes and can be divided into favorable and unfavorable forms by clinical parameters (serum albumin, BUN, and LDH levels) identified in a multivariate analysis [31].

Current treatment options for ATL include watchful waiting until the disease progresses, interferon alpha (IFN) and zidovudine (AZT) therapy, multiagent chemotherapy, allogeneic hematopoietic stem cell transplantation (allo-HSCT), and new agents [15].
### 5.1. Watchful Waiting
At present, no standard treatment for ATL exists. Therefore, patients with the smoldering or favorable chronic type, who may survive one or more years without chemotherapy (excluding topical therapy for cutaneous lesions), should be observed, and therapy should be delayed until progression of the disease [31]. However, it was recently found that the long-term prognosis of such patients was poorer than expected. In a long-term follow-up study of 78 patients with indolent ATL (favorable chronic or smoldering type) managed with a policy of watchful waiting until disease progression at a single institution, the median survival time was 5.3 years with no plateau in the survival curve; twelve patients remained alive for >10 years, 32 progressed to acute ATL, and 51 died [38]. Recently, a striking benefit of early intervention in indolent ATL with IFN and an antiretroviral agent was reported in a meta-analysis [39]. This modality should be extensively evaluated in larger clinical trials to establish appropriate management practices for indolent ATL.
### 5.2. Chemotherapy
Since 1978, chemotherapy trials have been consecutively conducted for patients newly diagnosed with ATL by JCOG’s Lymphoma Study Group (LSG) (Table 1) [40–45]. Between 1981 and 1983, JCOG conducted a phase III trial (JCOG8101) to evaluate LSG1-VEPA (vincristine, cyclophosphamide, prednisone, and doxorubicin) versus LSG2-VEPA-M (VEPA plus methotrexate (MTX)) for advanced non-Hodgkin lymphoma (NHL), including ATL [40, 41]. The complete response (CR) rate of LSG2-VEPA-M for ATL (37%) was higher than that of LSG1-VEPA (17%; P=.09). However, the CR rate was significantly lower for ATL than for B-cell NHL and peripheral T-cell lymphoma (PTCL) other than ATL (P<.001). The median survival time of the 54 patients with ATL was 6 months, and the estimated 4-year survival rate was 8%.

Table 1
Results of sequential chemotherapeutic trials of untreated patients with ATL (JCOG-LSG).

| | J7801 (LSG1) | J8101 (LSG1/LSG2) | J8701 (LSG4) | J9109 (LSG11) | J9303 (LSG15) | JCOG9801 (mLSG15) | JCOG9801 (mLSG19) |
|---|---|---|---|---|---|---|---|
| Pts. No. | 18 | 54 | 43 | 62 | 96 | 57 | 61 |
| CR (%) | 16.7 | 27.8 | 41.9 | 28.3 | 35.5 | 40.4 | 24.6 |
| CR + PR (%) | | | | 51.6 | 80.6 | 72.0 | 65.6 |
| MST (months) | 7.5 | 8.0 | 8.0 | 7.4 | 13.0 | 12.7 | 10.9 |
| 2-yr survival (%) | | | | 17.0 | 31.3 | | |
| 3-yr survival (%) | 10.0 | | | | 21.9 | 23.6 | 12.7 |
| 4-yr survival (%) | | 8.0 | 11.6 | | | | |

CR: complete remission; PR: partial remission; MST: median survival time.

In 1987, JCOG initiated a multicenter phase II study (JCOG8701) of a multiagent combination chemotherapy (LSG4) for advanced aggressive NHL (including ATL). LSG4 consisted of three regimens: (1) VEPA-B (VEPA plus bleomycin), (2) M-FEPA (methotrexate, vindesine, cyclophosphamide, prednisone, and doxorubicin), and (3) VEPP-B (vincristine, etoposide, procarbazine, prednisone, and bleomycin) [42]. The CR rate for ATL patients improved from 28% (JCOG8101) to 43% (JCOG8701); however, the CR rate was significantly lower in ATL than in B-cell NHL and PTCL (P<.01). Patients with ATL still showed a poor prognosis, with a median survival time of 8 months and a 4-year survival rate of 12%.

The disappointing results with conventional chemotherapies led to a search for new active agents. Multicenter phase I and II studies of pentostatin (2′-deoxycoformycin, an inhibitor of adenosine deaminase) were conducted against ATL in Japan [43]. The phase II study revealed a response rate of 32% (10 of 31) in cases of relapsed or refractory ATL (2 CRs and 8 PRs).

These encouraging results prompted the investigators to conduct a phase II trial (JCOG9109) with a pentostatin-containing combination (LSG11) as the initial chemotherapy [44]. Patients with aggressive ATL, that is, of the acute, lymphoma, or unfavorable chronic type, were eligible for this study. Unfavorable chronic-type ATL, defined as having at least 1 of 3 unfavorable prognostic factors (low serum albumin level, high LDH level, or high BUN), has an unfavorable prognosis similar to that of acute- and lymphoma-type ATL. A total of 62 untreated patients with aggressive ATL (34 acute, 21 lymphoma, and 7 unfavorable chronic type) were enrolled. A regimen of 1 mg/m2 vincristine on days 1 and 8, 40 mg/m2 doxorubicin on day 1, 100 mg/m2 etoposide on days 1 through 3, 40 mg/m2 prednisolone (PSL) on days 1 and 2, and 5 mg/m2 pentostatin on days 8, 15, and 22 was administered every 28 days for 10 cycles. Among the 61 patients evaluable for toxicity, four (7%) died of infections: two from septicemia and two from cytomegalovirus pneumonia. Among the 60 eligible patients, there were 17 CRs (28%) and 14 partial responses (PRs), for an overall response rate (ORR) of 52%. The median survival time was 7.4 months, and the estimated 2-year survival rate was 17%. The prognosis of patients with ATL remained poor even when they were treated with a pentostatin-containing combination chemotherapy.

In 1994, JCOG initiated a phase II trial (JCOG9303) of an eight-drug regimen (LSG15) consisting of vincristine, cyclophosphamide, doxorubicin, prednisone, ranimustine, vindesine, etoposide, and carboplatin for untreated ATL [45]. Dose intensification was attempted with the prophylactic use of granulocyte colony-stimulating factor (G-CSF). In addition, non-cross-resistant agents, such as ranimustine and carboplatin, and intrathecal prophylaxis with MTX and PSL were incorporated. Ninety-six previously untreated patients with aggressive ATL were enrolled: 58 acute, 28 lymphoma, and 10 unfavorable chronic types. Approximately 81% of the 93 eligible patients responded (75/93), with 33 patients obtaining a CR (35%).
The overall survival rate of the 93 patients at 2 years was estimated to be 31%, with a median survival time of 13 months. Grade 4 neutropenia and thrombocytopenia were observed in 65% and 53% of the patients, respectively, whereas grade 4 nonhematologic toxicity was observed in only one patient.

Dose intensification of CHOP with prophylactic use of G-CSF was expected to improve survival among patients with aggressive NHL, and our randomized phase II study (JCOG9505) comparing CHOP-14 (LSG19) and dose-escalated CHOP (LSG20) for aggressive NHL excluding ATL showed biweekly CHOP to be the more promising [46]. Therefore, we regarded biweekly CHOP as a standard treatment for NHL, including aggressive ATL, at the time this phase III study was designed.

To confirm whether the LSG15 regimen is a new standard for the treatment of aggressive ATL, JCOG conducted a phase III trial comparing modified (m)LSG15 with biweekly CHOP (cyclophosphamide, hydroxy-doxorubicin, vincristine [Oncovin], and prednisone), both supported with G-CSF and intrathecal prophylaxis [47]. mLSG19, a modified version of LSG19, consisted of eight cycles of CHOP (CPA 750 mg/m2, ADM 50 mg/m2, VCR 1.4 mg/m2 [maximum 2 mg] on day 1, and PSL 100 mg on days 1 to 5) every 2 weeks [46]; the modification was an intrathecal administration identical to that in mLSG15. mLSG15 in JCOG9801 was a modified version of LSG15 in JCOG9303, consisting of three regimens: VCAP (VCR 1 mg/m2 [maximum 2 mg], CPA 350 mg/m2, ADM 40 mg/m2, PSL 40 mg/m2) on day 1, AMP (ADM 30 mg/m2, MCNU 60 mg/m2, PSL 40 mg/m2) on day 8, and VECP (VDS 2.4 mg/m2 on day 15, ETP 100 mg/m2 on days 15 to 17, CBDCA 250 mg/m2 on day 15, PSL 40 mg/m2 on days 15 to 17), with the next course starting on day 29 (Figure 1). The modifications in mLSG15 as compared to LSG15 were as follows: (1) the total number of cycles was reduced from 7 to 6 because of progressive cytopenia, especially thrombocytopenia, after repeated LSG15 therapy; (2) cytarabine 40 mg was used with MTX 15 mg and PSL 10 mg for prophylactic intrathecal administration at the recovery phases of courses 1, 3, and 5, because of the high frequency of central nervous system relapse in the JCOG9303 study.

Untreated patients with aggressive ATL were assigned to receive either six courses of mLSG15 every 4 weeks or eight courses of biweekly CHOP. The primary endpoint was overall survival. A total of 118 patients were enrolled. The CR rate was higher in the mLSG15 arm than in the biweekly CHOP arm (40% versus 25%, resp.; P=.020). As shown in Table 1, the median survival time and OS rate at 3 years were 12.7 months and 24% in the mLSG15 arm and 10.9 months and 13% in the biweekly CHOP arm (two-sided P=.169; hazard ratio, 0.75; 95% confidence interval [CI], 0.50 to 1.13). A Cox regression analysis with performance status (PS 0 versus 1 versus 2–4) as the stratum for baseline hazard functions was performed to evaluate the effect on overall survival of age, B-symptoms, subtypes of ATL, LDH, BUN, bulky mass, and treatment arm. In this analysis, the hazard ratio and two-sided P value for the treatment arms were 0.62 (95% CI, 0.38 to 1.01) and .056, respectively. The difference between the crude analysis and this result was due to prognostic factors, such as PS 0 versus 1 and the presence or absence of bulky lesions, that were unbalanced between the treatment arms.
The progression-free survival rate at 1 year was 28% in the mLSG15 arm compared with 16% in the biweekly CHOP arm (two-sided P=.20).

Figure 1
Regimen of VCAP-AMP-VECP in mLSG15. VCAP: vincristine (VCR), cyclophosphamide (CPA), doxorubicin (ADM), prednisone (PSL); AMP: ADM, ranimustine (MCNU), PSL; VECP: vindesine (VDS), etoposide (ETP), carboplatin (CBDCA), and PSL. (*) MCNU and VDS are a nitrosourea and a vinca alkaloid, respectively, developed in Japan. A previous study on myeloma described carmustine (BCNU), another nitrosourea, at 1 mg/kg as equivalent to MCNU at 0.8 to 1.0 mg/kg. VDS at 2.4 mg/m2 can be substituted for VCR, another vinca alkaloid used in this regimen, at 1 mg/m2, with possibly less myelosuppression and more peripheral neuropathy, which can be managed by dose modification.

In mLSG15 versus mLSG19, the rates of grade 4 neutropenia, grade 4 thrombocytopenia, and grade 3/4 infection were 98% versus 83%, 74% versus 17%, and 32% versus 15%, respectively. Three treatment-related deaths (TRDs), two from sepsis and one from interstitial pneumonitis related to neutropenia, were reported in the mLSG15 arm. Two cases of myelodysplastic syndrome were reported, one in each arm.

The longer survival at 3 years and higher CR rate with mLSG15 compared with mLSG19 suggest that mLSG15 is a more effective regimen, at the expense of higher toxicity, providing the basis for future investigations in the treatment of ATL [47]. The superiority of VCAP-AMP-VECP in mLSG15 to biweekly CHOP in mLSG19 may be explained by the more prolonged, dose-dense schedule of therapy in addition to the 4 additional drugs. In addition, agents not affected by multidrug-resistance (MDR)-related genes, which are frequently expressed in ATL cells at onset, such as carboplatin and ranimustine, were incorporated [48]. Intrathecal prophylaxis, which was incorporated in both arms of the phase III study, should be considered for patients with aggressive ATL even in the absence of clinical symptoms, because a previous analysis revealed that more than half of relapses at new sites after chemotherapy occurred in the CNS [49]. However, the median survival time of 13 months with VCAP-AMP-VECP (LSG15/mLSG15) still compares unfavorably with other hematological malignancies, and further effort is required to improve the outcome.
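Purely as a structured restatement of the VCAP-AMP-VECP schedule described above and in Figure 1 (the field names are ours, and this is for readability, not a dosing reference):

```python
# mLSG15 (VCAP-AMP-VECP) course as described in the text; doses in mg/m2
# unless noted otherwise. Restated for readability only.
MLSG15_COURSE = {
    "VCAP (day 1)": {"VCR": 1, "VCR_max_mg": 2, "CPA": 350, "ADM": 40, "PSL": 40},
    "AMP (day 8)": {"ADM": 30, "MCNU": 60, "PSL": 40},
    "VECP (days 15-17)": {"VDS_day15": 2.4, "ETP_days15_17": 100,
                          "CBDCA_day15": 250, "PSL_days15_17": 40},
}
COURSE_INTERVAL_DAYS = 28   # the next course starts on day 29
TOTAL_COURSES = 6           # reduced from 7 in LSG15 because of cytopenia
```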
### 5.3. Interferon-Alpha and Zidovudine
A small phase II trial in Japan of IFN alpha against relapsed/refractory ATL showed a response rate (all PR) of 33% (8/24), including 5 out of 9 (56%) chronic-type ATL [50]. In 1995, Gill and associates reported that 11 of 19 patients with acute- or lymphoma-type ATL showed major responses (5 CR and 6 PR) to a combination of interferon-alpha (IFN) and zidovudine (AZT) [51]. The efficacy of this combination was also observed by Hermine and associates; major objective responses were obtained in all five patients with ATL (four with acute type and one with smoldering type) [52]. Although these results are encouraging, the OS of previously untreated patients with ATL was relatively short (4.8 months) compared with the survival of those in the chemotherapy trials conducted by the JCOG-LSG (7 to 8 months) [53]. After that, numerous small phase II studies using AZT and IFN have shown responses in ATL patients [54–56]. High doses of both agents are recommended: 6–9 million units of IFN in combination with daily divided AZT doses of 800–1000 mg/day. Therapeutic effect of AZT and IFN is not through a direct cytotoxic effect of these drugs on the leukemic cells [57]. Enduring AZT treatment of ATL cell lines results in inhibition of telomerase which reprograms the cells to p53-dependent senescence [58].Recently, the results of a “meta-analysis” on the use of IFN and AZT for ATL were reported [39]. A total of 100 patients received interferon-alpha and AZT as initial treatments. The ORR was 66%, with a 43% CR rate. In this worldwide retrospective analysis, the median survival time was 24 months and the 5-year survival rate was 50% for first-line IFN and AZT, versus 7 months and 20% for 84 patients who received first-line chemotherapy. The median survival time of patients with acute-type ATL treated with first-line IFN/AZT and chemotherapy was 12 and 9 months, respectively. Patients with lymphoma-type ATL did not benefit from this combination. In addition, first-line IFN/AZT therapy in chronic- and smoldering-type ATL resulted in a 100% survival rate at a median followup of 5 years. However, because of the retrospective nature of this meta-analysis based on medical records at each hospital, the decision process to select the therapeutic modality for each patient and the possibility of interference with OS by second-line treatment remains unknown. While the results for IFN/AZT in indolent ATL appear to be promising compared to those with watchful-waiting policy until disease progression, recently reported from Japan [38], the possibility of selection bias cannot be ruled out. A prospective multicenter phase III study evaluating the efficacy of IFN/AZT as compared to watchful-waiting for indolent ATL is to be initiated in Japan.Recently, a phase II study of the combination of arsenic trioxide, IFN, and AZT for chronic ATL revealed an impressive response rate and moderate toxicity [39]. Although the results appeared promising, the addition of arsenic trioxide to IFN/AZT, which might be sufficient for the treatment of chronic ATL as described above, caused more toxicity and should be evaluated with caution.
### 5.4. Allogeneic Hematopoietic Stem-Cell Transplantation (Allo-HSCT)
Allo-HSCT is now recommended for the treatment of young patients with aggressive ATL [31, 59]. Despite higher treatment-related mortality including graft versus host disease in a retrospective multicenter analysis of myeloablative allo-HSCT, the estimated 3-year OS of 33% is promising, possibly reflecting a graft versus ATL effect [60]. To evaluate the efficacy of allo-HSCT more accurately, especially in view of a comparison with intensive chemotherapy, a prospective multicenter phase II study of LSG15 chemotherapy followed by allo-HSCT is ongoing (JCOG0907).Feasibility studies of allo-HSCT with reduced intensity conditioning for relatively aged patients with ATL also revealed promising results, and subsequent multicenter trials are being conducted in Japan [61, 62]. The minimal residual disease after allo-HSCT detected as HTLV-1 proviral load was much less than that after chemotherapy or AZT/IFN therapy, suggesting the presence of a graft-versus-ATL effect as well as graft-versus-HTLV-1 activity [61].It remains unclear which type of allo-HSCT (myeloablative or reduced intensity conditioning) is more suitable for the treatment of ATL. Furthermore, selection criteria with respect to responses to previous treatments, sources of stem cells, and HTLV-1 viral status of the donor remain to be determined. Recently, a patient in whom ATL derived from donor cells developed four months after transplantation of stem cells from a sibling with HTLV-I was reported [63].However, several other retrospective studies as well as those mentioned above on allo-HSCT showed a promising long-term survival rate of 20 to 40% with an apparent plateau phase despite significant treatment-related mortality.
### 5.5. Supportive Care
The prevention of opportunistic infections is essential in the management of ATL patients, nearly half of whom develop severe infections during chemotherapy. Some patients with indolent ATL develop infections during watchful waiting.Sulfamethoxazole/trimethoprim and antifungal agents have been recommended as prophylaxes for Pneumocystis jiroveci pneumonia and fungal infections, respectively, in the JCOG trials [43–45]. While cytomegalovirus infections are not infrequent among ATL patients, ganciclovir is not usually recommended as a prophylaxis [31]. In addition, in patients not receiving chemotherapy or allo-HSCT, antifungal prophylaxis may not be critical. An antistrongyloides agent, such as ivermectin or albendazole, should be considered to avoid systemic infections in patients with a history of exposure to the parasite in the tropics. Treatment with steroids and proton pump inhibitors may precipitate a fulminant strongyloides infestation and warrants testing before these agents are used in endemic areas [31]. Hypercalcemia associated with aggressive ATL can be corrected using chemotherapy in combination with hydration and bisphosphonate even when the performance status of the patient is poor.
### 5.6. Response Criteria
The complex nature of ATL, often with both leukemic and lymphomatous components, makes response assessment difficult. A modification of the JCOG response criteria was suggested by ATL consensus-meeting reflecting those for CLL and NHL which had been published later [31, 64, 65]. Recently, revised response criteria were proposed for lymphoma. New guidelines were presented incorporating positron emission tomography (PET), especially for the assessment of CR. It is well known and described in the criteria that several kinds of lymphoma including peripheral T-cell lymphomas were variably [18F] fluorodeoxyglucose (FDG) avid [66]. Meanwhile, PET or PET/CT is recommended for evaluations of response when the tumorous lesions are FDG-avid at diagnosis [31].
### 5.7. New Agents for ATL
#### 5.7.1. Purine Analogs
Several purine analogs have been evaluated for ATL. Among them, pentostatin (deoxycoformycin) has been most extensively evaluated as a single agent and in combination as described above [43, 46].Other purine analogs clinically studied for ATL are fludarabine and cladribine. Fludarabine is among standard treatments for B-chronic lymphocytic leukemia and other lymphoid malignancies. In a phase I study of fludarabine in Japan, 5 ATL patients and 10 B-CLL patients with refractory or relapsed-disease were enrolled [67]. Six grade 3 nonhematological toxicities were only observed in the ATL patients. PR was achieved only in one of the 5 ATL patients and the duration was short. Cladribine is among standard treatments for hairy cell leukemia and other lymphoid malignancies. A phase II study of cladribine for relapsed/refractory aggressive-ATL in 15 patients revealed only one PR [68].Forodesine, a purine nucleotide phosphorylase (PNP) inhibitor, is among purine nucleotide analogs. PNP is an enzyme in the purine salvage pathway that phosphorolysis 2′deoxyguanosine (dGuo). Purine nucleoside phosphorylase (PNP) deficiency in humans results in a severe combined immunodeficiency phenotype and the selective depletion of T cells associated with high plasma deoxyguanosine (dGuo) and high intracellular deoxyguanosine triphosphate levels in those cells with high deoxynucleoside kinase activity such as T cells, leading to cell death. Inhibitors of PNP, such as forodesine, mimic SCID in vitro and in vivo, suggesting a new targeting agent specific for T cell malignancies [69]. A dose escalating phase I study of forodesine is being conducted in Japan for T cell malignancies including ATL.
#### 5.7.2. Histone Deacetylase Inhibitor
Gene expression governed by epigenetic changes is crucial to the pathogenesis of cancer. Histone deacetylases (HDACs) are enzymes involved in the remodeling of chromatin and play a key role in the epigenetic regulation of gene expression. Deacetylase inhibitors (DACis) induce the hyperacetylation of nonhistone proteins as well as nucleosomal histones resulting in the expression of repressed genes involved in growth arrest, terminal differentiation, and/or apoptosis among cancer cells. Several classes of HDACi have been found to have potent anticancer effects in preclinical studies. HDACIs such as vorinostat (suberoylanilide hydroxamic acid: SAHA), romidepsin (depsipeptide), and panobinostat (LBH589) have also shown promise in preclinical and/or clinical studies against T-cell malignancies including ATL [70, 71]. Vorinostat and romidepsin have been approved for cutaneous T-cell lymphoma (CTCL) by the Food and Drug Administration in the USA. LBH589 has a significant anti-ATL effect in vitro and in mice [71]. However, a phase II study for CTCL and indolent ATL in Japan was terminated because of severe infections associated with the shrinkage of skin tumors and formation of ulcers in patients with ATL. Further study is required to evaluate the efficacy of HDACIs for PTCL/CTCL including ATL.
#### 5.7.3. Monoclonal Antibodies and Toxin Fusion Proteins
Monoclonal antibodies (MoAb) and toxin fusion proteins targeting several molecules expressed on the surface of ATL cells and other lymphoid malignant cells, such as CD25, CD2, CD52, and chemokine receptor 4 (CCR4), have shown promise in recent clinical trials.Because most ATL cells express the alpha-chain of IL-2R (CD25), Waldmann et al. treated patients with ATL using monoclonal antibodies to CD25 [72]. Six (32%) of 19 patients treated with anti-Tac showed objective responses lasting from 9 weeks to longer than 3 years. One impediment to this approach is the quantity of soluble IL-2R shed by the tumor cells into the circulation. Another strategy for targeting IL-2R is conjugation with an immunotoxin (Pseudomonas exotoxin) or radioisotope (yttrium-90). Waldmann et al. developed a stable conjugate of anti-Tac with yttrium-90. Among the 16 patients with ATL who received 5- to 15-mCi doses, 9 (56%) showed objective responses. The response lasted longer than that obtained with unconjugated anti-Tac antibody [73, 74].LMB-2, composed of the anti-CD25 murine MoAb fused to the truncated form of Pseudomonas toxin, was cytotoxic to CD25-expressing cells including ATL cells in vitro and in mice. Phase I/II trials of this agent showed some effect against hairy cell leukemia, CTCL, and ATL [6]. Six of 35 patients in the phase I study had significant levels of neutralizing antibodies after the first cycle. This drug deserves further clinical trials including in combination with cytotoxic agents.Denileukin diftitox (DD; DAB(389)-interleukin-2 [IL-2]), an interleukin-2-diphtheria toxin fusion protein targeting IL-2 receptor-expressing malignant T lymphocytes, shows efficacy as a single agent against CTCL and peripheral T-cell lymphoma (PTCL) [75]. Also the combination of this agent with multiagent chemotherapy, CHOP, was promising for PTCL [76]. ATL cells frequently and highly express CD25 as described above, and several ATL cases successfully treated with this agent have been reported [77].CD52 antigen is present on normal and pathologic B and T cells. In PTCL, however, CD52 expression varies among patients, with an overall expression rate lower than 50% in one study but not in another [78, 79]. ATL cells frequently express CD52 as compared to other PTCLs. The humanized anti-CD52 monoclonal antibody alemtuzumab is active against CLL and PTCL as a single agent. The combination of alemtuzumab with a standard-dose cyclophosphamide/doxorubicin/vincristine/prednisone (CHOP) regimen as a first-line treatment for 24 patients with PTCL showed promising results with CR in 17 (71%) patients; 1 had a partial remission, with an overall median duration of response of 11 months and was associated with mostly manageable infections but including CMV reactivation [80]. Major infections were Jacob-Creutzfeldt virus reactivation, pulmonary invasive aspergillosis, and staphylococcus sepsis.ATL cells express CD52, the target of alemtuzumab, which was active in a preclinical model of ATL and toxic to p53-deficient cells, and several ATL cases successfully treated with this agent have been reported [81–83].Siplizumab is a humanized MoAb targeting CD2 and showed efficacy in a murine ATL model. P1 dose-escalating study of this agent in 22 patients with several kinds of T/NK-cell malignancy revealed 6 responses (2 CR in LGL leukemia, 3 PR in ATL, and 1 PR in CTCL). However, 4 patients developed EBV-associated LPD [84]. 
The broad specificity of this agent may eliminate both CD4- and CD8-positive T cells as well as NK cells without effecting B cells and predispose individuals to the development of EBV lymphoproliferative syndrome.CC chemokine receptor 4 (CCR4) is expressed on normal T helper type 27 and regulatory T (Treg) cells and on certain types of T-cell neoplasms [20, 21, 35]. KW-0761, a next generation humanized anti-CCR4 mAb, with a defucosylated Fc region, exerts strong antibody-dependent cellular cytotoxicity (ADCC) due to increased binding to the Fcγ receptor on effecter cells [85]. A phase I study of dose escalation with 4 weekly intravenous infusions of KW-0761 in 16 patients with relapsed CCR4-positive T cell malignancy (13 ATL and 3 PTCL) revealed that one patient, at the maximum dose (1.0 mg/kg), developed grade (G) 3 dose-limiting toxic effects, namely, skin rashes and febrile neutropenia and G4 neutropenia [86]. Other treatment-related G3-4 toxic effects were lymphopenia (n=10), neutropenia (n=3), leukopenia (n=2), herpes zoster (n=1), and acute infusion reaction/cytokine release syndrome (n=1). Neither the frequency nor severity of these effects increased with dose escalation or the plasma concentration of the agent. The maximum tolerated dose was not reached. No patients had detectable levels of anti-KW-0761 antibody. Five patients (31%; 95% CI, 11% to 59%) achieved objective responses: 2 complete (0.1; 1.0 mg/kg) and 3 partial (0.01; 2 at 1.0 mg/kg) responses. Three out of 13 patients with ATL (31%) achieved a response (2 CR and 1 PR). Responses in each lesion were diverse, that is, good in PB (6 CR and 1 PR/7 evaluable cases), intermediate in skin (3 CR and 1 PR/8 evaluable cases), and poor in LN (1 CR and 2 PR/11 evaluable cases). KW-0761 was well tolerated at all the doses tested, demonstrating potential efficacy against relapsed CCR4-positive ATL or PTCL. Recently, results of subsequent phase II studies at the 1.0 mg/kg in relapsed ATL, showing 50% of response rate with acceptable toxicity profiles, reported [87]. A phase II trial of single agent KW-0761 at the 1.0 mg/kg in relapsed PTCL/CTCL and a phase II trial of VCAP-AMP-VECP combined with KW-0761 for untreated aggressive ATL are ongoing.
#### 5.7.4. Other Agents
A proteasome inhibitor, bortezomib (Velcade), and an immunomodulatory agent, lenalidomide (Revlimid), both have potent preclinical and clinical activity in T-cell malignancies including ATL, are now under clinical trials for relapsed ATL in Japan [88–90]. Other potential drugs for ATL include pralatrexate (Folotyn), a new agent with clinical activity in T-cell malignancies including ATL [91–93]. The agent is a novel antifolate with improved membrane transport and polyglutamylation in tumor cells and high affinity for the reduced folate carrier (RFC) highly expressed in malignant cells and has been approved by FDA recently for T-cell lymphoma including ATL.
### 5.8. Prevention
Two steps should be considered for the prevention of HTLV-1-associated ATL. The first is the prevention of HTLV-1 infections. This has been achieved in some endemic areas in Japan by screening for HTLV-1 among blood donors and asking mothers who are carriers to refrain from breast feeding. For several decades, before initiation of the interventions, the prevalence of HTLV-1 has declined drastically in endemic areas in Japan, probably because of birth cohort effects [94]. The elimination of HTLV-1 in endemic areas is now considered possible due to the natural decrease in the prevalence as well as the intervention of transmission through blood transfusion and breast feeding. The second step is the prevention of ATL among HTLV-1 carriers. This has not been achieved partly because only about 5% of HTLV-1 carriers develop the disease in their life time although several risk factors have been identified by a cohort study of HTLV-1 carriers (Joint Study of Predisposing Factors for ATL Development) [95]. Also, no agent has been found to be effective in preventing the development of ATL among HTLV-1 carriers.
## 5.1. Watchful Waiting
At present, no standard treatment for ATL exists. Patients with the smoldering or favorable chronic type, who may survive one or more years without chemotherapy, should therefore be observed, with therapy other than topical treatment for cutaneous lesions delayed until progression of the disease [31]. However, it was recently found that the long-term prognosis of such patients is poorer than expected. In a long-term follow-up study of 78 patients with indolent ATL (favorable chronic or smoldering type) managed with a policy of watchful waiting until disease progression at a single institution, the median survival time was 5.3 years, with no plateau in the survival curve: twelve patients remained alive for more than 10 years, 32 progressed to acute ATL, and 51 died [38]. Recently, a meta-analysis reported a striking benefit of early intervention in indolent ATL with IFN and an antiretroviral agent [39]. This modality should be evaluated in larger clinical trials to establish appropriate management practices for indolent ATL.
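The shape of the survival curve matters as much as the median here: "no plateau" means the curve keeps falling throughout follow-up. Below is a minimal Kaplan-Meier sketch of such a curve; the follow-up times are synthetic stand-ins, not the patient data from [38], and it assumes the `numpy` and `lifelines` packages are available.

```python
# Illustrative only: hypothetical durations, NOT the patient data from [38].
# Requires: pip install lifelines numpy
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
# 78 hypothetical indolent-ATL follow-up times (years); an exponential with
# median ~5.3 years yields a curve that keeps falling, i.e., no plateau.
durations = rng.exponential(scale=5.3 / np.log(2), size=78)
observed = rng.random(78) < 0.65  # ~65% events, echoing 51/78 deaths in [38]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=observed, label="indolent ATL (hypothetical)")
print("estimated median survival (years):", kmf.median_survival_time_)
# kmf.plot_survival_function()  # uncomment if matplotlib is installed
```

With real follow-up data, the same fitting code would reproduce the reported 5.3-year median and show the absence of a plateau directly.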
## 5.2. Chemotherapy
Since 1978, chemotherapy trials have been conducted consecutively for patients newly diagnosed with ATL by JCOG's Lymphoma Study Group (LSG) (Table 1) [40–45]. Between 1981 and 1983, JCOG conducted a phase III trial (JCOG8101) to evaluate LSG1-VEPA (vincristine, cyclophosphamide, prednisone, and doxorubicin) versus LSG2-VEPA-M (VEPA plus methotrexate (MTX)) for advanced non-Hodgkin lymphoma (NHL), including ATL [40, 41]. The complete response (CR) rate of LSG2-VEPA-M for ATL (37%) was higher than that of LSG1-VEPA (17%; P=.09). However, the CR rate was significantly lower for ATL than for B-cell NHL and peripheral T-cell lymphoma (PTCL) other than ATL (P<.001). The median survival time of the 54 patients with ATL was 6 months, and the estimated 4-year survival rate was 8%.

Table 1
Results of sequential chemotherapeutic trials of untreated patients with ATL (JCOG-LSG).
| | JCOG7801 | JCOG8101 | JCOG8701 | JCOG9109 | JCOG9303 | JCOG9801 |
|---|---|---|---|---|---|---|
| Regimen | LSG1 | LSG1/LSG2 | LSG4 | LSG11 | LSG15 | mLSG15/mLSG19 |
| Pts. No. | 18 | 54 | 43 | 62 | 96 | 57/61 |
| CR (%) | 16.7 | 27.8 | 41.9 | 28.3 | 35.5 | 40.4/24.6 |
| CR + PR (%) | | | | 51.6 | 80.6 | 72.0/65.6 |
| MST (months) | 7.5 | | 8.0 | 7.4 | 13.0 | 12.7/10.9 |
| 2-yr survival (%) | | | | 17.0 | 31.3 | |
| 3-yr survival (%) | | 10.0 | | | 21.9 | 23.6/12.7 |
| 4-yr survival (%) | | 8.0 | 11.6 | | | |

CR: complete remission, PR: partial remission, MST: median survival time. For JCOG9801, values are given as mLSG15/mLSG19; blank cells were not reported.

In 1987, JCOG initiated a multicenter phase II study (JCOG8701) of a multiagent combination chemotherapy (LSG4) for advanced aggressive NHL, including ATL. LSG4 consisted of three regimens: (1) VEPA-B (VEPA plus bleomycin), (2) M-FEPA (methotrexate, vindesine, cyclophosphamide, prednisone, and doxorubicin), and (3) VEPP-B (vincristine, etoposide, procarbazine, prednisone, and bleomycin) [42]. The CR rate for ATL patients improved from 28% (JCOG8101) to 43% (JCOG8701); however, the CR rate remained significantly lower in ATL than in B-cell NHL and PTCL (P<.01). Patients with ATL still showed a poor prognosis, with a median survival time of 8 months and a 4-year survival rate of 12%.

The disappointing results with conventional chemotherapies led to a search for new active agents. Multicenter phase I and II studies of pentostatin (2′-deoxycoformycin, an inhibitor of adenosine deaminase) were conducted against ATL in Japan [43]. The phase II study revealed a response rate of 32% (10 of 31) in cases of relapsed or refractory ATL (2 CRs and 8 PRs).

These encouraging results prompted the investigators to conduct a phase II trial (JCOG9109) with a pentostatin-containing combination (LSG11) as the initial chemotherapy [44]. Patients with aggressive ATL (acute, lymphoma, or unfavorable chronic type) were eligible for this study. Unfavorable chronic-type ATL, defined as having at least 1 of 3 unfavorable prognostic factors (low serum albumin level, high LDH level, or high BUN), has an unfavorable prognosis similar to that of acute- and lymphoma-type ATL. A total of 62 untreated patients with aggressive ATL (34 acute, 21 lymphoma, and 7 unfavorable chronic type) were enrolled. A regimen of 1 mg/m2 vincristine on days 1 and 8, 40 mg/m2 doxorubicin on day 1, 100 mg/m2 etoposide on days 1 through 3, 40 mg/m2 prednisolone (PSL) on days 1 and 2, and 5 mg/m2 pentostatin on days 8, 15, and 22 was administered every 28 days for 10 cycles. Among the 61 patients evaluable for toxicity, four (7%) died of infections: two from septicemia and two from cytomegalovirus pneumonia. Among the 60 eligible patients, there were 17 CRs (28%) and 14 partial responses (PRs) (overall response rate [ORR] = 52%). The median survival time was 7.4 months, and the estimated 2-year survival rate was 17%. The prognosis of patients with ATL remained poor even when they were treated with a pentostatin-containing combination chemotherapy.

In 1994, JCOG initiated a phase II trial (JCOG9303) of an eight-drug regimen (LSG15) consisting of vincristine, cyclophosphamide, doxorubicin, prednisone, ranimustine, vindesine, etoposide, and carboplatin for untreated ATL [45]. Dose intensification was attempted with the prophylactic use of granulocyte colony-stimulating factor (G-CSF). In addition, non-cross-resistant agents, such as ranimustine and carboplatin, and intrathecal prophylaxis with MTX and PSL were incorporated. Ninety-six previously untreated patients with aggressive ATL were enrolled: 58 acute, 28 lymphoma, and 10 unfavorable chronic types. Approximately 81% of the 93 eligible patients responded (75/93), with 33 patients obtaining a CR (35%).
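As a quick arithmetic check on the response figures quoted above and in Table 1, the rates and their exact (Clopper-Pearson) 95% confidence intervals can be recomputed from the raw counts stated in the text for JCOG9109 and JCOG9303. This is only a verification sketch and assumes `scipy` is installed.

```python
# Recompute response rates and exact 95% CIs from counts quoted in the text.
from scipy.stats import beta

def clopper_pearson(successes: int, n: int, alpha: float = 0.05):
    """Exact binomial (Clopper-Pearson) confidence interval."""
    lo = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lo, hi

for label, k, n in [
    ("JCOG9109 CR  (17/60)", 17, 60),   # text: 28%
    ("JCOG9303 CR  (33/93)", 33, 93),   # text: 35%
    ("JCOG9303 ORR (75/93)", 75, 93),   # text: ~81%
]:
    lo, hi = clopper_pearson(k, n)
    print(f"{label}: {k/n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```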
The overall survival rate of the 93 patients at 2 years was estimated to be 31%, with a median survival time of 13 months. Grade 4 neutropenia and thrombocytopenia were observed in 65% and 53% of the patients, respectively, whereas grade 4 nonhematologic toxicity was observed in only one patient.

Dose intensification of CHOP with the prophylactic use of G-CSF was expected to improve survival among patients with aggressive NHL, and our randomized phase II study (JCOG9505) comparing CHOP-14 (LSG19) and dose-escalated CHOP (LSG20) for aggressive NHL excluding ATL showed biweekly CHOP to be the more promising regimen [46]. Therefore, we regarded biweekly CHOP as a standard treatment for NHL, including aggressive ATL, at the time this phase III study was designed.

To confirm whether the LSG15 regimen is a new standard for the treatment of aggressive ATL, JCOG conducted a phase III trial comparing modified (m)LSG15 with biweekly CHOP (cyclophosphamide, hydroxy-doxorubicin, vincristine [Oncovin], and prednisone), both supported by G-CSF and intrathecal prophylaxis [47]. mLSG19, a modified version of LSG19, consisted of eight cycles of CHOP (CPA 750 mg/m2, ADM 50 mg/m2, and VCR 1.4 mg/m2 [maximum 2 mg] on day 1, and PSL 100 mg on days 1 to 5) every 2 weeks [46]; the modification was intrathecal administration identical to that in mLSG15. mLSG15 in JCOG9801 was a modified version of LSG15 in JCOG9303, consisting of three regimens: VCAP (VCR 1 mg/m2 [maximum 2 mg], CPA 350 mg/m2, ADM 40 mg/m2, and PSL 40 mg/m2) on day 1; AMP (ADM 30 mg/m2, MCNU 60 mg/m2, and PSL 40 mg/m2) on day 8; and VECP (VDS 2.4 mg/m2 on day 15, ETP 100 mg/m2 on days 15 to 17, CBDCA 250 mg/m2 on day 15, and PSL 40 mg/m2 on days 15 to 17), with the next course started on day 29 (Figure 1). The modifications in mLSG15 as compared with LSG15 were as follows: (1) the total number of cycles was reduced from 7 to 6 because of progressive cytopenia, especially thrombocytopenia, after repeated courses of LSG15; and (2) cytarabine 40 mg was added to MTX 15 mg and PSL 10 mg for prophylactic intrathecal administration at the recovery phases of courses 1, 3, and 5 because of the high frequency of central nervous system relapse in the JCOG9303 study.

Untreated patients with aggressive ATL were assigned to receive either six courses of mLSG15 every 4 weeks or eight courses of biweekly CHOP. The primary endpoint was overall survival. A total of 118 patients were enrolled. The CR rate was higher in the mLSG15 arm than in the biweekly CHOP arm (40% versus 25%, resp.; P=.020). As shown in Table 1, the median survival time and OS rate at 3 years were 12.7 months and 24% in the mLSG15 arm versus 10.9 months and 13% in the biweekly CHOP arm (two-sided P=.169; hazard ratio, 0.75; 95% confidence interval [CI], 0.50 to 1.13). A Cox regression analysis with performance status (PS 0 versus 1 versus 2–4) as the stratum for the baseline hazard functions was performed to evaluate the effects on overall survival of age, B symptoms, ATL subtype, LDH, BUN, bulky mass, and treatment arm. In this analysis, the hazard ratio and two-sided P value for the treatment arms were 0.62 (95% CI, 0.38 to 1.01) and .056, respectively. The difference between the crude analysis and this result was attributable to an imbalance in prognostic factors, such as PS (0 versus 1) and the presence or absence of bulky lesions, between the treatment arms.
The progression-free survival rate at 1 year was 28% in the mLSG15 arm compared with 16% in the biweekly CHOP arm (two-sided P=.20).

Figure 1
Regimen of VCAP-AMP-VECP in mLSG15. VCAP: vincristine (VCR), cyclophosphamide (CPA), doxorubicin (ADM), prednisone (PSL); AMP: ADM, ranimustine (MCNU), PSL; VECP: vindesine (VDS), etoposide (ETP), carboplatin (CBDCA), and PSL. *) MCNU and VDS are a nitrosourea and a vinca alkaloid, respectively, developed in Japan. A previous study on myeloma reported that carmustine (BCNU), another nitrosourea, at 1 mg/kg is equivalent to MCNU at 0.8 to 1.0 mg/kg. VDS at 2.4 mg/m2 can be substituted for VCR, another vinca alkaloid used in this regimen, at 1 mg/m2, with possibly less myelosuppression and more peripheral neuropathy, which can be managed by dose modification.

In mLSG15 versus mLSG19, the rates of grade 4 neutropenia, grade 4 thrombocytopenia, and grade 3/4 infection were 98% versus 83%, 74% versus 17%, and 32% versus 15%, respectively. Three treatment-related deaths (TRDs), two from sepsis and one from interstitial pneumonitis related to neutropenia, were reported in the mLSG15 arm. Two cases of myelodysplastic syndrome were reported, one in each arm.

The longer survival at 3 years and the higher CR rate with mLSG15 compared with mLSG19 suggest that mLSG15 is the more effective regimen, at the expense of higher toxicity, providing the basis for future investigations in the treatment of ATL [47]. The superiority of VCAP-AMP-VECP in mLSG15 over biweekly CHOP in mLSG19 may be explained by the more prolonged, dose-dense schedule of therapy and the inclusion of four additional drugs. In addition, mLSG15 incorporated agents not affected by multidrug-resistance (MDR) related genes, such as carboplatin and ranimustine; these genes are frequently expressed in ATL cells at onset [48]. Intrathecal prophylaxis, which was incorporated in both arms of the phase III study, should be considered for patients with aggressive ATL even in the absence of clinical symptoms, because a previous analysis revealed that more than half of relapses at new sites after chemotherapy occurred in the CNS [49]. However, the median survival time of 13 months with VCAP-AMP-VECP (LSG15/mLSG15) still compares unfavorably with that of other hematological malignancies, and further efforts to improve the outcome are required.
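Because each mLSG15 course packs three sub-regimens into a 28-day cycle, the calendar described above and in Figure 1 (VCAP on day 1, AMP on day 8, VECP on days 15–17, next course from day 29) is easy to misread. The sketch below simply restates those administration days programmatically; it is an illustration of the schedule, not a protocol implementation, and the names `MLSG15_COURSE` and `administration_days` are our own.

```python
# Restates the mLSG15 course calendar described in the text (days only; doses omitted).
MLSG15_COURSE = {
    "VCAP": [1],           # VCR, CPA, ADM, PSL on day 1
    "AMP":  [8],           # ADM, MCNU, PSL on day 8
    "VECP": [15, 16, 17],  # VDS and CBDCA day 15 only; ETP and PSL days 15-17
}
COURSE_LENGTH_DAYS = 28  # the next course starts on day 29
N_COURSES = 6            # reduced from 7 in LSG15 because of cytopenia

def administration_days(n_courses: int = N_COURSES):
    """Absolute study days on which each sub-regimen is given."""
    calendar = []
    for course in range(n_courses):
        offset = course * COURSE_LENGTH_DAYS
        for regimen, days in MLSG15_COURSE.items():
            calendar.extend((offset + d, course + 1, regimen) for d in days)
    return sorted(calendar)

for day, course, regimen in administration_days(2):  # first two courses
    print(f"day {day:3d}  course {course}  {regimen}")
```

Running it confirms, for example, that course 2 begins with VCAP on study day 29, as stated in the protocol description.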
## 5.3. Interferon-Alpha and Zidovudine
A small phase II trial in Japan of IFN-alpha for relapsed/refractory ATL showed a response rate (all PRs) of 33% (8/24), including 5 of 9 patients (56%) with chronic-type ATL [50]. In 1995, Gill and associates reported that 11 of 19 patients with acute- or lymphoma-type ATL showed major responses (5 CRs and 6 PRs) to a combination of interferon-alpha (IFN) and zidovudine (AZT) [51]. The efficacy of this combination was also observed by Hermine and associates: major objective responses were obtained in all five patients with ATL (four with acute type and one with smoldering type) [52]. Although these results were encouraging, the OS of previously untreated patients with ATL was relatively short (4.8 months) compared with that in the chemotherapy trials conducted by the JCOG-LSG (7 to 8 months) [53]. Since then, numerous small phase II studies of AZT and IFN have shown responses in ATL patients [54–56]. High doses of both agents are recommended: 6–9 million units of IFN in combination with daily divided AZT doses of 800–1000 mg. The therapeutic effect of AZT and IFN is not mediated by a direct cytotoxic effect of these drugs on the leukemic cells [57]. Long-term AZT treatment of ATL cell lines results in the inhibition of telomerase, which reprograms the cells toward p53-dependent senescence [58].

Recently, the results of a “meta-analysis” on the use of IFN and AZT for ATL were reported [39]. A total of 100 patients received interferon-alpha and AZT as initial treatment. The ORR was 66%, with a 43% CR rate. In this worldwide retrospective analysis, the median survival time was 24 months and the 5-year survival rate was 50% for first-line IFN and AZT, versus 7 months and 20% for the 84 patients who received first-line chemotherapy. The median survival times of patients with acute-type ATL treated with first-line IFN/AZT and with chemotherapy were 12 and 9 months, respectively. Patients with lymphoma-type ATL did not benefit from this combination. In addition, first-line IFN/AZT therapy in chronic- and smoldering-type ATL resulted in a 100% survival rate at a median follow-up of 5 years. However, because of the retrospective nature of this meta-analysis, based on medical records at each hospital, the process by which the therapeutic modality was selected for each patient and the possible influence of second-line treatment on OS remain unknown. While the results for IFN/AZT in indolent ATL appear promising compared with those of the watchful-waiting policy until disease progression recently reported from Japan [38], the possibility of selection bias cannot be ruled out. A prospective multicenter phase III study evaluating the efficacy of IFN/AZT as compared with watchful waiting for indolent ATL is to be initiated in Japan.

Recently, a phase II study of the combination of arsenic trioxide, IFN, and AZT for chronic ATL revealed an impressive response rate and moderate toxicity [39]. Although the results appeared promising, the addition of arsenic trioxide to IFN/AZT, which by itself might be sufficient for the treatment of chronic ATL as described above, caused more toxicity and should be evaluated with caution.
## 5.4. Allogeneic Hematopoietic Stem-Cell Transplantation (Allo-HSCT)
Allo-HSCT is now recommended for the treatment of young patients with aggressive ATL [31, 59]. Despite the high treatment-related mortality, including graft-versus-host disease, in a retrospective multicenter analysis of myeloablative allo-HSCT, the estimated 3-year OS of 33% is promising, possibly reflecting a graft-versus-ATL effect [60]. To evaluate the efficacy of allo-HSCT more accurately, especially in comparison with intensive chemotherapy, a prospective multicenter phase II study of LSG15 chemotherapy followed by allo-HSCT is ongoing (JCOG0907).

Feasibility studies of allo-HSCT with reduced-intensity conditioning for relatively elderly patients with ATL have also revealed promising results, and subsequent multicenter trials are being conducted in Japan [61, 62]. The minimal residual disease after allo-HSCT, detected as HTLV-1 proviral load, was much lower than that after chemotherapy or AZT/IFN therapy, suggesting the presence of a graft-versus-ATL effect as well as graft-versus-HTLV-1 activity [61].

It remains unclear which type of allo-HSCT (myeloablative or reduced-intensity conditioning) is more suitable for the treatment of ATL. Furthermore, selection criteria with respect to responses to previous treatments, sources of stem cells, and the HTLV-1 status of the donor remain to be determined. Recently, a case was reported in which ATL derived from donor cells developed four months after transplantation of stem cells from an HTLV-1-positive sibling [63]. Nevertheless, several other retrospective studies, as well as those mentioned above, showed that allo-HSCT achieves a promising long-term survival rate of 20 to 40% with an apparent plateau phase, despite significant treatment-related mortality.
## 5.5. Supportive Care
The prevention of opportunistic infections is essential in the management of ATL patients, nearly half of whom develop severe infections during chemotherapy; some patients with indolent ATL develop infections even during watchful waiting. Sulfamethoxazole/trimethoprim and antifungal agents have been recommended in the JCOG trials as prophylaxes for Pneumocystis jiroveci pneumonia and fungal infections, respectively [43–45]. While cytomegalovirus infections are not infrequent among ATL patients, ganciclovir is not usually recommended as prophylaxis [31]. In addition, in patients not receiving chemotherapy or allo-HSCT, antifungal prophylaxis may not be critical. An antistrongyloides agent, such as ivermectin or albendazole, should be considered to avoid systemic infection in patients with a history of exposure to the parasite in the tropics. Treatment with steroids and proton pump inhibitors may precipitate a fulminant strongyloides infestation, which warrants testing before these agents are used in endemic areas [31]. Hypercalcemia associated with aggressive ATL can be corrected with chemotherapy in combination with hydration and bisphosphonates, even when the performance status of the patient is poor.
## 5.6. Response Criteria
The complex nature of ATL, often with both leukemic and lymphomatous components, makes response assessment difficult. A modification of the JCOG response criteria, reflecting the response criteria for CLL and NHL that had been published subsequently, was suggested at an ATL consensus meeting [31, 64, 65]. Recently, revised response criteria incorporating positron emission tomography (PET), especially for the assessment of CR, were proposed for lymphoma; as described in these criteria, several kinds of lymphoma, including peripheral T-cell lymphomas, are only variably [18F]fluorodeoxyglucose (FDG) avid [66]. Accordingly, PET or PET/CT is recommended for the evaluation of response when the tumorous lesions are FDG avid at diagnosis [31].
## 5.7. New Agents for ATL
### 5.7.1. Purine Analogs
Several purine analogs have been evaluated for ATL. Among them, pentostatin (deoxycoformycin) has been most extensively evaluated, as a single agent and in combination, as described above [43, 46]. Other purine analogs studied clinically for ATL are fludarabine and cladribine. Fludarabine is among the standard treatments for B-cell chronic lymphocytic leukemia (B-CLL) and other lymphoid malignancies. In a phase I study of fludarabine in Japan, 5 ATL patients and 10 B-CLL patients with refractory or relapsed disease were enrolled [67]. Six grade 3 nonhematological toxicities were observed, all in the ATL patients. A PR was achieved in only one of the 5 ATL patients, and its duration was short. Cladribine is among the standard treatments for hairy cell leukemia and other lymphoid malignancies. A phase II study of cladribine for relapsed/refractory aggressive ATL in 15 patients revealed only one PR [68].

Forodesine, an inhibitor of purine nucleoside phosphorylase (PNP), is another purine analog. PNP is an enzyme in the purine salvage pathway that catalyzes the phosphorolysis of 2′-deoxyguanosine (dGuo). PNP deficiency in humans results in a severe combined immunodeficiency (SCID) phenotype with selective depletion of T cells: plasma dGuo rises, and deoxyguanosine triphosphate accumulates in cells with high deoxynucleoside kinase activity, such as T cells, leading to cell death. Inhibitors of PNP, such as forodesine, mimic this effect in vitro and in vivo, suggesting a new targeted agent specific for T-cell malignancies [69]. A dose-escalating phase I study of forodesine is being conducted in Japan for T-cell malignancies including ATL.
### 5.7.2. Histone Deacetylase Inhibitor
Gene expression governed by epigenetic changes is crucial to the pathogenesis of cancer. Histone deacetylases (HDACs) are enzymes involved in the remodeling of chromatin and play a key role in the epigenetic regulation of gene expression. HDAC inhibitors (HDACis) induce the hyperacetylation of nonhistone proteins as well as nucleosomal histones, resulting in the expression of repressed genes involved in growth arrest, terminal differentiation, and/or apoptosis of cancer cells. Several classes of HDACis have been found to have potent anticancer effects in preclinical studies. HDACis such as vorinostat (suberoylanilide hydroxamic acid, SAHA), romidepsin (depsipeptide), and panobinostat (LBH589) have shown promise in preclinical and/or clinical studies against T-cell malignancies including ATL [70, 71]. Vorinostat and romidepsin have been approved for cutaneous T-cell lymphoma (CTCL) by the Food and Drug Administration (FDA) in the USA. LBH589 has a significant anti-ATL effect in vitro and in mice [71]. However, a phase II study for CTCL and indolent ATL in Japan was terminated because of severe infections associated with the shrinkage of skin tumors and the formation of ulcers in patients with ATL. Further study is required to evaluate the efficacy of HDACis for PTCL/CTCL including ATL.
### 5.7.3. Monoclonal Antibodies and Toxin Fusion Proteins
Monoclonal antibodies (MoAbs) and toxin fusion proteins targeting molecules expressed on the surface of ATL cells and other malignant lymphoid cells, such as CD25, CD2, CD52, and CC chemokine receptor 4 (CCR4), have shown promise in recent clinical trials.

Because most ATL cells express the alpha-chain of the IL-2 receptor (IL-2R; CD25), Waldmann et al. treated patients with ATL using monoclonal antibodies to CD25 [72]. Six (32%) of 19 patients treated with anti-Tac showed objective responses lasting from 9 weeks to longer than 3 years. One impediment to this approach is the quantity of soluble IL-2R shed by the tumor cells into the circulation. Another strategy for targeting IL-2R is conjugation with an immunotoxin (Pseudomonas exotoxin) or a radioisotope (yttrium-90). Waldmann et al. developed a stable conjugate of anti-Tac with yttrium-90; among the 16 patients with ATL who received 5- to 15-mCi doses, 9 (56%) showed objective responses, which lasted longer than those obtained with the unconjugated anti-Tac antibody [73, 74]. LMB-2, composed of an anti-CD25 murine MoAb fused to a truncated form of Pseudomonas toxin, was cytotoxic to CD25-expressing cells, including ATL cells, in vitro and in mice. Phase I/II trials of this agent showed some effect against hairy cell leukemia, CTCL, and ATL [6]. Six of 35 patients in the phase I study had significant levels of neutralizing antibodies after the first cycle. This drug deserves further clinical trials, including in combination with cytotoxic agents.

Denileukin diftitox (DD; DAB(389)-interleukin-2 [IL-2]), an interleukin-2-diphtheria toxin fusion protein targeting IL-2R-expressing malignant T lymphocytes, shows efficacy as a single agent against CTCL and peripheral T-cell lymphoma (PTCL) [75], and its combination with multiagent chemotherapy (CHOP) was promising for PTCL [76]. ATL cells frequently and highly express CD25, as described above, and several ATL cases successfully treated with this agent have been reported [77].

The CD52 antigen is present on normal and pathologic B and T cells. In PTCL, however, CD52 expression varies among patients, with an overall expression rate lower than 50% in one study but not in another [78, 79]; ATL cells express CD52 frequently compared with other PTCLs. The humanized anti-CD52 monoclonal antibody alemtuzumab is active against CLL and PTCL as a single agent. The combination of alemtuzumab with a standard-dose cyclophosphamide/doxorubicin/vincristine/prednisone (CHOP) regimen as first-line treatment for 24 patients with PTCL showed promising results: 17 patients (71%) achieved a CR and 1 a partial remission, with an overall median duration of response of 11 months; the associated infections were mostly manageable but included CMV reactivation [80]. Major infections were JC virus reactivation, pulmonary invasive aspergillosis, and staphylococcal sepsis. Alemtuzumab was also active in a preclinical model of ATL and toxic to p53-deficient cells, and several ATL cases successfully treated with this agent have been reported [81–83].

Siplizumab is a humanized MoAb targeting CD2 that showed efficacy in a murine ATL model. A phase I dose-escalating study of this agent in 22 patients with several kinds of T/NK-cell malignancy revealed 6 responses (2 CRs in LGL leukemia, 3 PRs in ATL, and 1 PR in CTCL). However, 4 patients developed EBV-associated lymphoproliferative disease (LPD) [84].
The broad specificity of this agent, which may eliminate both CD4- and CD8-positive T cells as well as NK cells without affecting B cells, may predispose individuals to the development of EBV lymphoproliferative syndrome.

CC chemokine receptor 4 (CCR4) is expressed on normal T helper type 2 (Th2) and regulatory T (Treg) cells and on certain types of T-cell neoplasms [20, 21, 35]. KW-0761, a next-generation humanized anti-CCR4 MoAb with a defucosylated Fc region, exerts strong antibody-dependent cellular cytotoxicity (ADCC) owing to increased binding to the Fcγ receptor on effector cells [85]. In a phase I dose-escalation study of 4 weekly intravenous infusions of KW-0761 in 16 patients with relapsed CCR4-positive T-cell malignancy (13 ATL and 3 PTCL), one patient, at the maximum dose (1.0 mg/kg), developed dose-limiting grade (G) 3 toxic effects, namely, skin rash and febrile neutropenia, together with G4 neutropenia [86]. Other treatment-related G3-4 toxic effects were lymphopenia (n=10), neutropenia (n=3), leukopenia (n=2), herpes zoster (n=1), and acute infusion reaction/cytokine release syndrome (n=1). Neither the frequency nor the severity of these effects increased with dose escalation or with the plasma concentration of the agent. The maximum tolerated dose was not reached, and no patient had detectable levels of anti-KW-0761 antibody. Five patients (31%; 95% CI, 11% to 59%) achieved objective responses: 2 complete (at 0.1 and 1.0 mg/kg) and 3 partial (1 at 0.01 and 2 at 1.0 mg/kg). Three of the 13 patients with ATL (23%) achieved a response (2 CRs and 1 PR). Responses varied by lesion site: good in peripheral blood (PB) (6 CRs and 1 PR among 7 evaluable cases), intermediate in skin (3 CRs and 1 PR among 8 evaluable cases), and poor in lymph nodes (1 CR and 2 PRs among 11 evaluable cases). KW-0761 was well tolerated at all doses tested, demonstrating potential efficacy against relapsed CCR4-positive ATL or PTCL. Recently, the results of a subsequent phase II study of KW-0761 at 1.0 mg/kg in relapsed ATL, showing a 50% response rate with an acceptable toxicity profile, were reported [87]. A phase II trial of single-agent KW-0761 at 1.0 mg/kg in relapsed PTCL/CTCL and a phase II trial of VCAP-AMP-VECP combined with KW-0761 for untreated aggressive ATL are ongoing.
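As a numerical check, the reported overall response rate and its confidence interval (5/16 = 31%; 95% CI, 11% to 59%) match an exact Clopper-Pearson calculation, and the same computation makes the site-by-site contrast explicit. A small sketch, assuming only `scipy`; the counts are those quoted above.

```python
# Site-by-site response rates for KW-0761 from the counts quoted in the text.
from scipy.stats import beta

def exact_ci(k, n, alpha=0.05):
    """Exact binomial (Clopper-Pearson) 95% confidence interval."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

sites = {
    "overall":          (5, 16),   # reported: 31% (95% CI, 11%-59%)
    "peripheral blood": (7, 7),    # 6 CR + 1 PR / 7 evaluable
    "skin":             (4, 8),    # 3 CR + 1 PR / 8 evaluable
    "lymph nodes":      (3, 11),   # 1 CR + 2 PR / 11 evaluable
}
for site, (responders, evaluable) in sites.items():
    lo, hi = exact_ci(responders, evaluable)
    print(f"{site}: {responders}/{evaluable} = {responders/evaluable:.0%} "
          f"(95% CI {lo:.0%}-{hi:.0%})")
```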
### 5.7.4. Other Agents
A proteasome inhibitor, bortezomib (Velcade), and an immunomodulatory agent, lenalidomide (Revlimid), both of which have potent preclinical and clinical activity in T-cell malignancies including ATL, are now in clinical trials for relapsed ATL in Japan [88–90]. Other potential drugs for ATL include pralatrexate (Folotyn), a new agent with clinical activity in T-cell malignancies including ATL [91–93]. Pralatrexate is a novel antifolate with high affinity for the reduced folate carrier (RFC), which is highly expressed in malignant cells, resulting in improved membrane transport into, and polyglutamylation within, tumor cells; it was recently approved by the FDA for T-cell lymphoma including ATL.
## 5.8. Prevention
Two steps should be considered for the prevention of HTLV-1-associated ATL. The first is the prevention of HTLV-1 infection. This has been achieved in some endemic areas of Japan by screening blood donors for HTLV-1 and asking mothers who are carriers to refrain from breast feeding. Over the past several decades, even before the initiation of these interventions, the prevalence of HTLV-1 had already declined drastically in endemic areas of Japan, probably because of birth cohort effects [94]. The elimination of HTLV-1 in endemic areas is now considered possible owing to this natural decrease in prevalence together with the interruption of transmission through blood transfusion and breast feeding. The second step is the prevention of ATL among HTLV-1 carriers. This has not been achieved, partly because only about 5% of HTLV-1 carriers develop the disease in their lifetime, although several risk factors have been identified by a cohort study of HTLV-1 carriers (Joint Study of Predisposing Factors for ATL Development) [95]. Moreover, no agent has been found to be effective in preventing the development of ATL among HTLV-1 carriers.
## 6. Conclusions
Clinical trials have been paramount to the recent advances in ATL treatment, including the assessment of chemotherapy, AZT/IFN, and allo-HSCT. Recently, a strategy for ATL treatment stratified by subtype classification, prognostic factors, and the response to initial treatment, together with response criteria, was proposed [31]. The recommended treatment algorithm for ATL is shown in Table 2. However, ATL still has a worse prognosis than other T-cell malignancies [96]. The survival curves, with an initial steep slope followed by a gentler slope and no plateau, are observed both for aggressive ATL treated with chemotherapy and for indolent ATL managed by watchful waiting, although the prognosis is much better in the latter [38]. A prognostic model for each subgroup should be elucidated to properly identify candidates for allo-HSCT, which can achieve a cure of ATL despite considerable treatment-related mortality. Although several small phase II trials and a recent meta-analysis suggested IFN/AZT therapy to be promising, no confirmatory phase III study has been conducted [39]. Furthermore, as described in detail in the other chapters, more than ten promising new agents for PTCL/CTCL including ATL are now in clinical trials or in preparation. The results of future clinical trials on ATL should be incorporated so that the consensus is continually updated toward evidence-based practical guidelines.

Table 2
Strategy for the treatment of Adult T-Cell Leukemia-Lymphoma.
**Smoldering- or favorable chronic-type ATL**
(i) Consider inclusion in prospective clinical trials.
(ii) Symptomatic patients (skin lesions, opportunistic infections, etc.): consider AZT/IFN or watchful waiting.
(iii) Asymptomatic patients: consider watchful waiting.

**Unfavorable chronic- or acute-type ATL**
(i) If outside clinical trials, check prognostic factors (including clinical and molecular factors if possible):
  (a) Good prognostic factors: consider chemotherapy (VCAP-AMP-VECP, evaluated in a phase III trial against biweekly CHOP) or AZT/IFN (evaluated in a meta-analysis of retrospective studies).
  (b) Poor prognostic factors: consider chemotherapy followed by conventional or reduced-intensity allo-HSCT (evaluated in retrospective and prospective Japanese analyses, resp.).
  (c) Poor response to initial therapy: consider conventional or reduced-intensity allo-HSCT.

**Lymphoma-type ATL**
(i) If outside clinical trials, consider chemotherapy (VCAP-AMP-VECP).
(ii) Check prognostic factors (including clinical and molecular factors if possible) and response to chemotherapy:
  (a) Good prognostic factors and good response to initial therapy: consider chemotherapy followed by observation.
  (b) Poor prognostic factors or poor response to initial therapy: consider chemotherapy followed by conventional or reduced-intensity allo-HSCT.
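To make the stratification above concrete, the sketch below encodes Table 2 as a simple decision function. It is a schematic illustration only, under stated assumptions: the subtype strings and the `recommend` helper are our own shorthand, the "consider inclusion in prospective clinical trials" caveat applies to every branch, and no code can capture the clinical judgment the guideline presupposes.

```python
def recommend(subtype: str, symptomatic: bool = False,
              poor_prognosis: bool = False, poor_response: bool = False) -> str:
    """Schematic rendering of the Table 2 treatment algorithm (illustrative only).

    subtype: one of 'smoldering', 'favorable_chronic', 'unfavorable_chronic',
             'acute', or 'lymphoma'.
    Inclusion in a prospective clinical trial should be considered first
    in every branch.
    """
    # Indolent ATL: smoldering or favorable chronic type.
    if subtype in ("smoldering", "favorable_chronic"):
        return "AZT/IFN or watchful waiting" if symptomatic else "watchful waiting"
    # Aggressive leukemic ATL: unfavorable chronic or acute type.
    if subtype in ("unfavorable_chronic", "acute"):
        if poor_prognosis or poor_response:
            return "chemotherapy followed by conventional or reduced-intensity allo-HSCT"
        return "chemotherapy (VCAP-AMP-VECP) or AZT/IFN"
    # Lymphoma-type ATL.
    if subtype == "lymphoma":
        if poor_prognosis or poor_response:
            return "chemotherapy (VCAP-AMP-VECP) followed by allo-HSCT"
        return "chemotherapy (VCAP-AMP-VECP) followed by observation"
    raise ValueError(f"unknown subtype: {subtype!r}")

# Example: an asymptomatic smoldering-type patient.
print(recommend("smoldering"))  # -> watchful waiting
```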
---
The broad specificity of this agent may eliminate both CD4- and CD8-positive T cells as well as NK cells without effecting B cells and predispose individuals to the development of EBV lymphoproliferative syndrome.CC chemokine receptor 4 (CCR4) is expressed on normal T helper type 27 and regulatory T (Treg) cells and on certain types of T-cell neoplasms [20, 21, 35]. KW-0761, a next generation humanized anti-CCR4 mAb, with a defucosylated Fc region, exerts strong antibody-dependent cellular cytotoxicity (ADCC) due to increased binding to the Fcγ receptor on effecter cells [85]. A phase I study of dose escalation with 4 weekly intravenous infusions of KW-0761 in 16 patients with relapsed CCR4-positive T cell malignancy (13 ATL and 3 PTCL) revealed that one patient, at the maximum dose (1.0 mg/kg), developed grade (G) 3 dose-limiting toxic effects, namely, skin rashes and febrile neutropenia and G4 neutropenia [86]. Other treatment-related G3-4 toxic effects were lymphopenia (n=10), neutropenia (n=3), leukopenia (n=2), herpes zoster (n=1), and acute infusion reaction/cytokine release syndrome (n=1). Neither the frequency nor severity of these effects increased with dose escalation or the plasma concentration of the agent. The maximum tolerated dose was not reached. No patients had detectable levels of anti-KW-0761 antibody. Five patients (31%; 95% CI, 11% to 59%) achieved objective responses: 2 complete (0.1; 1.0 mg/kg) and 3 partial (0.01; 2 at 1.0 mg/kg) responses. Three out of 13 patients with ATL (31%) achieved a response (2 CR and 1 PR). Responses in each lesion were diverse, that is, good in PB (6 CR and 1 PR/7 evaluable cases), intermediate in skin (3 CR and 1 PR/8 evaluable cases), and poor in LN (1 CR and 2 PR/11 evaluable cases). KW-0761 was well tolerated at all the doses tested, demonstrating potential efficacy against relapsed CCR4-positive ATL or PTCL. Recently, results of subsequent phase II studies at the 1.0 mg/kg in relapsed ATL, showing 50% of response rate with acceptable toxicity profiles, reported [87]. A phase II trial of single agent KW-0761 at the 1.0 mg/kg in relapsed PTCL/CTCL and a phase II trial of VCAP-AMP-VECP combined with KW-0761 for untreated aggressive ATL are ongoing.
#### 5.7.4. Other Agents
A proteasome inhibitor, bortezomib (Velcade), and an immunomodulatory agent, lenalidomide (Revlimid), both have potent preclinical and clinical activity in T-cell malignancies including ATL, are now under clinical trials for relapsed ATL in Japan [88–90]. Other potential drugs for ATL include pralatrexate (Folotyn), a new agent with clinical activity in T-cell malignancies including ATL [91–93]. The agent is a novel antifolate with improved membrane transport and polyglutamylation in tumor cells and high affinity for the reduced folate carrier (RFC) highly expressed in malignant cells and has been approved by FDA recently for T-cell lymphoma including ATL.
### 5.8. Prevention
Two steps should be considered for the prevention of HTLV-1-associated ATL. The first is the prevention of HTLV-1 infections. This has been achieved in some endemic areas in Japan by screening for HTLV-1 among blood donors and asking mothers who are carriers to refrain from breast feeding. For several decades, before initiation of the interventions, the prevalence of HTLV-1 has declined drastically in endemic areas in Japan, probably because of birth cohort effects [94]. The elimination of HTLV-1 in endemic areas is now considered possible due to the natural decrease in the prevalence as well as the intervention of transmission through blood transfusion and breast feeding. The second step is the prevention of ATL among HTLV-1 carriers. This has not been achieved partly because only about 5% of HTLV-1 carriers develop the disease in their life time although several risk factors have been identified by a cohort study of HTLV-1 carriers (Joint Study of Predisposing Factors for ATL Development) [95]. Also, no agent has been found to be effective in preventing the development of ATL among HTLV-1 carriers.
## 5.1. Watchful Waiting
At present, no standard treatment for ATL exists. Therefore, patients with the smoldering or favorable chronic type, who may survive one or more years without chemotherapy, excluding topical therapy for cutaneous lesions, should be observed and therapy should be delayed until progression of the disease [31]. However, it was recently found that the long-term prognosis of such patients was poorer than expected. In a long-term followup study for 78 patients with indolent ATL (favorable chronic- or smoldering-type) with a policy of watchful waiting until disease progression at a single institution, the median survival time was 5.3 years with no plateau in the survival curve. Twelve patients remained alive for >10 years, 32 progressed to acute ATL, and 51 died [38]. Recently, the striking benefit of early intervention to indolent ATL by IFN and an antiretroviral agent was reported by a meta-analysis [39]. This modality should be extensively evaluated by larger clinical trials to establish appropriate management practices for indolent ATL.
## 5.2. Chemotherapy
Since 1978, chemotherapy trials have been consecutively conducted for patients newly diagnosed with ATL by JCOG’s Lymphoma Study Group (LSG) (Table1) [40–45]. Between 1981 and 1983, JCOG conducted a phase III trial (JCOG8101) to evaluate LSG1-VEPA (vincristine, cyclophosphamide, prednisone, and doxorubicin) versus LSG2-VEPA-M (VEPA plus methotrexate (MTX)) for advanced non-Hodgkin lymphoma (NHL), including ATL [40, 41]. The complete response (CR) rate of LSG2-VEPA-M for ATL (37%) was higher than that of LSG1-VEPA (17%; P=.09). However, the CR rate was significantly lower for ATL than for B-cell NHL and peripheral T-cell lymphoma (PTCL) other than ATL (P<.001). The median survival time of the 54 patients with ATL was 6 months, and the estimated 4-year survival rate was 8%.Table 1
Results of sequential chemotherapeutic-trials of untreated patients with ATL (JCOG-LSG).
J7801J8101J8701J9109J9303JCOG9801LSG1LSG1/LSG2LSG4LSG11LSG15mLSG15/mLSG19Pts. No.18544362965761CR (%)16.727.841.928.335.540.424.6CR + PR (%)51.680.672.065.6MST (months)7.58.07.413.012.710.92 yr. survival (%)17.031.33 yr. survival (%)10.021.923.612.74 yr. survival (%)8.011.6CR: complete remission, PR: partial remission, MST: median survival time.In 1987, JCOG initiated a multicenter phase II study (JCOG8701) of a multiagent combination chemotherapy (LSG4) for advanced aggressive NHL (including ATL). LSG4 consisted of three regimens: (1) VEPA-B (VEPA plus bleomycin), (2) M-FEPA (methotrexate, vindesine, cyclophosphamide, prednisone, and doxorubicin), and (3) VEPP-B, (vincristine, etoposide, procarbazine, prednisone, and bleomycin) [42]. The CR rate for ATL patients was improved from 28% (JCOG8101) to 43% (JCOG8701); however, the CR rate was significantly lower in ATL than in B-cell NHL and PTCL (P<.01). Patients with ATL still showed a poor prognosis, with a median survival time of 8 months and a 4-year survival rate of 12%.The disappointing results with conventional chemotherapies have led to a search for new active agents. Multicenter phase I and II studies of pentostatin (2′-deoxycoformycin, an inhibitor of adenosine deaminase) were conducted against ATL in Japan [43]. The phase II study revealed a response rate of 32% (10 of 31) in cases of relapsed or refractory ATL (2CRs and 8PRs).These encouraging results prompted the investigators to conduct a phase II trial (JCOG9109) with a pentostatin-containing combination (LSG11) as the initial chemotherapy [44]. Patients with aggressive ATL—that is, of the acute, lymphoma, or unfavorable chronic type—were eligible for this study. Unfavorable chronic-type ATL, defined as having at least 1 of 3 unfavorable prognostic factors (low serum albumin level, high LDH level, or high BUN), has an unfavorable prognosis similar to that for acute- and lymphoma-type ATL. A total of 62 untreated patients with aggressive ATL (34 acute, 21 lymphoma, and 7 unfavorable chronic type) were enrolled. A regimen of 1 mg/m2 vincristine on days 1 and 8, 40 mg/m2 doxorubicin on day 1, 100 mg/m2 etoposide on days 1 through 3, 40 mg/m2 prednisolone (PSL) on days 1 and 2, and 5 mg/m2 pentostatin on days 8, 15, and 22 was administered every 28 days for 10 cycles. Among the 61 patients evaluable for toxicity, four patients (7%) died of infections, two from septicemia, and two from cytomegalovirus pneumonia. Among the 60 eligible patients, there were 17CRs (28%) and 14 partial responses (PRs) (overall response rate [ORR] = 52%). The median survival time was 7.4 months, and the estimated 2-year survival rate was 17%. The prognosis in patients with ATL remained poor, even though they were treated with a pentostatin-containing combination chemotherapy.In 1994, JCOG initiated a phase II trial (JCOG9303) of an eight-drug regimen (LSG15) consisting of vincristine, cyclophosphamide, doxorubicin, prednisone, ranimustine, vindesine, etoposide, and carboplatin for untreated ATL [45]. Dose intensification was attempted with the prophylactic use of granulocyte colony-stimulating factor (G-CSF). In addition, non-cross-resistant agents, such as ranimustine and carboplatin, and intrathecal prophylaxis with MTX and PSL were incorporated. Ninety-six previously untreated patients with aggressive ATL were enrolled: 58 acute, 28 lymphoma, and 10 unfavorable chronic types. Approximately 81% of the 93 eligible patients responded (75/93), with 33 patients obtaining a CR (35%). 
The overall survival rate of the 93 patients at 2 years was estimated to be 31%, with a median survival time of 13 months. Grade 4 neutropenia and thrombocytopenia were observed in 65% and 53% of the patients, respectively, whereas grade 4 nonhematologic toxicity was observed in only one patient.Dose intensification of CHOP with prophylactic use of G-CSF was expected to improve survival among patients with aggressive NHL, and our randomized phase II study (JCOG9505) comparing CHOP-14 (LSG19) and dose-escalated CHOP (LSG20) to treat aggressive NHL excluding ATL revealed biweekly CHOP to be more promising [46]. Therefore, we regarded biweekly CHOP as a standard treatment for NHL including aggressive ATL at the time of designing this phase III study.To confirm whether the LSG15 regimen is a new standard for the treatment of aggressive ATL, JCOG conducted a phase III trial comparing modified (m)-LSG15 with biweekly CHOP (cyclophosphamide, hydroxy-doxorubicin, vincristine [Oncovin], and prednisone), both supported with G-CSF and intrathecal prophylaxis [47].mLSG19, a modified version of LSG19, consisted of eight cycles of CHOP [CPA 750 mg/m2, ADM 50 mg/m2,VCR 1.4 mg/m2(maximum 2 mg) on day 1 and PSL 100 mg on days 1 to 5] every 2 weeks [46]. The modification was an intrathecal administration identical to that in mLSG15.mLSG15 in JCOG9801 was a modified version of LSG15 in JCOG9303, consisting of three regimens: VCAP [VCR 1 mg/m2 (maximum 2 mg), CPA 350 mg/m2, ADM 40 mg/m2, PSL 40 mg/m2] on day 1, AMP [ADM 30 mg/m2, MCNU 60 mg/m2, PSL 40 mg/m2] on day 8, and VECP [VDS 2.4 mg/m2 on day 15, ETP 100 mg/m2 on days 15 to 17, CBDCA 250 mg/m2 on day15, PSL 40 mg/m2 on days 15 to 17] on days 15–17, and the next course was to be started on day 29 (Figure 1). The modifications in mLSG15 as compared to LSG15 were as follows: (1) The total number of cycles was reduced from 7 to 6 because of progressive cytopenia, especially thrombocytopenia, after repeating the LSG15 therapy. (2) Cytarabine 40 mg was used with MTX 15 mg and PSL 10 mg for prophylactic intrathecal administration, at the recovery phases of courses 1, 3, and 5 because of the high frequency of central nervous system relapse in the JCOG9303 study. Untreated patients with aggressive ATL were assigned to receive either six courses of mLSG15 every 4 weeks or eight courses of biweekly CHOP. The primary endpoint was overall survival. A total of 118 patients were enrolled. The CR rate was higher in the mLSG15 arm than in the biweekly CHOP arm (40% versus 25%, resp.; P=.020). As shown in Table 1, the median survival time and OS rate at 3 years were 12.7 months and 24% in the mLSG15 arm and 10.9 months and 13% in the biweekly CHOP arm [two-sided P=.169, and the hazard ratio was 0.75; 95% confidence interval (CI), 0.50 to 1.13]. A Cox regression analysis with performance status (PS 0 versus 1 versus 2–4) as the stratum for baseline hazard functions was performed to evaluate the effect on overall survival of age, B-symptoms, subtypes of ATL, LDH, BUN, bulky mass, and treatment arms. According to this analysis, the hazard ratio and two-sided P value for the treatment arms were 0.62 (95% CI, 0.38 to 1.01) and .056, respectively. The difference between the crude analysis and this result was because of unbalanced prognostic factors, such as PS 0 versus 1, and the presence or absence of bulky lesions between the treatment arms. 
The progression-free survival rate at 1 year was 28% in the mLSG15 arm compared with 16% in the biweekly CHOP arm (two-sided P=.20).Figure 1
Regimen of VCAP-AMP-VECP in mLSG15. VCAP: vincristine (VCR), cyclophosphamide (CPA), doxorubicin (ADM), prednisone (PSL); AMP: ADM, ranimustine (MCNU), PSL; VECP: vindesine (VDS), etoposide (ETP), carboplatin (CBDCA), and PSL.*) MCNU and VDS are nitrosourea and vinca alkaloid, respectively, developed in Japan. A previous study on myeloma described that carmustine (BCNU), another nitrosourea, at 1 mg/kg is equivalent to MCNU at 0.8 to 1.0 mg/kg. VDS at 2.4 mg/m2 can be substituted for VCR, another vinca alkaloid used in this regimen, at 1 mg/m2 with possibly less myelosuppression and more peripheral neuropathy which can be managed by dose modification.In mLSG15 versus mLSG19, rate of grade 4 neutropenia, grade 4 thrombocytopenia, and grade 3/4 infection were 98% versus 83%, 74% versus 17%, and 32% versus 15%, respectively. There were three toxic deaths in the former. Three treatment-related deaths (TRDs), two from sepsis and one from interstitial pneumonitis related to neutropenia, were reported in the mLSG15 arm. Two cases of myelodysplastic syndrome were reported, one each in both arms.The longer survival at 3 years and higher CR rate with mLSG15 compared with mLSG19 suggest that mLSG15 is a more effective regimen at the expense of higher toxicity, providing the basis for future investigations in the treatment of ATL [47]. The superiority of VCAP-AMP-VECP in mLSG15 to biweekly CHOP in mLSG19 may be explained by the more prolonged, dose dense schedule of therapy in addition to 4 more drugs. In addition, agents such as carboplatin and ranimustine not affected by multidrug-resistance (MDR) related genes, which were frequently expressed in ATL cells at onset, were incorporated [48]. Intrathecal prophylaxis, which was incorporated in both arms of the phase III study, should be considered for patients with aggressive ATL even in the absence of clinical symptoms because a previous analysis revealed that more than half of relapses at new sites after chemotherapy occurred in the CNS [49]. However, the median survival time of 13 months in VCAP-AMP-VECP (LSG15/mLSG15) still compares unfavorably to other hematological malignancies, requiring further effort to improve the outcome.
## 5.3. Interferon-Alpha and Zidovudine
A small phase II trial in Japan of IFN alpha against relapsed/refractory ATL showed a response rate (all PR) of 33% (8/24), including 5 out of 9 (56%) chronic-type ATL [50]. In 1995, Gill and associates reported that 11 of 19 patients with acute- or lymphoma-type ATL showed major responses (5 CR and 6 PR) to a combination of interferon-alpha (IFN) and zidovudine (AZT) [51]. The efficacy of this combination was also observed by Hermine and associates; major objective responses were obtained in all five patients with ATL (four with acute type and one with smoldering type) [52]. Although these results are encouraging, the OS of previously untreated patients with ATL was relatively short (4.8 months) compared with the survival of those in the chemotherapy trials conducted by the JCOG-LSG (7 to 8 months) [53]. After that, numerous small phase II studies using AZT and IFN have shown responses in ATL patients [54–56]. High doses of both agents are recommended: 6–9 million units of IFN in combination with daily divided AZT doses of 800–1000 mg/day. Therapeutic effect of AZT and IFN is not through a direct cytotoxic effect of these drugs on the leukemic cells [57]. Enduring AZT treatment of ATL cell lines results in inhibition of telomerase which reprograms the cells to p53-dependent senescence [58].Recently, the results of a “meta-analysis” on the use of IFN and AZT for ATL were reported [39]. A total of 100 patients received interferon-alpha and AZT as initial treatments. The ORR was 66%, with a 43% CR rate. In this worldwide retrospective analysis, the median survival time was 24 months and the 5-year survival rate was 50% for first-line IFN and AZT, versus 7 months and 20% for 84 patients who received first-line chemotherapy. The median survival time of patients with acute-type ATL treated with first-line IFN/AZT and chemotherapy was 12 and 9 months, respectively. Patients with lymphoma-type ATL did not benefit from this combination. In addition, first-line IFN/AZT therapy in chronic- and smoldering-type ATL resulted in a 100% survival rate at a median followup of 5 years. However, because of the retrospective nature of this meta-analysis based on medical records at each hospital, the decision process to select the therapeutic modality for each patient and the possibility of interference with OS by second-line treatment remains unknown. While the results for IFN/AZT in indolent ATL appear to be promising compared to those with watchful-waiting policy until disease progression, recently reported from Japan [38], the possibility of selection bias cannot be ruled out. A prospective multicenter phase III study evaluating the efficacy of IFN/AZT as compared to watchful-waiting for indolent ATL is to be initiated in Japan.Recently, a phase II study of the combination of arsenic trioxide, IFN, and AZT for chronic ATL revealed an impressive response rate and moderate toxicity [39]. Although the results appeared promising, the addition of arsenic trioxide to IFN/AZT, which might be sufficient for the treatment of chronic ATL as described above, caused more toxicity and should be evaluated with caution.
## 5.4. Allogeneic Hematopoietic Stem-Cell Transplantation (Allo-HSCT)
Allo-HSCT is now recommended for the treatment of young patients with aggressive ATL [31, 59]. Despite higher treatment-related mortality including graft versus host disease in a retrospective multicenter analysis of myeloablative allo-HSCT, the estimated 3-year OS of 33% is promising, possibly reflecting a graft versus ATL effect [60]. To evaluate the efficacy of allo-HSCT more accurately, especially in view of a comparison with intensive chemotherapy, a prospective multicenter phase II study of LSG15 chemotherapy followed by allo-HSCT is ongoing (JCOG0907).Feasibility studies of allo-HSCT with reduced intensity conditioning for relatively aged patients with ATL also revealed promising results, and subsequent multicenter trials are being conducted in Japan [61, 62]. The minimal residual disease after allo-HSCT detected as HTLV-1 proviral load was much less than that after chemotherapy or AZT/IFN therapy, suggesting the presence of a graft-versus-ATL effect as well as graft-versus-HTLV-1 activity [61].It remains unclear which type of allo-HSCT (myeloablative or reduced intensity conditioning) is more suitable for the treatment of ATL. Furthermore, selection criteria with respect to responses to previous treatments, sources of stem cells, and HTLV-1 viral status of the donor remain to be determined. Recently, a patient in whom ATL derived from donor cells developed four months after transplantation of stem cells from a sibling with HTLV-I was reported [63].However, several other retrospective studies as well as those mentioned above on allo-HSCT showed a promising long-term survival rate of 20 to 40% with an apparent plateau phase despite significant treatment-related mortality.
## 5.5. Supportive Care
The prevention of opportunistic infections is essential in the management of ATL patients, nearly half of whom develop severe infections during chemotherapy. Some patients with indolent ATL develop infections during watchful waiting. Sulfamethoxazole/trimethoprim and antifungal agents have been recommended as prophylaxis for Pneumocystis jiroveci pneumonia and fungal infections, respectively, in the JCOG trials [43–45]. While cytomegalovirus infections are not infrequent among ATL patients, ganciclovir is not usually recommended as a prophylaxis [31]. In addition, in patients not receiving chemotherapy or allo-HSCT, antifungal prophylaxis may not be critical. An antistrongyloides agent, such as ivermectin or albendazole, should be considered to avoid systemic infections in patients with a history of exposure to the parasite in the tropics. Treatment with steroids and proton pump inhibitors may precipitate a fulminant strongyloides infestation and warrants testing before these agents are used in endemic areas [31]. Hypercalcemia associated with aggressive ATL can be corrected using chemotherapy in combination with hydration and bisphosphonates even when the performance status of the patient is poor.
## 5.6. Response Criteria
The complex nature of ATL, often with both leukemic and lymphomatous components, makes response assessment difficult. A modification of the JCOG response criteria was suggested at an ATL consensus meeting, reflecting the criteria for CLL and NHL published subsequently [31, 64, 65]. Recently, revised response criteria were proposed for lymphoma, and new guidelines were presented incorporating positron emission tomography (PET), especially for the assessment of CR. The criteria note that several kinds of lymphoma, including peripheral T-cell lymphomas, are variably [18F]fluorodeoxyglucose (FDG) avid [66]. Meanwhile, PET or PET/CT is recommended for the evaluation of response when the tumorous lesions are FDG-avid at diagnosis [31].
## 5.7. New Agents for ATL
### 5.7.1. Purine Analogs
Several purine analogs have been evaluated for ATL. Among them, pentostatin (deoxycoformycin) has been most extensively evaluated, as a single agent and in combination, as described above [43, 46]. Other purine analogs clinically studied for ATL are fludarabine and cladribine. Fludarabine is among the standard treatments for B-chronic lymphocytic leukemia and other lymphoid malignancies. In a phase I study of fludarabine in Japan, 5 ATL patients and 10 B-CLL patients with refractory or relapsed disease were enrolled [67]. Six grade 3 nonhematological toxicities were observed, all in the ATL patients. PR was achieved in only one of the 5 ATL patients, and its duration was short. Cladribine is among the standard treatments for hairy cell leukemia and other lymphoid malignancies. A phase II study of cladribine for relapsed/refractory aggressive ATL in 15 patients revealed only one PR [68]. Forodesine is a purine nucleoside analog that inhibits purine nucleoside phosphorylase (PNP), an enzyme in the purine salvage pathway that phosphorolyzes 2′-deoxyguanosine (dGuo). PNP deficiency in humans results in a severe combined immunodeficiency (SCID) phenotype and the selective depletion of T cells, associated with high plasma dGuo levels and high intracellular deoxyguanosine triphosphate levels in cells with high deoxynucleoside kinase activity, such as T cells, leading to cell death. Inhibitors of PNP, such as forodesine, mimic SCID in vitro and in vivo, suggesting a new targeted agent specific for T-cell malignancies [69]. A dose-escalating phase I study of forodesine is being conducted in Japan for T-cell malignancies including ATL.
### 5.7.2. Histone Deacetylase Inhibitor
Gene expression governed by epigenetic changes is crucial to the pathogenesis of cancer. Histone deacetylases (HDACs) are enzymes involved in the remodeling of chromatin and play a key role in the epigenetic regulation of gene expression. HDAC inhibitors (HDACis) induce the hyperacetylation of nonhistone proteins as well as nucleosomal histones, resulting in the expression of repressed genes involved in growth arrest, terminal differentiation, and/or apoptosis in cancer cells. Several classes of HDACis have been found to have potent anticancer effects in preclinical studies. HDACis such as vorinostat (suberoylanilide hydroxamic acid: SAHA), romidepsin (depsipeptide), and panobinostat (LBH589) have also shown promise in preclinical and/or clinical studies against T-cell malignancies including ATL [70, 71]. Vorinostat and romidepsin have been approved for cutaneous T-cell lymphoma (CTCL) by the Food and Drug Administration in the USA. LBH589 has a significant anti-ATL effect in vitro and in mice [71]. However, a phase II study for CTCL and indolent ATL in Japan was terminated because of severe infections associated with the shrinkage of skin tumors and formation of ulcers in patients with ATL. Further study is required to evaluate the efficacy of HDACis for PTCL/CTCL including ATL.
### 5.7.3. Monoclonal Antibodies and Toxin Fusion Proteins
Monoclonal antibodies (MoAbs) and toxin fusion proteins targeting several molecules expressed on the surface of ATL cells and other lymphoid malignant cells, such as CD25, CD2, CD52, and chemokine receptor 4 (CCR4), have shown promise in recent clinical trials. Because most ATL cells express the alpha-chain of IL-2R (CD25), Waldmann et al. treated patients with ATL using monoclonal antibodies to CD25 [72]. Six (32%) of 19 patients treated with anti-Tac showed objective responses lasting from 9 weeks to longer than 3 years. One impediment to this approach is the quantity of soluble IL-2R shed by the tumor cells into the circulation. Another strategy for targeting IL-2R is conjugation with an immunotoxin (Pseudomonas exotoxin) or a radioisotope (yttrium-90). Waldmann et al. developed a stable conjugate of anti-Tac with yttrium-90. Among the 16 patients with ATL who received 5- to 15-mCi doses, 9 (56%) showed objective responses. The responses lasted longer than those obtained with the unconjugated anti-Tac antibody [73, 74]. LMB-2, composed of an anti-CD25 murine MoAb fused to a truncated form of Pseudomonas toxin, was cytotoxic to CD25-expressing cells, including ATL cells, in vitro and in mice. Phase I/II trials of this agent showed some effect against hairy cell leukemia, CTCL, and ATL [6]. Six of 35 patients in the phase I study had significant levels of neutralizing antibodies after the first cycle. This drug deserves further clinical trials, including in combination with cytotoxic agents. Denileukin diftitox (DD; DAB(389)-interleukin-2 [IL-2]), an interleukin-2-diphtheria toxin fusion protein targeting IL-2 receptor-expressing malignant T lymphocytes, shows efficacy as a single agent against CTCL and peripheral T-cell lymphoma (PTCL) [75]. The combination of this agent with multiagent chemotherapy (CHOP) was also promising for PTCL [76]. ATL cells frequently and highly express CD25 as described above, and several ATL cases successfully treated with this agent have been reported [77]. The CD52 antigen is present on normal and pathologic B and T cells. In PTCL, however, CD52 expression varies among patients, with an overall expression rate lower than 50% in one study but not in another [78, 79]. ATL cells frequently express CD52 as compared to other PTCLs. The humanized anti-CD52 monoclonal antibody alemtuzumab is active against CLL and PTCL as a single agent. The combination of alemtuzumab with a standard-dose cyclophosphamide/doxorubicin/vincristine/prednisone (CHOP) regimen as a first-line treatment for 24 patients with PTCL showed promising results, with CR in 17 (71%) patients and a partial remission in 1, an overall median duration of response of 11 months, and mostly manageable infections, including CMV reactivation [80]. Major infections were JC virus reactivation, pulmonary invasive aspergillosis, and staphylococcal sepsis. ATL cells express CD52, the target of alemtuzumab, which was active in a preclinical model of ATL and toxic to p53-deficient cells, and several ATL cases successfully treated with this agent have been reported [81–83]. Siplizumab is a humanized MoAb targeting CD2 that showed efficacy in a murine ATL model. A phase I dose-escalating study of this agent in 22 patients with several kinds of T/NK-cell malignancy revealed 6 responses (2 CR in LGL leukemia, 3 PR in ATL, and 1 PR in CTCL). However, 4 patients developed EBV-associated LPD [84].
The broad specificity of this agent may eliminate both CD4- and CD8-positive T cells as well as NK cells without affecting B cells, predisposing individuals to the development of EBV lymphoproliferative syndrome. CC chemokine receptor 4 (CCR4) is expressed on normal T helper type 2 and regulatory T (Treg) cells and on certain types of T-cell neoplasms [20, 21, 35]. KW-0761, a next-generation humanized anti-CCR4 MoAb with a defucosylated Fc region, exerts strong antibody-dependent cellular cytotoxicity (ADCC) owing to increased binding to the Fcγ receptor on effector cells [85]. A phase I dose-escalation study with 4 weekly intravenous infusions of KW-0761 in 16 patients with relapsed CCR4-positive T-cell malignancy (13 ATL and 3 PTCL) revealed that one patient, at the maximum dose (1.0 mg/kg), developed grade (G) 3 dose-limiting toxic effects, namely, skin rash and febrile neutropenia, and G4 neutropenia [86]. Other treatment-related G3-4 toxic effects were lymphopenia (n=10), neutropenia (n=3), leukopenia (n=2), herpes zoster (n=1), and acute infusion reaction/cytokine release syndrome (n=1). Neither the frequency nor the severity of these effects increased with dose escalation or the plasma concentration of the agent. The maximum tolerated dose was not reached. No patients had detectable levels of anti-KW-0761 antibody. Five patients (31%; 95% CI, 11% to 59%) achieved objective responses: 2 complete (at 0.1 and 1.0 mg/kg) and 3 partial (1 at 0.01 and 2 at 1.0 mg/kg) responses. Three of the 13 patients with ATL (23%) achieved a response (2 CR and 1 PR). Responses at individual lesion sites were diverse: good in peripheral blood (6 CR and 1 PR of 7 evaluable cases), intermediate in skin (3 CR and 1 PR of 8 evaluable cases), and poor in lymph nodes (1 CR and 2 PR of 11 evaluable cases). KW-0761 was well tolerated at all the doses tested, demonstrating potential efficacy against relapsed CCR4-positive ATL or PTCL. Recently, the results of a subsequent phase II study at 1.0 mg/kg in relapsed ATL were reported, showing a 50% response rate with an acceptable toxicity profile [87]. A phase II trial of single-agent KW-0761 at 1.0 mg/kg in relapsed PTCL/CTCL and a phase II trial of VCAP-AMP-VECP combined with KW-0761 for untreated aggressive ATL are ongoing.
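As a side note on the statistics, the exact binomial confidence interval quoted above (5 responders of 16 patients: 31%; 95% CI, 11% to 59%) can be reproduced with a Clopper-Pearson calculation. The following Python sketch is illustrative only and is not part of the original study; it assumes SciPy is available.

```python
# Illustrative sketch: exact (Clopper-Pearson) 95% CI for a binomial
# proportion, reproducing the 5/16 (31%; 11% to 59%) figure quoted above.
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Two-sided exact (1 - alpha) confidence interval for k successes in n trials."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

lo, hi = clopper_pearson(5, 16)
print(f"5/16 = {5/16:.0%}; 95% CI {lo:.0%} to {hi:.0%}")  # about 11% to 59%
```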
### 5.7.4. Other Agents
A proteasome inhibitor, bortezomib (Velcade), and an immunomodulatory agent, lenalidomide (Revlimid), both of which have potent preclinical and clinical activity in T-cell malignancies including ATL, are now in clinical trials for relapsed ATL in Japan [88–90]. Other potential drugs for ATL include pralatrexate (Folotyn), a new agent with clinical activity in T-cell malignancies including ATL [91–93]. Pralatrexate is a novel antifolate with improved membrane transport and polyglutamylation in tumor cells and high affinity for the reduced folate carrier (RFC), which is highly expressed in malignant cells; it has recently been approved by the FDA for T-cell lymphoma including ATL.
## 5.8. Prevention
Two steps should be considered for the prevention of HTLV-1-associated ATL. The first is the prevention of HTLV-1 infection. This has been achieved in some endemic areas in Japan by screening for HTLV-1 among blood donors and asking mothers who are carriers to refrain from breast feeding. Over the past several decades, even before the initiation of these interventions, the prevalence of HTLV-1 had declined drastically in endemic areas in Japan, probably because of birth cohort effects [94]. The elimination of HTLV-1 in endemic areas is now considered possible owing to the natural decrease in prevalence as well as the interruption of transmission through blood transfusion and breast feeding. The second step is the prevention of ATL among HTLV-1 carriers. This has not been achieved, partly because only about 5% of HTLV-1 carriers develop the disease in their lifetime, although several risk factors have been identified by a cohort study of HTLV-1 carriers (Joint Study of Predisposing Factors for ATL Development) [95]. Also, no agent has been found to be effective in preventing the development of ATL among HTLV-1 carriers.
## 6. Conclusions
Clinical trials have been paramount to the recent advances in ATL treatment, including assessments of chemotherapy, AZT/IFN, and allo-HSCT. Recently, a strategy for ATL treatment, stratified by subtype classification, prognostic factors, and the response to initial treatment, as well as response criteria, was proposed [31]. The recommended treatment algorithm for ATL is shown in Table 2. However, ATL still has a worse prognosis than the other T-cell malignancies [96]. The survival curves for aggressive and indolent ATL, treated with chemotherapy and by watchful waiting, respectively, show an initial steep slope and a subsequent gentle slope without a plateau, although the prognosis is much better in the latter [38]. A prognostic model for each subgroup should be elucidated to properly identify candidates for allo-HSCT, which can achieve a cure of ATL despite considerable treatment-related mortality. Although several small phase II trials and a recent meta-analysis suggested IFN/AZT therapy to be promising, no confirmatory phase III study has been conducted [39]. Furthermore, as described in detail in the other chapters, more than ten promising new agents for PTCL/CTCL including ATL are now in clinical trials or in preparation. Future clinical trials on ATL as described above should be incorporated to ensure that the consensus is continually updated toward evidence-based practice guidelines.
Table 2
Strategy for the treatment of Adult T-Cell Leukemia-Lymphoma.
Smoldering- or favorable chronic-type ATL
(i) Consider inclusion in prospective clinical trials.
(ii) Symptomatic patients (skin lesions, opportunistic infections, etc.): consider AZT/IFN or watch and wait.
(iii) Asymptomatic patients: consider watch and wait.

Unfavorable chronic- or acute-type ATL
(i) If outside clinical trials, check prognostic factors (including clinical and molecular factors if possible):
(a) Good prognostic factors: consider chemotherapy (VCAP-AMP-VECP, evaluated in a phase III trial against biweekly CHOP) or AZT/IFN (evaluated in a meta-analysis of retrospective studies).
(b) Poor prognostic factors: consider chemotherapy followed by conventional or reduced-intensity allo-HSCT (evaluated in retrospective and prospective Japanese analyses, respectively).
(c) Poor response to initial therapy: consider conventional or reduced-intensity allo-HSCT.

Lymphoma-type ATL
(i) If outside clinical trials, consider chemotherapy (VCAP-AMP-VECP).
(ii) Check prognostic factors (including clinical and molecular factors if possible) and response to chemotherapy:
(a) Good prognostic factors and good response to initial therapy: consider chemotherapy followed by observation.
(b) Poor prognostic factors or poor response to initial therapy: consider chemotherapy followed by conventional or reduced-intensity allo-HSCT.
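Read as an algorithm, the stratification in Table 2 branches on subtype, prognostic factors, and response to initial therapy. The Python sketch below is a schematic rendering of that branching for illustration only; the subtype labels, flags, and option strings are our assumptions, all arms presume that inclusion in a prospective clinical trial is considered first, and actual treatment decisions must follow the full consensus report [31].

```python
# Schematic sketch of the Table 2 stratification; illustrative only,
# not a clinical decision tool. Labels and options are assumptions.
def recommend_atl_treatment(subtype: str,
                            poor_prognosis: bool = False,
                            poor_response: bool = False,
                            symptomatic: bool = False) -> str:
    if subtype in ("smoldering", "favorable chronic"):
        # Indolent disease: watch and wait, or AZT/IFN if symptomatic.
        return "AZT/IFN or watch and wait" if symptomatic else "watch and wait"
    if subtype in ("unfavorable chronic", "acute"):
        if poor_response:
            return "conventional or reduced-intensity allo-HSCT"
        if poor_prognosis:
            return "chemotherapy followed by allo-HSCT"
        return "chemotherapy (VCAP-AMP-VECP) or AZT/IFN"
    if subtype == "lymphoma":
        if poor_prognosis or poor_response:
            return "chemotherapy (VCAP-AMP-VECP) followed by allo-HSCT"
        return "chemotherapy (VCAP-AMP-VECP) followed by observation"
    raise ValueError(f"unknown ATL subtype: {subtype!r}")

print(recommend_atl_treatment("acute", poor_prognosis=True))
```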
---
*Source: 101754-2012-01-16.xml* | 2012 |
# A Study of the Safety and Morbidity Profile of Closed versus Open Technique of Laparoscopic Primary Peritoneal Access Port in Patients Undergoing Routine Laparoscopic Cholecystectomy at a Tertiary Care Hospital in Northeastern India
**Authors:** A. Baruah; N. Topno; S. Ghosh; N. Naku; R. Hajong; D. Tongper; D. Khongwar; P. Baruah; N. Chishi; S. Sutradhar
**Journal:** Minimally Invasive Surgery
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1017551
---
## Abstract
Introduction. Laparoscopic cholecystectomy (LC) is the gold standard operation for gallstone disease. Primary port placement into the abdomen is a blind procedure and is challenging, with chances of unforeseen complications. The complication rate has remained the same during the past 25 years. Both the closed/Veress and open/Hasson’s techniques are commonly employed and have their typical indications for use. Materials and Methods. This prospective study was carried out in the Department of General Surgery, North Eastern Indira Gandhi Regional Institute of Health and Medical Sciences (NEIGRIHMS), Shillong, from January 2014 to January 2016, with the aim of comparing the safety profile of the closed/Veress and open/Hasson’s methods of access to the abdomen during laparoscopic cholecystectomy (LC). The study had 400 eligible cases undergoing LC, randomly allotted into 2 groups of 200 cases each: group A, the closed/Veress needle method, and group B, the open/Hasson’s method. Results. The closed/Veress and open/Hasson’s methods of establishing pneumoperitoneum in laparoscopic cholecystectomy are equally safe in terms of major complications. The closed/Veress method gives faster access to the abdomen than the open method (5.62 ± 2.23 minutes and 7.18 ± 2.52 minutes, respectively, p value <0.0001). The open/Hasson’s method is associated with more primary port site complications (9/200 vs. 0/200, p value 0.0036) and troublesome intraoperative gas leaks (39/200 vs. 2/200, p value <0.0001). The open technique for the primary peritoneal access port in laparoscopic cholecystectomy does not impart any additional benefit in terms of the safety and morbidity profile of patients undergoing LC. Conclusion. The closed/Veress method of establishing pneumoperitoneum in laparoscopic cholecystectomy is equally safe in terms of major complications and gives quicker access to the abdomen as compared to the open method.
---
## Body
## 1. Introduction
Laparoscopic cholecystectomy (LC) is the gold standard operation for gallstone disease. Primary port placement into the abdomen through small incisions for the insertion of laparoscopic surgical instruments is a blind procedure and is challenging and fraught with complications. Access is associated with injuries to gastrointestinal tract structures and major blood vessels, and at least 50% of these major complications occur before commencement of the intended surgery [1, 2]. This complication rate has remained the same during the past 25 years.
## 2. Materials and Methods
This prospective study was carried out in the Department of General Surgery, North Eastern Indira Gandhi Regional Institute of Health and Medical Sciences (NEIGRIHMS), Shillong, from January 2014 to January 2016, with the aim of comparing the safety profile of open versus closed methods of access to the abdomen during LC. A total of 400 patients admitted for LC were enrolled in the study after due informed consent from the patients. The study was approved by the institutional ethics committee. Single blinding was adopted: patients were unaware of the group to which they would be allocated. The study group consisted of 129 males and 271 females, who were randomly allotted into 2 groups: group A using the closed/Veress needle method (200 patients) and group B using the open/Hasson’s method (200 patients). LC was performed by surgeons with more than 5 years’ experience in the field of laparoscopic surgery.
### 2.1. Inclusion Criteria
The inclusion criteria were as follows:
(i) All patients undergoing routine LC
(ii) Patients above 18 years of age
(iii) Diagnosed with calculous cholecystitis on ultrasound
### 2.2. Exclusion Criteria
The exclusion criteria were as follows:
(i) Those unwilling to consent
(ii) LC in pregnant women
(iii) LC for indications other than calculous cholecystitis
(iv) LC along with laparoscopic CBD exploration
(v) Previous abdominal operations

The personal details of the patient, such as name, age/gender, date of admission, date of operation, date of discharge, and complications, were recorded in the proforma.
### 2.3. Closed/Veress Method
A transversely placed sub/supraumbilical stab skin incision of about 5-6 mm was employed, and the subcutaneous tissue was then bluntly dissected until the fascia was palpable. The abdominal wall was lifted with one hand, while the Veress needle was held in the right hand like a dart and inserted through the fascia into the peritoneal cavity. The angle of Veress needle insertion varied from 45° in non-obese patients to 90° in obese patients [3]. Two clicks of the Veress needle were appreciated as it penetrated first the umbilical fascia and then the peritoneum. The confirmatory test for correct placement of the Veress needle was to observe that the intraperitoneal pressure was below 8 mm Hg and that gas was flowing freely. After achieving adequate pneumoperitoneum with an intraabdominal pressure of 10–12 mmHg, the Veress needle was replaced with the trocar and cannula. The trocar was advanced in a steady rotating manner until a hissing sound from the outer end of the cannula was heard or a change in resistance was noticed.
### 2.4. Open/Hasson’s Method
A transverse or semicircular incision of approximately 1.5–2 cm was made in the inferior/superior umbilical fold, the skin edges were retracted with small Langenbeck retractors, and the fat was separated from the umbilical scar [4–7]. The rectus sheath was picked up with an Allis forceps to facilitate lifting of the abdominal wall. A vertical incision was placed on the fascia and rectus sheath. Using good tissue retraction, the preperitoneal fat and the peritoneum were identified. The peritoneum was sequentially picked up using two Halstead’s mosquito artery forceps and then incised with a pair of scissors. The little finger was then introduced through this incision to explore the area around the incision for any adhesions. The 10 mm cannula without the trocar was inserted through the incision. The cannula was fixed to the abdominal wall with a 1/0 silk suture to prevent leakage of the pneumoperitoneum. Peroperative findings such as the method of pneumoperitoneum creation and its duration, the number of attempts, incision size, port site bleeding, gas leak, and total gas used were recorded. Peroperative complications such as visceral or vascular injury and port site hematoma were noted. Postoperative complications such as primary peritoneal access port site hematoma or infection were noted.
### 2.5. Statistical Analysis
The data were analysed using INSTAT software (GraphPad Prism Software Inc., La Jolla, California, USA). The mean access time was compared by Student’s t-test, and the differences in the various complications between the groups were assessed by Fisher’s exact test. A p value of <0.05 was taken to be statistically significant.
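For illustration, both analyses can be re-run from the summary data reported in Section 3 (mean access times and port site SSI counts). The sketch below uses scipy.stats as a stand-in for INSTAT, which is an assumption on our part; minor numerical differences from the published p values are therefore possible.

```python
# Sketch of the two tests described above, applied to the summary
# data reported in the Results; scipy.stats stands in for INSTAT.
from scipy.stats import ttest_ind_from_stats, fisher_exact

# Student's t-test on mean access time (group A vs. group B).
t, p = ttest_ind_from_stats(mean1=5.62, std1=2.23, nobs1=200,
                            mean2=7.18, std2=2.52, nobs2=200,
                            equal_var=True)
print(f"Access time: t = {t:.2f}, p = {p:.2g}")  # reported as p < 0.0001

# Fisher's exact test on port site SSI (0/200 closed vs. 9/200 open).
# Rows: group A, group B; columns: SSI, no SSI.
_, p = fisher_exact([[0, 200], [9, 191]])
print(f"Port site SSI: p = {p:.4f}")  # reported as p = 0.0036
```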
## 3. Results
A total of 400 patients undergoing LC were randomly divided into 2 groups. Both groups were well matched for age, sex, and body weight (Table 1).
Table 1
Demographic profile of patients in two groups.
| Variable | Group A (closed/Veress needle) (n = 200) | Group B (open/Hasson’s) (n = 200) |
|---|---|---|
| Age (years) | 36.21 ± 9.00 | 37.61 ± 8.75 |
| Male | 68 | 61 |
| Female | 132 | 139 |
| Weight (kg) | 60.90 ± 11.21 | 69.12 ± 14.25 |

Time taken for access in the 2 groups was calculated from skin incision to entry of the first trocar. The difference in mean access time in group A (5.62 ± 2.23 minutes) versus group B (7.18 ± 2.52 minutes) was statistically significant (p < 0.0001), meaning that access in the closed/Veress needle group was faster than in the open/Hasson’s method group. The majority (63.5%) of accesses in group A, i.e., the closed/Veress needle group, were achieved in 1–5 minutes, whereas the majority (72%) in group B, i.e., the open/Hasson’s method group, were achieved in 6–10 minutes (Table 2).
Table 2
Access time analysis in both groups.
| Access time (minutes) | Group A (closed/Veress needle) (n = 200) | Group B (open/Hasson’s) (n = 200) | P value |
|---|---|---|---|
| 1–5 | 127 | 56 | |
| 6–10 | 72 | 144 | |
| >10 | 1 | 0 | |
| Mean access time | 5.62 ± 2.23 | 7.18 ± 2.52 | <0.0001 |

It was observed that intraoperative gas leak was a troublesome problem in the open/Hasson’s method group (19.5%) compared to the closed/Veress method group (1%), and the difference was statistically significant (p < 0.0001) (Table 3). There were two omental injuries (non-expanding hematomas) in group A (closed/Veress method), which were managed conservatively. There was one bowel injury (an ileal serosal tear of approximately 0.5 cm) in group B (open/Hasson’s), detected on table and repaired laparoscopically using a single layer of seromuscular silk sutures. However, there were no major vascular injuries in either group. The difference in the incidence of omental injury and bowel injury between the two groups was not statistically significant (Table 3).
Table 3
Complications in the two groups.
| Complication | Group A (closed/Veress needle) (n = 200) | Group B (open/Hasson’s) (n = 200) | P value |
|---|---|---|---|
| Intraoperative gas leak | 2 | 39 | <0.0001 |
| Omental injury | 2 | 0 | 0.4987 |
| Major vascular injury | 0 | 0 | — |
| Bowel injury | 0 | 1 | 1.00 |

Primary port site superficial surgical site infections were observed only in the open group (4.5%), and the difference was statistically significant (p = 0.0036). Port site hematoma was also observed only in the open/Hasson’s group (2.5%), but the difference from the closed/Veress needle group was not statistically significant (Table 4).
Table 4
Port site complications.
| Complication | Group A (closed/Veress needle) (n = 200) | Group B (open/Hasson’s) (n = 200) | P value |
|---|---|---|---|
| Port site hematoma | 0 | 5 | 0.0609 |
| Port site SSI | 0 | 9 | 0.0036 |
| Port site hernia | 0 | 0 | — |

SSI, surgical site infection. NB: port site SSIs were managed with wound care and antibiotics.
## 4. Discussion
LC is the gold standard operation for gallstone disease. Abdominal access and creation of a pneumoperitoneum are the first important steps in any laparoscopic surgery and carry a potential risk of bowel and vascular injuries. These are unique to laparoscopic surgery and are rarely seen in open surgery [8]. Access is associated with injuries to the gastrointestinal tract and major blood vessels, and at least 50% of these major complications occur before commencement of the intended surgery [1, 2]. This complication rate has remained the same during the past 25 years, and hence, the technique of primary trocar entry in laparoscopy still remains a debatable topic. No single method is suitable for all cases of LC. The entry technique may be individualized in each case depending on the preoperative evaluation and surgical skill. Today, the closed/Veress needle and open/Hasson’s techniques, with their various modifications, are the two widely used methods of primary abdominal access [9]. Hence, we compared these two methods in terms of access time, safety profile, and complications associated with each method. We found no instances of major vascular injury in either group. However, the open/Hasson’s group encountered one ileal injury compared to none in the closed/Veress group. Chapron et al., in their large series, reported bowel and major vessel injury rates of 0.04% and 0.01% with the closed technique (n = 8324) and 0.19% and 0% with the open technique (n = 1562), respectively, and concluded that open laparoscopy does not reduce the risk of major complications during laparoscopic access [10]. In our study, there was no significant difference in major complications between the two groups. On the contrary, Taye et al. [11], in their comparative study of 3000 cases, concluded that the open method was a relatively safer technique as far as major complications are concerned. Bathla et al. [12] found the open technique of primary trocar insertion superior, as the Veress method had caused small bowel perforation (2%), whereas in our study we had no small bowel injury using the Veress needle. We found intraoperative gas leak to be a statistically significant problem in the open/Hasson’s method group (19.5%) compared to the closed/Veress method group (1%), similar to the results reported by Juneja et al. [13] and Chotai et al. [14]. This complication was troublesome as it disturbed the tempo of the surgery, with a resultant longer operating time. On analysis of the access time between the two groups, we found that Veress needle access to the abdomen was significantly quicker than the open/Hasson’s method. The majority (63.5%) of accesses in the closed/Veress needle group were achieved in 1–5 minutes compared to only 28% in the open/Hasson’s method group. Nawaz [15], in a study of 140 patients, found similar results: the access time for creation of pneumoperitoneum and insertion of the camera port with the Veress needle (4 ± 1 min) was faster than in the open group (5 ± 1 min). On the contrary, Chotai et al. [14] found that the access time for creation of pneumoperitoneum and insertion of the primary camera port was longer at 5.12 ± 2.51 minutes in the closed method versus 3.94 ± 2.7 minutes in the open method. Hamayun et al. [16] and Juneja et al. [13] also found the open method to be faster than the Veress needle method. Pawanindra et al. [17] reported periumbilical hematoma in 2.91% (22 cases) of 755 cases of modified open port insertion.
Nawaz [15] found that 1.3% and 2.6% of patients developed umbilical port site hematoma and umbilical port site infection, respectively, compared to none in the Veress needle group. Akbar et al. [18] noted that the incidence of wound infection was higher in the open group but not statistically significant. In our study, 2.5% (5 of 200) of patients had port site hematoma and 4.5% (9 of 200) had port site infection in the open/Hasson’s group, compared to no such port site complication in the closed/Veress needle group. Port site infection in the open/Hasson’s group was statistically significant. These port site complications in the open/Hasson’s group may be attributed to larger incisions, more tissue dissection, and greater trauma compared with the closed/Veress needle method.
## 5. Conclusions
The closed/Veress method of establishing pneumoperitoneum in laparoscopic cholecystectomy is equally safe in terms of major complications and gives quicker access to the abdomen as compared to the open/Hasson’s method. The open/Hasson’s method is associated with more port site complications and troublesome intraoperative gas leaks. Thus, the open technique for the primary peritoneal access port in laparoscopic cholecystectomy does not impart any additional benefit in terms of the safety and morbidity profile of patients.
---
*Source: 1017551-2022-07-12.xml* | 1017551-2022-07-12_1017551-2022-07-12.md | 18,648 | A Study of the Safety and Morbidity Profile of Closed versus Open Technique of Laparoscopic Primary Peritoneal Access Port in Patients Undergoing Routine Laparoscopic Cholecystectomy at a Tertiary Care Hospital in Northeastern India | A. Baruah; N. Topno; S. Ghosh; N. Naku; R. Hajong; D. Tongper; D. Khongwar; P. Baruah; N. Chishi; S. Sutradhar | Minimally Invasive Surgery
(2022) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2022/1017551 | 1017551-2022-07-12.xml | ---
## Abstract
Introduction. Laparoscopic cholecystectomy (LC) is the gold standard operation for gallstone disease. Primary port placement into the abdomen is a blind procedure and is challenging with chances of unforeseen complications. The complication rate has remained the same during the past 25 years. Both closed/Veress and open/Hasson’s techniques are commonly employed and have their typical indications for use. Materials and Methods. This prospective study was carried out in the Department of General Surgery, North Eastern Indira Gandhi Regional Institute of Health and Medical Sciences (NEIGRIHMS), Shillong, from January 2014 to January 2016, with the aim to compare the safety profile of closed/Veress and open/Hasson’s methods of access to the abdomen during laparoscopic cholecystectomy (LC). The study had 400 eligible cases undergoing LC who were randomly allotted into 2 groups with 200 cases each: group A: closed/Veress needle method and group B: open/Hasson’s method. Results. Closed/Veress and open/Hasson’s method of establishing pneumoperitoneum in laparoscopic cholecystectomy is equally safe in terms of major complications. The closed/Veress method gives faster access to the abdomen as compared to the open method (5.62 ± 2.23 minutes and 7.18 ± 2.52 minutes, respectively, p value <0.0001). The open/Hasson’s method is associated with more primary port site complications (9/200 vs. 0/200, p value 0.0036) and troublesome intraoperative gas leaks (39/200 vs. 2/200, p value <0.0001). The open technique for primary peritoneal access port for laparoscopic cholecystectomy does not impart any additional benefits in terms of safety and morbidity profile in patients undergoing LC. Conclusion. The closed/Veress method of establishing pneumoperitoneum in laparoscopic cholecystectomy is equally safe in terms of major complications and gives quicker access to the abdomen as compared to the open method.
---
## Body
## 1. Introduction
Laparoscopic cholecystectomy (LC) is the gold standard operation for gallstone disease. Primary port placement into the abdomen through small incisions for insertion of laparoscopic surgical instruments which is a blind procedure is challenging and fraught with complications. Access is associated with injuries to the gastrointestinal tract structures and major blood vessels, and at least 50% of these major complications occur before commencement of the intended surgery [1, 2]. This complication rate has remained the same during the past 25 years.
## 2. Materials and Methods
This prospective study was carried out in the Department of General Surgery, North Eastern Indira Gandhi Regional Institute of Health and Medical Sciences (NEIGRIHMS), Shillong, from January 2014 to January 2016, with the aim of comparing the safety profile of open versus closed methods of access to the abdomen during LC.A total of 400 patients admitted for LC were enrolled in the study after due informed consent from the patients. The study was approved by the institutional ethics committee. Single blinding was adopted where patients were unaware of the group to which they would be allocated. The study group of patients consisted of 129 males and 271 females and they were allotted randomly into 2 groups: group A using the closed/Veress needle method (200 patients) and group B using the open/Hasson’s method (200 patients). LC was performed by surgeons having more than 5 years’ experience in the field of laparoscopic surgery.
### 2.1. Inclusion Criteria
The inclusion criteria were as follows:(i)
All patients undergoing routine LC(ii)
Patients above 18 years of age(iii)
Diagnosed to be calculous cholecystitis on ultrasound
### 2.2. Exclusion Criteria
The exclusion criteria were as follows:(i)
Those unwilling to consent(ii)
LC on pregnant women(iii)
LC for indications other than calculous cholecystitis(iv)
LC along with laparoscopic CBD exploration(v)
Previous abdominal operationsThe personal details of the patient like name, age/gender, date of admission, date of operation, date of discharge, and complications were recorded in the proforma.
### 2.3. Closed/Veress Method
A transversely placed sub/supraumbilical stab skin incision of about 5-6 mm was employed, and then, subcutaneous tissue was bluntly dissected until fascia was palpable. The abdominal wall was lifted with one hand, while the Veress needle was held in the right hand like a dart and inserted through the fascia into the peritoneal cavity. The angle of Veress needle insertion varied from 45° in non-obese to 90° in obese [3]. Two clicks of the Veress needle were appreciated as it penetrated first the umbilical fascia and then the peritoneum. The confirmatory test for correct placement of Veress was to observe that the intraperitoneal pressure was below 8 mm Hg and gas was flowing freely.After achieving adequate pneumoperitoneum with intraabdominal pressure of 10–12 mmHg, the Veress needle was replaced with the trocar and cannula. It was advanced in steady rotating manner until a hissing sound from the outer end of the cannula was heard or change in resistance was noticed.
### 2.4. Open/Hasson’s Method
A 1.5–2 cm transverse or semicircular incision approximately was made in the inferior/superior umbilical fold, and the skin edges were retracted with small Langenbeck retractors and the fat separated from the umbilical scar [4–7]. The rectus sheath was picked up with an Allis forceps to facilitate lifting of the abdominal wall. A vertical incision was placed on the fascia and rectus sheath. Using good tissue retraction, the preperitoneal fat and the peritoneum were identified. The peritoneum was sequentially picked up using two Halstead’s mosquito artery forceps and then incised with a pair of scissors. The little finger was then introduced through this incision to explore the area around the incision for any adhesions. The 10 mm cannula without the trocar was inserted through the incision. The cannula was fixed to the abdominal wall with a 1/0 silk suture to prevent leakage of the pneumoperitoneum.Per operative findings like method of pneumoperitoneum creation and its duration, a number of attempts, incision size, port site bleeding, gas leak, and total gas used were recorded. Per operative complications like visceral or vascular injury and port site hematoma were noted. Postoperative complications like primary peritoneal access port site hematoma or infection were noted.
### 2.5. Statistical Analysis
The data were analysed by using INSTAT software (GraphPad Prism Software Inc, La Jolla, California. USA). The mean access time was calculated by Studentst-test and the difference between the various complications among the group was calculated by Fisher’s exact test. A p value of <0.05 was taken to be statistically significant.
## 2.1. Inclusion Criteria
The inclusion criteria were as follows:(i)
All patients undergoing routine LC(ii)
Patients above 18 years of age(iii)
Diagnosed to be calculous cholecystitis on ultrasound
## 2.2. Exclusion Criteria
The exclusion criteria were as follows:(i)
Those unwilling to consent(ii)
LC on pregnant women(iii)
LC for indications other than calculous cholecystitis(iv)
LC along with laparoscopic CBD exploration(v)
Previous abdominal operationsThe personal details of the patient like name, age/gender, date of admission, date of operation, date of discharge, and complications were recorded in the proforma.
## 2.3. Closed/Veress Method
A transversely placed sub/supraumbilical stab skin incision of about 5-6 mm was employed, and then, subcutaneous tissue was bluntly dissected until fascia was palpable. The abdominal wall was lifted with one hand, while the Veress needle was held in the right hand like a dart and inserted through the fascia into the peritoneal cavity. The angle of Veress needle insertion varied from 45° in non-obese to 90° in obese [3]. Two clicks of the Veress needle were appreciated as it penetrated first the umbilical fascia and then the peritoneum. The confirmatory test for correct placement of Veress was to observe that the intraperitoneal pressure was below 8 mm Hg and gas was flowing freely.After achieving adequate pneumoperitoneum with intraabdominal pressure of 10–12 mmHg, the Veress needle was replaced with the trocar and cannula. It was advanced in steady rotating manner until a hissing sound from the outer end of the cannula was heard or change in resistance was noticed.
## 2.4. Open/Hasson’s Method
A 1.5–2 cm transverse or semicircular incision approximately was made in the inferior/superior umbilical fold, and the skin edges were retracted with small Langenbeck retractors and the fat separated from the umbilical scar [4–7]. The rectus sheath was picked up with an Allis forceps to facilitate lifting of the abdominal wall. A vertical incision was placed on the fascia and rectus sheath. Using good tissue retraction, the preperitoneal fat and the peritoneum were identified. The peritoneum was sequentially picked up using two Halstead’s mosquito artery forceps and then incised with a pair of scissors. The little finger was then introduced through this incision to explore the area around the incision for any adhesions. The 10 mm cannula without the trocar was inserted through the incision. The cannula was fixed to the abdominal wall with a 1/0 silk suture to prevent leakage of the pneumoperitoneum.Per operative findings like method of pneumoperitoneum creation and its duration, a number of attempts, incision size, port site bleeding, gas leak, and total gas used were recorded. Per operative complications like visceral or vascular injury and port site hematoma were noted. Postoperative complications like primary peritoneal access port site hematoma or infection were noted.
## 2.5. Statistical Analysis
The data were analysed by using INSTAT software (GraphPad Prism Software Inc, La Jolla, California. USA). The mean access time was calculated by Studentst-test and the difference between the various complications among the group was calculated by Fisher’s exact test. A p value of <0.05 was taken to be statistically significant.
## 3. Results
A total of 400 patients undergoing LC were randomly divided into 2 groups. Both groups were well matched c for age, sex, and body weight (Table1).Table 1
Demographic profile of patients in two groups.
VariableGroup A (closed/Veress needle) (n = 200)Group B (open/Hasson’s) (n = 200)Age (years)36.21 ± 9.0037.61 ± 8.75Male6861Female132139Weight (Kg)60.90 ± 11.2169.12 ± 14.25Time taken for access in the 2 groups was calculated from skin incision to entry of first trocar. The difference in mean access time in group A (5.62 ± 2.23) versus group B (7.18 ± 2.52) was statistically significant (p<0.0001) which meant that access time in the closed/Veress needle group was faster compared to the open/Hasson’s method group. The majority (63.5%) of access in group A, i.e., the closed/Veress needle group was achieved in 1–5 minutes compared to group B, i.e., the open/Hasson’s method group (72%) was achieved in 6–10 minutes (Table 2).Table 2
Access time analysis in both groups.
Access time (in minutes)Group A (closed/Veress needle) (n = 200)Group B (open/Hasson’s) (n = 200)P value1–5127566–1072144>1010Mean access time5.62 ± 2.237.18 ± 2.52<0.0001It was observed that intraoperative gas leak was a troublesome problem in the open/Hasson’s method group (19.5%) compared to the closed/Veress method group (1%) and the difference was statistically significant (p<0.0001) (Table 3). There were two omental injuries (non-expanding hematoma) in the group A (closed/Veress method) which were managed conservatively. There was one bowel injury (ileal serosal tear approximately 0.5 cm) in group B (open/Hasson’s) detected on table and repaired laparoscopically using a single layer of seromuscular silk sutures. However, there were no major vascular injuries in either group. The difference between the incidences of omental injury and bowel injury between the two groups was not statistically significant (Table 3).Table 3
Complications in the two groups.
ComplicationGroup A (closed/Veress needle) (n = 200)Group B (open/Hasson’s) (n = 200)P valueIntraoperative gas leak239<0.0001Omental injury200.4987Major vascular injury00—Bowel injury011.00Primary port site superficial surgical site infections were observed only in the open group (4.5%) and were found to be statistically significant (p=0.0036). Port site hematoma was also observed only in the open/Hasson’s group (2.5%), but there was no statistically significant difference with the closed/Veress needle group (Table 4).Table 4
Port site complications.
ComplicationsGroup A (closed/Veress needle) (n = 200)Group B (open/Hasson’s) (n = 200)P valuePort site hematoma050.0609Port site SSI090.0036Port site hernia00—SSI, surgical site infection; NB, port site SSI managed with wound care and antibiotics.
## 4. Discussion
LC is the gold standard operation for gallstone disease. Abdominal access and creation of a pneumoperitoneum are the first important steps in any laparoscopic surgery and carry a potential risk of bowel and vascular injuries. These risks are unique to laparoscopic surgery and are rarely seen in open surgery [8]. Access is associated with injuries to the gastrointestinal tract and major blood vessels, and at least 50% of these major complications occur before commencement of the intended surgery [1, 2]. This complication rate has remained the same over the past 25 years, and hence the technique of primary trocar entry in laparoscopy remains a debatable topic. No single method is suitable for all cases of LC; the entry technique may be individualized in each case depending on the preoperative evaluation and surgical skill.

Today, the closed/Veress needle and open/Hasson's techniques, with their various modifications, are the two widely used methods of primary abdominal access [9]. Hence, we compared these two methods in terms of access time, safety profile, and associated complications.

In our study, we found no major vascular injuries in either group. However, the open/Hasson's group had one ileal injury compared to none in the closed/Veress group. Chapron et al., in their large series, reported bowel and major vessel injury rates of 0.04% and 0.01% with the closed technique (n = 8324) and 0.19% and 0% with the open technique (n = 1562), respectively, and concluded that open laparoscopy does not reduce the risk of major complications during laparoscopic access [10]. In our study, there was no significant difference in major complications between the two groups. On the contrary, Taye et al. [11], in their comparative study of 3000 cases, concluded that the open method was relatively safer as far as major complications are concerned. Bathla et al. [12] found the open technique of primary trocar insertion superior because the Veress method had caused small bowel perforation (2%), whereas in our study there was no small bowel injury with the Veress needle.

We found intraoperative gas leak to be a statistically significant problem in the open/Hasson's method group (19.5%) compared to the closed/Veress method group (1%), similar to the results reported by Juneja et al. [13] and Chotai et al. [14]. This complication was troublesome as it disturbed the tempo of the surgery, with a resultant longer operating time.

On analysis of the access time between the two groups, we found that Veress needle access to the abdomen was significantly quicker than the open/Hasson's method. The majority (63.5%) of accesses in the closed/Veress needle group were achieved in 1–5 minutes compared with only 28% in the open/Hasson's method group. Nawaz [15], in a study of 140 patients, found similar results: the access time for creation of the pneumoperitoneum and insertion of the camera port with the Veress needle (4 ± 1 min) was faster than in the open group (5 ± 1 min). On the contrary, Chotai et al. [14] found that the access time for creation of the pneumoperitoneum and insertion of the primary camera port was longer with the closed method (5.12 ± 2.51 minutes) than with the open method (3.94 ± 2.7 minutes). Hamayun et al. [16] and Juneja et al. [13] also found the open method to be faster than the Veress needle method.

Pawanindra et al. [17] reported periumbilical hematoma in 2.91% (22 of 755) of cases of modified open port insertion.
Nawaz [15] found that 1.3% and 2.6% of their patients developed umbilical port site hematoma and umbilical port site infection, respectively, compared to none in the Veress needle group. Akbar et al. [18] noted a higher incidence of wound infection in the open group, but the difference was not statistically significant. In our study, 2.5% (5 of 200) of patients had port site hematoma and 4.5% (9 of 200) had port site infection in the open/Hasson's group, compared with no such port site complications in the closed/Veress needle group; the excess of port site infections in the open/Hasson's group was statistically significant. These port site complications may be attributed to the larger incisions, greater tissue dissection, and trauma of the open technique compared with its closed/Veress needle counterpart.
## 5. Conclusions
The closed/Veress method of establishing pneumoperitoneum in laparoscopic cholecystectomy is as safe as the open/Hasson's method in terms of major complications and gives quicker access to the abdomen. The open/Hasson's method is associated with more port site complications and troublesome intraoperative gas leaks. Thus, the open technique for the primary peritoneal access port in laparoscopic cholecystectomy does not impart any additional benefit in terms of the safety and morbidity profile of patients.
---
*Source: 1017551-2022-07-12.xml*
# Mechanical Fault Diagnosis for HV Circuit Breakers Based on Ensemble Empirical Mode Decomposition Energy Entropy and Support Vector Machine
**Authors:** Jianfeng Zhang; Mingliang Liu; Keqi Wang; Laijun Sun
**Journal:** Mathematical Problems in Engineering
(2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/101757
---
## Abstract
During the operation of a high voltage circuit breaker, changes in the vibration signal reflect the mechanical state of the breaker. The extraction of vibration signal features directly influences the accuracy and practicability of fault diagnosis. This paper presents an extraction method based on ensemble empirical mode decomposition (EEMD). Firstly, the original vibration signals are decomposed into a finite number of stationary intrinsic mode functions (IMFs). Secondly, the envelope of each IMF is calculated and separated into equal-time segments, forming an equal-time-segment energy entropy that reflects changes in the vibration signal. Finally, the energy entropies serve as input vectors to a support vector machine (SVM) to identify the working state and fault pattern of the circuit breaker. Practical examples show that this diagnosis approach can effectively identify fault patterns of HV circuit breakers.
---
## Body
## 1. Introduction
As an important part of the electric power system, the HV circuit breaker is a key device for controlling and protecting the power network, so its operating reliability is extremely important. In recent years, research on circuit breaker diagnosis methods has grown fast, and many new techniques have been put into practice, among which techniques based on the analysis of vibration signals have gradually become a hot topic [1–3].

The vibration signals produced by a circuit breaker contain a great deal of important information that can be used to evaluate its mechanical state. Through the analysis of vibration signals acquired by a piezoelectric sensor, the running state of the circuit breaker can be diagnosed conveniently and accurately. Signal processing methods such as the wavelet transform [4, 5] and EMD [6, 7] have been used in practice for this purpose. Wavelet analysis has become popular over the past decade as a method of time-frequency representation. In principle, the wavelet transform (WT) uses short windows at high frequencies and long windows at low frequencies, which makes it well suited to nonstationary time series. Nonetheless, wavelet analysis is limited by the fundamental uncertainty principle: time and frequency cannot simultaneously be resolved with the same precision. Moreover, the results of WT analysis depend on the choice of the mother wavelet, which is arbitrary and may not be optimal for the time series under scrutiny. In contrast, the empirical mode decomposition (EMD) [8] adaptively decomposes nonstationary time series into narrow-band components, namely, intrinsic mode functions (IMFs), by empirically identifying the physical time scales intrinsic to the data without assuming any basis functions. Thus, EMD can localize events in both time and frequency, even in nonstationary time series [9–12], and is well suited to nonlinear and nonstationary signals. However, the mode mixing problem of EMD greatly restricts its application in practice. EEMD is repeated EMD with Gauss white noise added in each decomposition; it exploits the uniform statistical distribution of Gauss white noise in the frequency domain [13]. Through this method, EEMD can decompose a signal consistently over different scales, so the mode mixing problem is effectively eliminated.

In this paper, a nonstationary vibration signal is decomposed into a series of IMFs by EEMD. The envelope of each IMF is obtained through the Hilbert transform and separated into equal-time segments, from which the energy entropy of each envelope is computed. These IMF energy entropies form the entropy vector, which serves as the input vector of an SVM for judging circuit breaker working states and fault types. The experimental results indicate that the method combining EEMD-energy entropy and the support vector machine is effective and has many potential applications in practice.
## 2. EEMD Method
EEMD is a relatively new signal processing method; its decomposition steps and principles are as follows [14].

Step 1.
Add random Gauss white noise $n_i(t)$, with zero mean and constant standard deviation, to the original signal $x(t)$ (the standard deviation of the white noise is 0.1–0.4 times that of the original signal):

$$x_i(t) = x(t) + n_i(t). \tag{1}$$

The signal $x_i(t)$ is the signal with the $i$th realization of Gauss white noise added. The white noise directly affects the EEMD decomposition of the signal.

Step 2.
The signal $x_i(t)$ is decomposed into several IMFs $c_{ij}(t)$ and a residue $r_i(t)$, where $c_{ij}(t)$ denotes the $j$th IMF obtained with the $i$th realization of white noise.

Step 3.
Repeat Steps 1 and 2 $N$ times. Since the statistical mean of independent random sequences is zero, averaging the corresponding IMFs over the ensemble eliminates the effect of the added white noise on the true IMFs. The final IMF is written as

$$c_j(t) = \frac{1}{N}\sum_{i=1}^{N} c_{ij}(t), \tag{2}$$

in which $c_j(t)$ is the $j$th IMF component of the original signal obtained by EEMD. When $N$ is large, the sum of the white noise contributions to the IMFs tends to zero, and the EEMD result is written as

$$x(t) = \sum_{j} c_j(t) + r(t), \tag{3}$$

in which $r(t)$ is the final residual component, representing the average trend of the signal. Through EEMD, any signal $x(t)$ can thus be decomposed into several IMFs and a residual component. The intrinsic mode components $c_j(t)$ ($j = 1, 2, \ldots$) represent elements of the signal from high to low frequency bands; the frequency content of each band differs and changes with the vibration signal $x(t)$.

Figure 1 shows a vibration signal in the normal state. By EEMD, the signal yields eight major components and a residual component, as shown in Figure 2: the nonstationary vibration signal of the normal state is decomposed into a number of stationary IMF components, each containing a different time scale.

Figure 1
Standard signal of normal state.

Figure 2
Results of EEMD decomposition.
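As a concrete illustration of the procedure above, the following is a minimal sketch in Python using the third-party PyEMD package (`pip install EMD-signal`); the package choice, the ensemble size, and the noise width are illustrative assumptions, not values from the paper.

```python
import numpy as np
from PyEMD import EEMD  # third-party package "EMD-signal" (an assumption)

# Synthetic nonstationary signal standing in for a close-brake vibration record.
t = np.linspace(0, 1, 2048)
x = np.sin(2 * np.pi * 15 * t) + 0.5 * np.sin(2 * np.pi * 120 * t) * (t > 0.4)

# Ensemble EMD, Eqs. (1)-(3): add white noise (noise_width times the signal's
# std) to each copy, decompose each noisy copy by EMD, and average the IMFs.
eemd = EEMD(trials=100, noise_width=0.2)
imfs = eemd.eemd(x, t, max_imf=8)
print(imfs.shape)  # (n_imfs, len(x)): components from high to low frequency
```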
## 3. EEMD-Energy Entropy
### 3.1. Extraction of the Signal Envelope
Mutation information of a signal is often contained in its envelope. When the operating mechanism of the circuit breaker acts, the high frequency components produced by the impacts can be viewed as the carrier of the envelope signal. Therefore, the Hilbert method of envelope extraction is very effective for mechanical fault diagnosis.

For a real signal $x(t)$, the Hilbert transform is defined as

$$\hat{x}(t) = \frac{1}{\pi t} * x(t) = \frac{1}{\pi}\int_{-\infty}^{+\infty} \frac{x(\tau)}{t-\tau}\, d\tau. \tag{4}$$

The analytic signal of $x(t)$ is then

$$g(t) = x(t) + j\hat{x}(t), \tag{5}$$

and its amplitude is

$$A(t) = \sqrt{x^2(t) + \hat{x}^2(t)}, \tag{6}$$

so $A(t)$ is the envelope of $x(t)$.
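Equations (4)–(6) correspond directly to the standard analytic-signal construction; a minimal sketch, assuming NumPy and SciPy:

```python
import numpy as np
from scipy.signal import hilbert

def envelope(x):
    """Envelope A(t) per Eqs. (4)-(6).

    scipy.signal.hilbert returns the analytic signal g(t) = x(t) + j*x_hat(t)
    of Eq. (5); its magnitude is the envelope of Eq. (6).
    """
    return np.abs(hilbert(x))
```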
### 3.2. The Application of Entropy
Entropy is an information measure describing the complexity or irregularity of a system. Many estimators of entropy in complex systems have been proposed, such as the Kolmogorov entropy, approximate entropy, and Shannon information entropy [15]. This paper uses the Shannon information entropy, which reflects the uniformity of a probability distribution: the greater the entropy value $H$, the more uniform the distribution and the greater the disorder of the information. It can therefore also describe the degree of uncertainty of the system [16–18].

Let an information system have $N$ random information sources $x_1, x_2, \ldots, x_N$, and let the probability of each source appearing in the whole system be $p_1, p_2, \ldots, p_N$, respectively. Its information entropy is then defined as

$$H = -\sum_{i=1}^{N} p_i \log p_i. \tag{7}$$

Distinguishing the normal state from fault states is the essence of circuit breaker fault diagnosis. A fault can be regarded as a mutation of the normal state. Based on this property, this paper proposes the equal-time segment approach to entropy extraction. The principle is shown in Figure 3.

Figure 3
Segments with equal time.

In Figure 3, Signal 1 is the envelope of a normal signal and Signal 2 is the envelope of a fault signal in which the mutation events are delayed. Both signals are divided into the same three equal-time segments: Seg1, Seg2, and Seg3. Because Signal 2 differs from Signal 1, the energies of Seg1, Seg2, and Seg3 of Signal 2 differ from the corresponding segment energies of Signal 1; that is, the energy distribution has changed. The difference between Signal 1 and Signal 2 can therefore be transformed into a change in the uniformity of the energy distribution over the segments.
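A toy numerical illustration of this principle, with hypothetical envelopes (a minimal sketch assuming NumPy): a delayed event redistributes energy across the fixed segments, which the Shannon entropy of Eq. (7) picks up.

```python
import numpy as np

def segment_energy_distribution(env, n_seg=3):
    """Normalized energies of equal-time segments of an envelope."""
    Q = np.array([np.sum(s**2) for s in np.array_split(env, n_seg)])
    return Q / Q.sum()

def shannon_entropy(p):
    p = p[p > 0]                      # convention: 0 * log 0 = 0
    return -np.sum(p * np.log(p))     # Eq. (7)

t = np.linspace(0, 1, 300)
signal1 = np.exp(-((t - 0.15) / 0.05) ** 2)  # event well inside Seg1
signal2 = np.exp(-((t - 0.34) / 0.05) ** 2)  # delayed event straddling Seg1/Seg2

for name, s in [("Signal 1", signal1), ("Signal 2", signal2)]:
    p = segment_energy_distribution(s)
    print(name, np.round(p, 3), round(shannon_entropy(p), 3))
```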
### 3.3. The Extraction Steps of Entropy
Taking the extraction of the EEMD-energy entropy of a sampled vibration signal as an example, the specific steps are as follows.

Step 1. The vibration signal is first denoised by wavelet soft thresholding.

Step 2. The denoised signal is decomposed by EEMD, and the top 8 main intrinsic mode functions (IMFs) are selected.

Step 3. Obtain the analytic signal of each IMF with the Hilbert transform.

Step 4. Extract the envelope of each analytic signal.

Step 5. Divide the envelope of each signal equally into $M$ sections along the time axis, and calculate the energy of each segment as

$$Q_{ki} = \int_{t_1}^{t_2} A(t)^2\, dt, \tag{8}$$

in which $i = 1, 2, \ldots, M$, $k = 1, 2, \ldots, 8$, and $t_1$ and $t_2$ are the starting and stopping times of the $i$th segment.

Step 6. Normalize the segmented energies of the signal envelope:

$$q_{ki} = \frac{Q_{ki}}{\sum_{i=1}^{M} Q_{ki}}. \tag{9}$$

Step 7. According to the basic theory of entropy, the EEMD-energy entropy of the envelope $A(t)$ is defined as

$$H_k = -\sum_{i=1}^{M} q_{ki} \lg q_{ki}. \tag{10}$$

Step 8. Finally, the EEMD-energy entropy vector is

$$T = (H_1, H_2, \ldots, H_8). \tag{11}$$

When the EEMD-energy entropy is used for fault detection, the distribution of normal signals is in effect treated as uniform, whereas the distribution of test signals under fault conditions is not uniform. Because entropy measures the degree of heterogeneity of a signal, the EEMD-energy entropy reflects the degree of deviation of a fault state from the normal state.
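Putting Steps 1–8 together, here is a minimal end-to-end sketch, assuming PyWavelets and PyEMD as above; the wavelet (db4), the decomposition level, and the number of segments M = 10 are illustrative choices, as the paper does not specify them.

```python
import numpy as np
import pywt                      # PyWavelets, for Step 1
from scipy.signal import hilbert
from PyEMD import EEMD           # as in the EEMD sketch above (an assumption)

def wavelet_denoise(x, wavelet="db4", level=4):
    """Step 1: soft-threshold wavelet denoising (universal threshold)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def eemd_energy_entropy(x, n_imf=8, n_seg=10):
    """Steps 2-8: return the entropy vector T of Eq. (11) for one signal."""
    x = wavelet_denoise(x)
    imfs = EEMD(trials=100).eemd(x, max_imf=n_imf)[:n_imf]   # Step 2
    T = []
    for imf in imfs:
        A = np.abs(hilbert(imf))                   # Steps 3-4: envelope A(t)
        # Discrete segment energies approximating Eq. (8); the constant
        # sampling interval cancels in the normalization of Eq. (9).
        Q = np.array([np.sum(s**2) for s in np.array_split(A, n_seg)])
        q = Q / Q.sum()                            # Eq. (9)
        q = q[q > 0]
        T.append(-np.sum(q * np.log10(q)))         # Eq. (10), lg = log10
    return np.array(T)                             # Eq. (11)
```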
## 4. SVM
SVM is a promising classifier that minimizes the empirical classification error while maximizing the margin, by determining a separating hyperplane that distinguishes different classes of data [19]. The basic idea of the support vector machine is to construct the optimal separating hyperplane, which not only classifies all samples correctly but also has the maximum distance to the nearest training points. To generalize the optimal separating hyperplane method, Cortes and Vapnik introduced nonnegative slack variables $\varepsilon_i$ and a penalty factor $C$; the $\varepsilon_i$ measure the misclassification errors, and $C$ is a given value subject to the constraints of (12). The constraint conditions of the hyperplane are written as

$$y_i(\omega \cdot x_i + b) \ge 1 - \varepsilon_i, \quad i = 1, 2, \ldots, l. \tag{12}$$

Hence, the optimal separating hyperplane is obtained by minimizing

$$\phi(\omega, \varepsilon) = \frac{1}{2}\,\omega \cdot \omega + C\sum_{i=1}^{l} \varepsilon_i. \tag{13}$$

By introducing a kernel function $K(x_i, x_j)$, the nonlinear problem can be transformed into a linear problem in a high-dimensional space. The corresponding decision function is written as

$$f(x) = \mathrm{sgn}\left(\sum_{SV} a_i y_i K(x_i, x) + b\right). \tag{14}$$

The classification performance of the SVM is superior to that of neural network classifiers in fault diagnosis with small samples. We take $T$ as the input vector of the SVM, choose the radial basis function (RBF) kernel, and use the strategy of hierarchical SVMs (H-SVMs) [20, 21] for the mechanical fault diagnosis of circuit breakers.

H-SVMs admit a variety of classification structures; this research chooses the skewed-tree structure shown in Figure 4. The H-SVMs classify the four states, namely, the normal state C1, the lack-of-lubrication state C2, the foundation-bolt-looseness state C3, and the energy-storage-spring-shed state C4, through three levels of training and recognition by three SVMs. First, one SVM performs the first-level classification, separating C4 from C1, C2, and C3; at the second level, a second SVM separates C3 from C1 and C2; finally, a third SVM performs the third-level classification, separating C1 from C2.

Figure 4
Classification tree diagram for H-SVMs.
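A minimal sketch of this skewed-tree cascade, assuming scikit-learn for the individual RBF-SVMs; the class and variable names are our own, and the fixed C and gamma values are placeholders (the paper tunes C and g with a genetic algorithm).

```python
import numpy as np
from sklearn.svm import SVC

class HSVM:
    """Skewed-tree hierarchical SVM: peel off C4, then C3, then split C1/C2."""

    def __init__(self, C=10.0, gamma="scale"):
        # Three binary RBF-SVMs, one per level of the tree in Figure 4.
        self.stages = [SVC(kernel="rbf", C=C, gamma=gamma) for _ in range(3)]

    def fit(self, X, y):
        """X: entropy vectors T; y: integer labels 1..4 for states C1..C4."""
        X, y = np.asarray(X), np.asarray(y)
        self.stages[0].fit(X, y == 4)              # level 1: C4 vs {C1, C2, C3}
        keep = y != 4
        self.stages[1].fit(X[keep], y[keep] == 3)  # level 2: C3 vs {C1, C2}
        keep &= y != 3
        self.stages[2].fit(X[keep], y[keep] == 2)  # level 3: C2 vs C1
        return self

    def predict_one(self, x):
        x = np.asarray(x).reshape(1, -1)
        if self.stages[0].predict(x)[0]:
            return 4
        if self.stages[1].predict(x)[0]:
            return 3
        return 2 if self.stages[2].predict(x)[0] else 1
```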
## 5. Experiment and Analysis
The vibration signals of the normal state, the lack-of-lubrication state, the foundation-bolt-looseness state, and the energy-storage-spring-shed state were collected from a type ZW32-12 vacuum circuit breaker in the laboratory, as shown in Figure 5. For each state, 20 groups of close-brake vibration signals were collected, 10 for SVM training and 10 for testing. Applying the EEMD-energy entropy method to the close-brake vibration signals of each state yields the 8-dimensional entropy vector $T$ of each sample; a part of the entropy values is shown in Table 1.

Table 1
Vectors of characteristic entropy.
| State | H1 | H2 | H3 | H4 | H5 | H6 | H7 | H8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Normal | 1.38995 | 1.2863 | 1.31565 | 1.22686 | 1.19784 | 1.08202 | 1.1478 | 1.35075 |
| Normal | 1.3758 | 1.37933 | 1.32379 | 1.18654 | 1.19728 | 1.01396 | 1.18778 | 1.32624 |
| Normal | 1.42401 | 1.2713 | 1.32004 | 1.25894 | 1.23718 | 1.17143 | 0.97525 | 1.39509 |
| Fault I | 0.89786 | 0.92083 | 0.97844 | 1.11754 | 0.96972 | 1.20403 | 1.45528 | 1.89058 |
| Fault I | 0.92882 | 0.86088 | 0.91616 | 0.92303 | 1.03207 | 1.19891 | 1.58584 | 1.85211 |
| Fault I | 0.83574 | 0.91725 | 0.96161 | 1.19681 | 1.03097 | 1.23316 | 1.50922 | 2.05601 |
| Fault II | 1.59699 | 1.3049 | 1.19265 | 0.90063 | 0.96383 | 1.00915 | 1.02713 | 1.67725 |
| Fault II | 1.57129 | 1.28567 | 1.22408 | 1.06797 | 0.97543 | 0.97423 | 0.95939 | 1.66329 |
| Fault II | 1.54017 | 1.24947 | 1.23603 | 1.03721 | 1.09331 | 1.0717 | 1.07941 | 1.76371 |
| Fault III | 1.13835 | 0.98421 | 1.08779 | 1.09961 | 1.37122 | 1.49651 | 1.93909 | 2.07203 |
| Fault III | 1.16172 | 1.02966 | 0.85995 | 0.99001 | 1.32936 | 1.521 | 1.93032 | 1.94298 |
| Fault III | 1.19095 | 0.81114 | 1.04993 | 1.02732 | 1.31641 | 1.45068 | 1.93723 | 2.04867 |

Figure 5
Vibration signals of vacuum circuit breaker.

The entropy vectors thus obtained are then input into the H-SVMs for classification and recognition. The radial basis kernel function is applied in this experiment. The penalty factor $C$ and the kernel parameter $g$ are the two parameters that most influence the accuracy and generalization ability of the SVM diagnosis. Common parameter optimization methods include the genetic algorithm, the grid method, and particle swarm optimization; this experiment uses the genetic algorithm to determine the optimal $C$ and $g$. Training results are shown in Figure 6 and recognition results in Table 2. Figure 6 shows that the four training signals are clearly separated, and Table 2 shows that the recognition accuracy is very high, reaching one hundred percent for three of the four state types.

Table 2
Recognition results of H-SVMs.
| Type of state | Recognition results | Accuracy (%) |
| --- | --- | --- |
| Normal | C1, C1, C1, C1, C1, C1, C1, C1, C1, C1 | 100 |
| Fault I | C2, C2, C2, C2, C2, C2, C2, C2, C2, C2 | 100 |
| Fault II | C3, C3, C3, C3, C3, C1, C3, C3, C3, C3 | 90 |
| Fault III | C4, C4, C4, C4, C4, C4, C4, C4, C4, C4 | 100 |

Figure 6
Training results of H-SVMs.
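The paper selects C and g with a genetic algorithm; as a simpler stand-in for readers, here is a cross-validated grid search over the same two RBF parameters (a minimal sketch assuming scikit-learn; `X_train` and `y_train` are hypothetical names for the matrix of 8-dimensional entropy vectors T and their state labels).

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Search the two parameters named above: penalty factor C and RBF width gamma
# (the "g" of the paper). Grid values are illustrative.
param_grid = {"C": [0.1, 1, 10, 100, 1000], "gamma": [0.01, 0.1, 1, 10]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
# search.fit(X_train, y_train)            # 8-dim entropy vectors, labels C1..C4
# print(search.best_params_, search.best_score_)
```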
## 6. Conclusion
Changes in the vibration signal of an operating circuit breaker reflect its operating state. This paper presented an equal-time segment approach based on EEMD that captures changes in time, frequency, and energy. In addition, the SVM is effective for circuit breaker state recognition problems involving small samples, nonlinearity, and high dimensionality. Experimental results show that the combination of EEMD, energy entropy, and SVM gives good diagnostic results for typical fault and normal states of the circuit breaker.
---
*Source: 101757-2015-07-09.xml*
# Cesarean Section in the Delivery Room: An Exploration of the Viewpoint of Midwives, Anaesthesiologists, and Obstetricians
**Authors:** Jansegers Jolien; Jacquemyn Yves
**Journal:** Journal of Pregnancy
(2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1017572
---
## Abstract
Aim. To explore the attitude and vision of midwives, anaesthesiologists, and obstetricians concerning a dedicated operating room for cesarean sections within the delivery ward versus cesarean sections in the general operating room. Method. A descriptive qualitative study using a constructive paradigm. Face-to-face semistructured interviews were performed in 3 different hospitals: one without an operating theatre within the delivery ward, one with a recently built cesarean section room within the delivery ward, and one with a long-standing tradition of cesarean section in the delivery room. Interviews were analysed thematically. Results. Three themes were identified: organization, role of the midwife, and safety. Although identical protocols for the degree of emergency of a cesarean section are used, infrastructure and daily practice differ between hospitals. Logistic support, medical and midwife staffing, and hospital infrastructure were systematically mentioned as needing improvement. Performing cesarean sections within the delivery ward was considered an improvement for the patient experience. Midwives need a clear, new job description and delineation of tasks, and they mention a lack of formal education in assisting surgical procedures. To increase patient safety, continuous education and communication are considered necessary. Conclusion. A detailed job description and education of all those involved in cesarean section at the delivery ward are necessary to improve patient safety. The patient experience is improved, but our knowledge of this is hampered by a lack of studies.
---
## Body
## 1. Introduction
The cesarean section rate is rising all over the world [1]. The process of becoming a parent is a major life event of a multidimensional, complex, and unique character, influenced by the surroundings in which it takes place, and a continuous search to improve this experience is necessary [2]. A reorganization of traditional obstetrics in a multidisciplinary context, including a change of location, can be part of this improvement.

An essential part of improving the experience of cesarean section consists in avoiding separation of mother and child, including skin-to-skin contact immediately after birth in the operating theatre [3, 4]. Frederick et al. concluded that women highly appreciated not being separated from the child during and after cesarean section [5].

A dedicated operating room for cesarean section within the delivery ward, not connected to the main operating block, can help implement continuous skin-to-skin contact and improve the patient experience [6]. Such a specialized cesarean section operating room is not widely available. While constructing a new perinatal unit, we performed a small survey in 2015; only 3 of 88 (3.4%) delivery units in Flanders (Belgium) had one available. The Belgian Association of Regional Association was also unable to provide general advice on this point, as major discussions arose during the development of their guideline, resulting in a very general statement that “the location for an elective or urgent cesarean section is variable from one hospital to the other” [7]. Furthermore, until 2018 midwives in our country did not receive systematic training in assisting anesthesiologists or gynaecologists during cesarean section; since 2018, this has by law been part of midwifery training. This situation prompted us to further explore the actual views of anesthesiologists and midwives on cesarean section in the delivery room.

The place where delivery takes place influences not only the way the mother-to-be lives through this major life event but also the attitude of midwives, and thus should be taken into consideration when trying to improve the care given to parturient women [8]. On the other hand, for an anesthesiologist, leaving the trusted environment of the operating theatre can be a threatening experience or can provoke uncertainty.

The aim of the current research is to analyse the views of midwives, obstetricians, and anaesthesiologists on performing cesarean sections outside the main operating block and in the delivery ward.
## 2. Methods
### 2.1. Design
We performed descriptive qualitative research using a constructive paradigm. The paradigm states that every person has his or her own views and interpretation of reality; no interpretation is considered superior to another. Face-to-face semistructured interviews were performed and recorded. The study population consisted of midwives, anaesthesiologists, and obstetricians from three hospitals in the region of Antwerp, Belgium. All 3 hospitals are comparable as to the number of deliveries, between 1500 and 2000 per year, but differ in the infrastructure related to a cesarean section room in the delivery ward. The first hospital (Sint Augustinus Hospital) has a long-standing tradition of cesarean sections in the delivery room; the second hospital (Antwerp University Hospital (UZA)) had started performing cesarean sections in a new operating theatre in the delivery suite 6 months before our study; the third hospital (Middelheimziekenhuis) has no operating theatre in the delivery ward, and all cesarean sections are performed in the main operating theatre. In every hospital at least one midwife, one anaesthesiologist, and one gynaecologist were interviewed. Every interview followed a previously determined scenario covering 3 topics: cesarean section, location, and working experience. Respondents in each hospital were selected by sending an email to all midwives, obstetricians, and anaesthesiologists working at the obstetric department. Data collection started in December 2016 and ended in April 2017.

The scenario for the semistructured interviews included, after presentation of the interviewer (JJ), a short explanation that the study aimed at describing the viewpoint of professionals regarding cesarean section within the delivery ward. First, basic demographic questions were asked (which hospital do you work in, which department, function, and years of experience within the job). We then asked what the local standard protocol for cesarean sections was and what the participant's personal experience with it was. In case an operating theatre was present in the delivery ward, we asked when it was used (always, mostly, only in emergencies, or any other description) and whether only sections were performed there or also other procedures. The next questions concerned personal work experience with the advantages and disadvantages of this location, personal experience in cooperating with other disciplines, any other remark or recommendation one would like to make, and how the respondent thinks pregnant women and their families feel about the location.
### 2.2. Analysis
All interviews were recorded electronically and coded. Analysis was based on a thematic approach. The interviews were transcribed and a code book was developed. To avoid bias, the first interview was coded not only by the researcher but also by an independent second coder; the code book was developed in consensus and is given in Table 1.

Table 1
Codetree.
- **Descriptive codes:** professional experience; use of labor ward operating theatre; role of the midwife; interprofessional cooperation; advantages of labor ward operating theatre; disadvantages of labor ward operating theatre; recommendation for use of labor ward operating theatre; other remarks on labor ward operating theatre; location of the general operating theatre; quality and safety
- **Interpretative codes:** role of the midwife; role of the anesthesiologists; interprofessional cooperation; use of labor ward operating theatre; advantages and disadvantages of labor ward operating theatre; other remarks on labor ward operating theatre; patient and work safety
- **Subthemes:** use of labor ward operating theatre; advantages and disadvantages of labor ward operating theatre; remarks and recommendations on the use of this space; role of the midwife; interprofessional cooperation; working safely
- **Themes:** role of the midwife; organization; safety

The ethical committees of each hospital reviewed and approved this research. Written informed consent was obtained from each respondent before the interview.
## 3. Results
Four anaesthesiologists, three obstetricians, and three midwives were interviewed; their demographic data are listed in Table 2. Qualitative thematic analysis (see Table 1) resulted in three main themes: organization, the role of the midwife, and safety.

Table 2
Demographic characteristics of respondents.
| Function | Sex | Years of professional experience |
|---|---|---|
| Midwife | Female | 7 |
| Midwife | Female | 27 |
| Midwife | Female | 25 |
| Obstetrician | Female | 23 |
| Obstetrician | Male | 20 |
| Obstetrician | Female | 21 |
| Anesthesiologist | Female | 27 |
| Anesthesiologist | Male | 3 |
| Anesthesiologist | Female | 10 |
| Anesthesiologist | Male | 35 |

In both hospitals that perform cesarean sections in the delivery suite, organization and location were similar. The dedicated cesarean section room is next to the delivery room and connected to a neonatal reanimation room. After cesarean section the patient is observed for one hour in a room with maternal monitoring. Cesarean sections are coded by colour according to degree of emergency: red, orange, yellow, and green, from highly urgent to elective. The colour system is highly appreciated as it improves communication. Quote: “we have introduced this colour system as an urgent cesarean section was quite differently experienced by obstetricians versus anaesthesiologists; code red means immediately, that is clear for everyone, no delay” (an anesthesiologist). In the hospital that does not perform cesarean sections in the delivery ward, this was a deliberate choice because the anaesthesiologists considered it an unsafe practice.

In every interview it was clear that medical and midwife staffing was crucial and difficult because it was not supported by the hospital. Quote: “first it was decided that a cesarean section room was going to be built in the new delivery ward, but then hospital management did not allow for extra midwives or anaesthesiologists” (a midwife). All respondents mentioned that extra staffing is necessary. Dedicated anaesthesiologists for the delivery ward are considered necessary, especially in case of an emergency. Quote: “it should be possible to have an anaesthesiologist available all day and an extra midwife for planned and unplanned cesarean sections, but in our context this is financially impossible” (anaesthesiologist). The location of the obstetric department and the distance to the general operating block are considered crucial factors for patient safety. Quote: “in my opinion a cesarean section room can be organised autonomously but it should be localised next to the central operating block” (anaesthesiologist). On the other hand, others considered an operating theatre within the maternity unit safer when the distance to a central operating block is long: “I think in all hospitals where the distance between the maternity and operating block is more than 100 m, there should be an operating theatre within the maternity” (anaesthesiologist).

All respondents consider a dedicated cesarean section room within the delivery ward a positive experience for the patient, as mother and child are not separated and can stay together all the time; the recovery period, during which mother and child are often not together, can also be eliminated. This improves skin-to-skin contact, first breastfeeding, and mother-child bonding. Quote: “all patients are happy, especially that they do not have to go to the recovery room and they are together with the baby continuously now, also the partner can stay with the baby and the woman all the time; patients really appreciate that we perform cesarean sections here in our department” (a midwife).
In the hospitals where it is not possible to perform a cesarean section in the delivery ward, alternative ways to improve the experience are sought; in the recovery room, an isolated space where the mother can stay with her baby was created, but financial limitations made it impossible for a midwife to stay with the mother.

It is clear that midwives do not feel prepared by their training to assist in operating theatre cesarean sections. In one of the hospitals, where cesarean sections were only recently performed at the maternity unit, operating nurses were still assisting, and this was considered a source of frustration; obstetricians and anaesthesiologists know exactly what to do in the operating theatre, whereas for the midwives this was a new experience and their exact job and responsibility were not clearly defined. Tension arises when an operating nurse enters the cesarean section room. Quote: “midwives are not nurses and have no experience with working in an operating room, they have not received any formation for that and during interventions really feel the difference between an operating nurse and a midwife” (gynaecologist); “we need more training than we have received until now; if anything goes wrong I would not know what to do; I do not feel certain enough to stand alone as a midwife in the operating theatre” (midwife). Midwives find it difficult, on the one hand, to be a technical assistant to the anaesthesiologist or the obstetrician and, on the other hand, to be present for the parents and take care of the baby. Training and communication are mentioned as highly important, especially in emergency situations.

Logistics are also considered of high importance; after each cesarean section all the materials should be checked, and discussions frequently arise regarding who has to perform this. Anaesthesiologists who are not used to working on the obstetric department mentioned that they feel unsafe in a room where they do not feel as at home as in their usual operating theatre.
## 4. Discussion
From these interviews it becomes clear that introducing cesarean sections into the delivery ward makes it necessary for midwives to receive extra training and formation to assist surgery, as in Belgium, and in most countries, a midwife is not a nurse. In the formation of midwives the physiology of labor and delivery is generally considered crucial; it can be questioned whether, in a changing world, it would not be better to train midwives systematically in assisting cesarean sections. The competencies that midwives should achieve are changing [9], and the changing practice creates a widening gap between theory at school and practice in the delivery room [10].

Anaesthesiologists are much more critical when it comes to patient safety; midwives and obstetricians have more interest in the positive experience of the patient. This demonstrates the technical medical view of anaesthesiologists versus the more physiologic view of midwives and obstetricians. To realize safety, teamwork is of the utmost importance, especially in obstetric care [11]; this can be optimized by team training and education, especially simulation [12].

Literature on a dedicated cesarean section room within a delivery unit is scarce. Graham et al. describe the experience in an academic hospital where patients planned for elective cesarean experienced waiting times and difficulty in transport to the operating theatre [2]. To improve the experience, patients were prepared for surgery in the same room they would stay in after surgery; family could stay with them and they no longer stayed in a recovery room, avoiding separation of mother and child. This change of policy was generally highly appreciated by the patients. Kasagi et al. describe the start of an operating theatre in a perinatal unit, away from the central operating block, a situation comparable to one of the hospitals we describe in this paper [6]. Before starting surgery in this new unit, formation was given to the medical staff and the midwives by an experienced anaesthesiologist and operating nurse; in this hospital, logistics and extra medical and midwife staff were provided before implementing the change. The interval between the decision to perform a cesarean section and the birth of the baby was diminished. Kasagi et al. conclude that the lack of staff makes this dedicated cesarean section room unavailable around the clock. Patients find it less stressful.

In our descriptive study the same limitations are mentioned, lack of staff and lack of training, as well as the same advantages, especially the positive experience for the patient.

Our study is limited by its qualitative and descriptive setup and by the small number of interviews. We do not know whether the opinions of the anaesthesiologists, obstetricians, and midwives in our sample can be generalised, but they do seem to be in line with what has been published. We did not interview patients or family.

Discussions on the safety of the concept of an isolated cesarean section room seem to be frequent between anaesthesiologists and obstetricians; it is remarkable that almost no studies are available [13–15].

When introducing a dedicated operating room for cesarean sections within the delivery ward, we recommend that this process be developed together with anesthesiologists and midwives from the very start. For the midwives, intense coaching by trained operating nurses who progressively hand over responsibility to the midwives is advised.
A rotation period in which midwives gain basic experience in the general operating room, in case this was lacking during basic training, is considered worthwhile. A dedicated team of obstetric anesthesiologists would most probably be the ideal situation, but this can be difficult to realize in most centres. In any case, involving anesthesiologists in the organization of the delivery ward operating theatre, so that they feel at ease with the location and material, is necessary. Regular phantom training with the multidisciplinary team can also increase the ease of working.
## 5. Conclusion
Advantages for the patient and family are often mentioned as arguments in favour of cesarean section within the delivery department; safety issues are unclear. Transposing surgery from the operating theatre to the delivery ward necessitates a new competency profile of the midwife. More systematic research on this subject is necessary to guide us further in the organization of obstetrics in hospitals.
---
*Source: 1017572-2018-09-27.xml*
# A Single Centre Experience of Day Case Laparoscopic Cholecystectomy Outcomes by Body Mass Index Group
**Authors:** Kirk Bowling; Samantha Leong; Sarah El-Badawy; Erfan Massri; Jaideep Rait; Jay Atkinson; Gandrapu Srinivas; Stuart Andrews
**Journal:** Surgery Research and Practice
(2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1017584
---
## Abstract
Aim. The purpose of this study was to evaluate whether patients with a high BMI can undergo safe day case LC for cholecystitis compared to groups of patients with a lower BMI. Setting. NHS District General Hospital, UK. Methods. A retrospective review of 2391 patients who underwent an attempted day case LC between 1 January 2009 and 15 August 2015 was performed. Patients were divided into five groups depending on their BMI. Inclusion criteria were patients undergoing elective day case laparoscopic cholecystectomy with cholecystitis on histology. The endpoints were complications requiring readmission and postoperative length of stay (LOS). Results. There were 2391 LCs performed in the time period, of which 1646 were eligible for inclusion. These LCs comprised 280 (17.0%), 620 (37.7%), 438 (26.6%), 213 (12.9%), and 95 (5.8%) patients in the groups with BMI values of 18.5–24.9, 25–29.9, 30–34.9, 35–39.9, and >40, respectively. Average BMI was 30.0 (±5.53, range 19–51) with an average postoperative LOS of 0.86 days, and there was no difference between the BMI groups. The overall complication rate was 4.3%, with no significant difference between BMI groups. Conclusions. Increased BMI was not associated with worse outcomes after day case LC.
---
## Body
## 1. Background
Laparoscopic cholecystectomy (LC) has become the standard of care for the treatment of symptomatic gallbladder disease [1]. Compared to traditional open cholecystectomy (OC), LC is associated with lower morbidity and mortality, shorter length of hospital stay, and quicker return to normal activities [2].

Obesity in the United Kingdom is a growing problem and is one of the leading causes of preventable death in the UK. Adult obesity rates have almost quadrupled in the last 25 years, with 23.1% of British people obese as of 2012 and one-third of all UK males predicted to be obese by 2030 [3, 4].

Many studies have looked for factors associated with higher risks of conversion and complications. Factors that have been identified include increased age, time of day, male gender, increased acuity of illness, and many others [5–8].

There is evidence, to which we aim to add, that day case laparoscopic cholecystectomy can be performed safely in patients with a high BMI (Body Mass Index) without a higher readmission or complication rate; most of the current evidence is from the United States of America [9].

The British Association of Day Surgery already recommends that, provided adequate training, equipment, and staff are present, patients with an increased BMI should be operated on in the day case setting. This is because factors such as early mobilization and short anesthetics are of great benefit to these groups of patients in their recovery, and obesity per se is not a contraindication to day case surgery [10].

With the increased prevalence of obesity and increasing experience with managing such patients, patients with a higher BMI are being operated on more routinely in the District General Hospital setting for day case laparoscopic cholecystectomy. However, to the authors' best knowledge, many units have different policies with regard to obesity in the day case unit.

Surgery in the obese has traditionally been labeled as high risk. We hypothesized that obesity may not be an independent significant risk factor leading to increased conversion, complication, and readmission rates.

With an increasing proportion of patients having a Body Mass Index (BMI) of more than 30, our study sheds light on the impact of an increased BMI on day case LC within the District General Hospital setting.
## 2. Aim
The primary outcome of this study is the readmission rate following day case LC. Secondary outcomes include LOS (length of stay), conversion, complication rate, and mortality. These outcomes were measured with the aim of concluding whether day case LC can be performed safely in patients with higher BMIs in a District General Hospital setting.
## 3. Methods
A retrospective review of a prospectively maintained database identified 2391 patients who underwent an attempted LC between 1 January 2009 and 15 August 2015. This database included standard demographic data such as height, weight, BMI, and patient identifiers. Each patient case was cross-referenced with the hospital episode statistics database and the theatre and pathology databases, allowing compilation of data for each patient. Patients were excluded if classified as an emergency or if the indication was not gallstone disease. Inclusion and exclusion criteria can be seen in Table 1.

Table 1
Inclusion and exclusion criteria.
| Inclusion criteria | Exclusion criteria |
|---|---|
| (1) Day case laparoscopic cholecystectomy | (1) Emergency cholecystectomy |
| (2) Any BMI, any sex | (2) Planned open cholecystectomy |
| (3) Cholecystitis | (3) CBD exploration |
| | (4) Previous ERCP/PTC |
| | (5) Empyema, hepatobiliary cancer |
| | (6) No recorded BMI |
BMI: Body Mass Index; CBD: Common Bile Duct; PTC: Percutaneous Transhepatic Cholangiogram; ERCP: Endoscopic Retrograde Cholangiopancreatography.

Cholecystitis was the main inclusion criterion, chosen to decrease variability in the difficulty of the operation and heterogeneity within the data.

Patients were divided into five groups depending on their BMI: 18.5–24.9, 25–29.9, 30–34.9, 35–39.9, and >40. The primary endpoints were conversion rates, complication rates, and postoperative length of stay. Complications were defined as any event requiring a procedure or hospital admission; surgical site infections not requiring hospital admission were excluded. A hospital admission was any readmission to hospital within 30 days after the procedure, not including prolonged hospital stay from the day case itself.

A small number of patients were identified as readmissions but found to have an illness separate from the original surgery; for example, one patient was readmitted electively for removal of a suspected melanoma.

Pearson Chi-Squared and ANOVA tests were performed to check for statistical significance. SPSS software version 19.0 (SPSS, Chicago, IL, USA) was used for statistical computation, and P<0.05 was considered significant.
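To make the grouping and testing procedure concrete, here is a minimal sketch, assuming Python with numpy/scipy in place of the SPSS v19.0 workflow the authors actually used. The counts come from Tables 3 and 5 below; the per-patient LOS values are simulated placeholders rather than study data, and the helper `bmi_group` is an illustrative name, not from the paper.

```python
# A sketch of the significance testing described above, assuming Python/scipy
# rather than the authors' SPSS workflow. Counts follow Tables 3 and 5; the
# per-patient LOS values are simulated stand-ins, not the study's raw data.
import numpy as np
from scipy import stats

def bmi_group(bmi: float) -> str:
    """Assign a patient to one of the five study BMI groups (lowest BMI seen was 19)."""
    if bmi < 25:
        return "18.5-24.9"
    if bmi < 30:
        return "25-29.9"
    if bmi < 35:
        return "30-34.9"
    if bmi < 40:
        return "35-39.9"
    return ">40"

print(bmi_group(30.0))  # "30-34.9": class I obesity starts at 30

# Complications per BMI group (Table 5) and group sizes (Table 3).
complications = np.array([9, 31, 15, 7, 3])
group_sizes = np.array([280, 620, 438, 213, 95])

# Pearson Chi-Squared test on the 2 x 5 contingency table of
# complication vs. no complication across BMI groups.
contingency = np.vstack([complications, group_sizes - complications])
chi2, p_chi2, dof, _ = stats.chi2_contingency(contingency)
print(f"complications by BMI group: chi2={chi2:.2f}, dof={dof}, p={p_chi2:.3f}")

# One-way ANOVA on postoperative LOS across the five groups
# (simulated right-skewed stays around the reported 0.86-day mean).
rng = np.random.default_rng(0)
los_by_group = [rng.exponential(0.86, size=n) for n in group_sizes]
f_stat, p_anova = stats.f_oneway(*los_by_group)
print(f"LOS by BMI group: F={f_stat:.2f}, p={p_anova:.3f}")  # significant if p < 0.05
```

The printed p-values would then be compared against the 0.05 threshold stated above; on real per-patient data the ANOVA line would of course use the recorded stays rather than simulated ones.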
## 4. Results
There were 2391 LCs performed between 1 January 2009 and 15 August 2015; 2204 were elective nonemergency cases and 1646 cases were appropriate for study. See Tables 1 and 2 and Figure 1 for the inclusion criteria and excluded cases.

Table 2
Patient demographics.

| | |
|---|---|
| Total patients | 1646 |
| Male/female | 354/1292 |
| Age (years), mean ± SD, range | 53.4 ± 16.25, 16–87 |
| BMI (kg/m²), mean ± SD, range | 30.0 ± 5.53, 19–51 |

SD: Standard Deviation; BMI: Body Mass Index.

Figure 1
Study patients inclusion breakdown; CBD: Common Bile Duct; PTC: Percutaneous Transhepatic Cholangiogram; ERCP: Endoscopic Retrograde Cholangiopancreatography.

These patients were distributed into the WHO (World Health Organization) recognized BMI groups as per Table 3.

Table 3
Distribution of patients by BMI groups.

| BMI group | n (%) |
|---|---|
| Normal weight, BMI 18.5–24.9 | 280 (17.0%) |
| Overweight, BMI 25.0–29.9 | 620 (37.7%) |
| Class I obesity, BMI 30.0–34.9 | 438 (26.6%) |
| Class II obesity, BMI 35.0–39.9 | 213 (12.9%) |
| Class III obesity, BMI > 40.0 | 95 (5.8%) |
| Total | 1646 |

BMI: Body Mass Index.

Seven (0.44%) patients required conversion to open surgery. There was no significant difference between the BMI groups in the rate of conversion (P=0.835) or in postoperative LOS (P=0.280). The overall complication rate was 4.3%, ranging from wound infections through to bile leaks (0.18%), again with no statistically significant difference between BMI groups (Table 4).

Table 4
Patient outcomes categorized by BMI group.

| Outcome | Total (n=1646) | <24.9 (n=280) | 25–29.9 (n=620) | 30–34.9 (n=438) | 35–39.9 (n=213) | >40 (n=95) | Sig. |
|---|---|---|---|---|---|---|---|
| Conversion | 7 (0.44%) | 2 | 3 | 1 | 1 | 0 | 0.835* |
| Complication | 65 (3.95%) | 9 | 31 | 15 | 7 | 3 | 0.183* |
| Mean LOS ± SD (days) | 0.86 | 0.83 ± 2.20 | 0.88 ± 2.22 | 0.81 ± 2.12 | 0.98 ± 2.34 | 0.78 ± 1.75 | 0.280^ |

LOS: length of stay; SD: Standard Deviation. *Pearson Chi-Squared test. ^One-way ANOVA. Median LOS for all groups = 0.

In Table 5 the readmission events are broken down by cause; the delineation between the “nonspecific chest pain” and “pain with no cause found” classifications is an arbitrary one made by the authors. The latter classification was used where a patient was readmitted for pain considered serious enough to warrant investigation by imaging (USS, CT, or CTPA; Table 5) with no positive finding and where the patient did not require intervention; the former covers chest pain investigated and refuted as cardiac, or patients observed and subsequently discharged with no cause found.

Table 5
Types of complication/readmission reason.

| Complication/readmission | Normal weight, BMI 18.5–24.9 (n=280) | Overweight, BMI 25.0–29.9 (n=620) | Class I obesity, BMI 30.0–34.9 (n=438) | Class II obesity, BMI 35.0–39.9 (n=213) | Class III obesity, BMI > 40.0 (n=95) | Total, all BMI (n=1646) | P value |
|---|---|---|---|---|---|---|---|
| Abdominal collection | 1 | 4 | 2 | 1 | 0 | 8 | 0.926 |
| Bile leak requiring ERCP | 0 | 1 | 1 | 0 | 0 | 2 | 0.880 |
| Bile leak requiring hepaticojejunostomy | 0 | 1 | 0 | 0 | 0 | 1 | 0.796 |
| Constipation | 0 | 2 | 0 | 0 | 0 | 2 | 0.502 |
| Death | 0 | 1 | 0 | 0 | 0 | 1 | 0.796 |
| Post-op nausea | 3 | 1 | 1 | 0 | 0 | 5 | 0.136 |
| Missed pancreatic cancer | 0 | 0 | 1 | 0 | 0 | 1 | 0.608 |
| Nonspecific chest pain | 0 | 3 | 1 | 0 | 1 | 5 | 0.568 |
| Pain with no cause found | 4 | 8 | 1 | 2 | 1 | 16 | 0.418 |
| Port site bleeding | 0 | 0 | 1 | 0 | 0 | 1 | 0.607 |
| Retained stone | 1 | 3 | 2 | 1 | 0 | 7 | 0.975 |
| Small bowel obstruction | 0 | 1 | 1 | 1 | 0 | 3 | 0.797 |
| Urinary retention | 0 | 2 | 1 | 0 | 1 | 4 | 0.666 |
| Wound infection | 0 | 4 | 3 | 2 | 0 | 9 | 0.514 |
| Total | 9 | 31 | 15 | 7 | 3 | 65 | 0.183 |
| Complication total/BMI category number | 3.21% | 5.00% | 3.42% | 3.29% | 3.16% | 3.95% | |

BMI: Body Mass Index; ERCP: Endoscopic Retrograde Cholangiopancreatography. P value: all statistical tests use Pearson Chi-Squared unless stated.
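As a quick arithmetic check, the final row of Table 5 (complication total divided by BMI category size) can be reproduced directly from the counts above; a short Python snippet, continuing the earlier sketch's conventions and not taken from the paper:

```python
# Reproduce the "Complication total/BMI category number" row of Table 5.
complications = {"18.5-24.9": 9, "25.0-29.9": 31, "30.0-34.9": 15,
                 "35.0-39.9": 7, ">40.0": 3}
group_sizes = {"18.5-24.9": 280, "25.0-29.9": 620, "30.0-34.9": 438,
               "35.0-39.9": 213, ">40.0": 95}

for group, count in complications.items():
    print(f"BMI {group}: {count}/{group_sizes[group]} "
          f"= {100 * count / group_sizes[group]:.2f}%")  # 3.21%, 5.00%, ...

# Overall rate across all groups: 65/1646 = 3.95%.
print(f"all BMI: {100 * sum(complications.values()) / sum(group_sizes.values()):.2f}%")
```

The printed rates match the table's last row, confirming that the overweight group's apparently higher 5.00% rate is not a tabulation error; per the reported Chi-Squared test it is also not statistically significant (P=0.183).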
## 5. Discussion
The data showed the expected demographic distribution of patients: the majority of patients were female, with a mean age of 53.4 years. It also shows that the average BMI of patients appears to be increasing, with over 45% of all included LCs being performed on patients with class I obesity or above.

Within the category of small bowel obstruction, all three patients required return to theatre for port site hernia repair, with none requiring resection. All seven patients with retained stones were successfully managed with ERCP.

Despite nearly half of the patients being obese, there was no statistically significant difference between the groups in terms of conversion rate, complication rate, or LOS. However, a proportion of patients were excluded from analysis because no BMI value was available (n=104, 5.67%). This is concerning with regard to the robustness of the data, as it may introduce selection bias; for example, patients whose weight was not recordable on the preassessment scales could have led to patients with very high BMIs being excluded. In this excluded group no bile leaks, readmissions, or deaths were identified; it is therefore unlikely that the primary outcomes would be affected, but other outcomes, such as the conversion rate, could be influenced.

Of note, the data do not differentiate between degrees of cholecystitis, mainly because there is no clear grading system of cholecystitis that could be applied to the data retrospectively other than chronic versus acute on histology. Although cholecystitis can lead to variable difficulty of operation, the authors felt this was acceptable and should not introduce bias into the data. Biliary colic was excluded because its inclusion would lead to too much heterogeneity within the dataset.

Comorbidities such as diabetes and steroid use were also not included in this study. However, these factors, if managed appropriately, should not affect day case management; indeed, the guidelines from the British Association of Day Surgery state that patients with such comorbidities are best managed in the day case setting.

Despite these limitations, the mean LOS and the secondary outcomes appear not to be affected by BMI category, and no individual readmission event reaches statistical significance by BMI grouping. Overall, significant complication rates are low in our study, and laparoscopic cholecystectomy is a safe procedure, with BMI not being an independent risk factor for major complications.
## 6. Conclusion
The data corroborate demographic data from the Office for National Statistics showing that the patients we operate on are presenting with increased BMIs, with only 17% of the patients having a normal BMI. This dataset offers a large sample size; however, as mentioned in the Discussion, 5.67% of patients were excluded on the basis of missing BMI data. As this excluded group contained no bile leaks or deaths, these outcomes are unlikely to be affected. The available data show clearly that increased BMI was not associated with statistically worse outcomes after day case LC. Compared with normal weight patients, obese and even morbidly obese patients had no increased risk of conversion to open surgery or of complications; readmission rate and LOS were also not significantly influenced by BMI. This study therefore supports previous research and the British Association of Day Surgery guidelines: patients in an increased BMI class, if managed appropriately, have no worse outcomes than the normal BMI class when operated on in a District General setting with adequate training, staff, and equipment to handle such cases. It does not offer any evidence on the operative outcomes of obese patients in the emergency setting; this should be an area of further study. We therefore conclude that such patients can be managed safely and effectively in the elective day case District General Hospital setting, without specialist bariatric input, compared to other BMI groups, given appropriate staff, training, and equipment. It is, however, the authors' opinion that an open discussion should take place with all patients who are eligible for specialist bariatric input regarding the options available, as within our practice a number of patients opt for referral to a weight loss management service for a potential combined weight loss procedure and laparoscopic cholecystectomy; this needs to be weighed against patient symptoms and risk.
---
*Source: 1017584-2017-09-28.xml* | 1017584-2017-09-28_1017584-2017-09-28.md | 14,641 | A Single Centre Experience of Day Case Laparoscopic Cholecystectomy Outcomes by Body Mass Index Group | Kirk Bowling; Samantha Leong; Sarah El-Badawy; Erfan Massri; Jaideep Rait; Jay Atkinson; Gandrapu Srinivas; Stuart Andrews | Surgery Research and Practice
(2017) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2017/1017584 | 1017584-2017-09-28.xml | ---
## Abstract
Aim. The purpose of this study was to evaluate whether patients with a high BMI can undergo safe day case LC for cholecystitis compared to groups of patients with a lower BMI.Setting. NHS District General Hospital, UK.Methods. A retrospective review of 2391 patients who underwent an attempted day case LC between 1 January 2009 and 15 August 2015 was performed. Patients were divided into five groups depending on their BMI. Inclusion criteria were patients undergoing elective day case laparoscopic cholecystectomy with cholecystitis on histology. The endpoints were complication requiring readmission and postoperative length of stay (LOS).Results. There were 2391 LCs performed in the time period of which 1646 were eligible for inclusion. These LCs were classified as 273 (16.9%), 608 (37.8%), 428 (26.6%), 208 (12.9%), and 91 (5.66%) patients in the groups with BMI values of 18.5–24.9, 25–29.9, 30–34.9, 35–39.9, and >40, respectively. Average BMI was 30.0 (±5.53, 19–51) with an average postoperative LOS of 0.86, and there was no difference between the BMI groups. Overall complication rate was 4.3%; there was no significance between BMI groups.Conclusions. Increased BMI was not associated with worse outcomes after day case LC.
---
## Body
## 1. Background
Laparoscopic cholecystectomy (LC) has become the standard of care for the treatment of symptomatic gallbladder disease [1]. Compared to the traditional open cholecystectomy (OC), LC is associated with lower morbidity and mortality, shorter length of hospital stay, and quicker return to normal activities [2].Obesity in the United Kingdom is a growing problem and is one of the leading causes of preventable death in the UK. Adult obesity rates have almost quadrupled in the last 25 years, with 23.1% of British people obese as of 2012 and one-third of all UK males predicted to be obese by 2030 [3, 4].Many studies have looked for factors that are associated with higher risks of conversion and complications. Several factors that have been identified are increased age, time of day, male gender, increased acuity of illness, and many others [5–8].There is evidence that we aim to further add to that day case laparoscopic cholecystectomy can be performed safely in patients with a high BMI (Body Mass Index) without a higher readmission or complication rate; most of the current evidence is from the United States of America [9].The British Association of Day Surgery already recommends that providing adequate training, equipment, and staff is present; patients with an increased BMI should be operated on in the day case setting. This is due to factors such as early mobilization and short anesthetics being of great benefit to these groups of patients in their recovery and that obesity per se is not a contraindication to day case surgery [10].With increased prevalence of obesity and increasing experience with managing such patients, patients with a higher BMI are being operated on more routinely in the District General Hospital setting for day case laparoscopic cholecystectomy. However to the authors best knowledge that many units have different policies with regard to obesity in the day case unit.Surgery in the obese has traditionally been labeled as high risk. We hypothesized that obesity may not be an independent significant risk factor leading to increased conversion, complication rates, and readmission.With an increasing proportion of patients having a Body Mass Index (BMI) of more than 30 our study draws light on the impact of an increased BMI on the day case LC within the District General Hospital setting.
## 2. Aim
The primary outcome of this study is readmission rate following day case LC. Secondary outcomes include LOS (length of stay), conversion, complication rate, and mortality. These outcomes will be measured with the aim of concluding whether day case LC can be performed safely in patients with higher BMI’s in a District General Hospital setting.
## 3. Methods
A retrospective review of a prospectively maintained database identified 2391 patients who underwent an attempted LC between 1 January 2009 and 15 August 2015. This database included standard demographical data such as height, weight, BMI, and patient identifiers. Each patient case was cross-referenced with the hospital episode statistics database and the theatre and pathology databases. This allowed compilation of data for each patient. Patients were excluded if classified as an emergency or if the indication was not gallstone disease. Inclusion and exclusion criteria can be seen in Table1.Table 1
Inclusion and exclusion criteria.
Inclusion criteria
Exclusion criteria
(1) Day case laparoscopic cholecystectomy
(1) Emergency cholecystectomy
(2) Any BMI, any sex
(2) Planned open cholecystectomy
(3) Cholecystitis
(3) CBD exploration
(4) Previous ERCP/PTC
(5) Empyema, hepatobiliary cancer
(6) No recorded BMI
BMI: Body Mass Index; CBD: Common Bile Duct; PTC: Percutaneous Transcutaneous Cholangiogram; ERCP: Endoscopic Retrograde Cholangiopancreatography.Cholecystitis was the main inclusion criteria as to decrease variability in the difficulty of the operation and decrease heterogeneity within the data.Patients were divided into five groups depending on their BMI: 18.5–24.9, 25–29.9, 30–34.9, 35–39.9, and >40. The primary endpoints were conversion rates, complication rates, and postoperative length of stay. Complications were defined as any event requiring a procedure or hospital admission. Surgical site infections not requiring hospital admission were excluded. A hospital admission was any readmission to hospital 30 days after the procedure but did not include prolonged hospital stay from day case.A small number of patients were identified as being readmissions but found to have illness separate to the original surgery. For example, a patient was readmitted for removal of a suspected melanoma electively.Pearson Chi-Square and ANOVA tests were performed to check for statistical significance.SPSS software version 19.0 (SPSS, Chicago, IL, USA) was used for statistical computation, andP<0.05 was considered significant.
## 4. Results
There were 2391 LCs performed between 1 January 2009 and 15 August 2015; 2204 were elective nonemergency cases; 1646 cases were appropriate for study. See the following tables for inclusion criteria and excluded cases (Tables1 and 2 and Figure 1).Table 2
Patient demographics.
Total patients
1646
Male/female
354/1292
Age (year) mean ± SD, range
53.4
±
16.25, 16–87
BMI (kg/m2) mean ± SD, range
30.0
±
5.53, 19–51
SD: Standard Deviation; BMI: Body Mass Index.Figure 1
Study patients inclusion breakdown; CBD: Common Bile Duct; PTC: Percutaneous Transcutaneous Cholangiogram; ERCP: Endoscopic Retrograde Cholangiopancreatography.These were distributed as per Table3 into WHO (World Health Organization) recognized BMI groups.Table 3
Distribution of patients by BMI groups.
Normal weight BMI 18.5–24.9
280 (17.0%)
Overweight BMI 25.0–29.9
620 (37.7%)
Class I obesity BMI 30.0–34.9
438 (26.6%)
Class II obesity BMI 35.0–39.9
213 (12.9%)
Class III obesity BMI > 40.0
95 (5.8%)
Total
1646
BMI: Body Mass Index.Seven (0.44%) patients required conversion to open surgery. There was no significance for the rate of conversion amongst the BMI groups (P=0.835) and postoperative LOS (P=0.86). Overall complication rate was 4.3% including wound infections through to bile leaks (0.18%) again with no statistical significance between BMI groups (Table 4).Table 4
**Table 4.** Patient outcomes categorized by BMI group.

| Outcome | Total (n=1646) | <24.9 (n=280) | 25–29.9 (n=620) | 30–34.9 (n=438) | 35–39.9 (n=213) | >40 (n=95) | Sig. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Conversion | 7 (0.43%) | 2 | 3 | 1 | 1 | 0 | 0.835* |
| Complication | 65 (3.95%) | 9 | 31 | 15 | 7 | 3 | 0.183* |
| Mean LOS ± SD (days) | 0.86 | 0.83 ± 2.20 | 0.88 ± 2.22 | 0.81 ± 2.12 | 0.98 ± 2.34 | 0.78 ± 1.75 | 0.280^ |

LOS: length of stay; SD: Standard Deviation. *Pearson Chi-Squared test. ^One-way ANOVA. Median LOS for all groups = 0.
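The LOS comparison was a one-way ANOVA run in SPSS on patient-level data that is not reproduced here. As a hedged sketch, the F statistic can be approximated from the summary statistics in Table 4 alone (group means, SDs, and sizes); exact agreement with the reported P = 0.280 should not be expected from summaries:

```python
from scipy.stats import f as f_dist

# Mean LOS, SD, and n per BMI band (Table 4)
means = [0.83, 0.88, 0.81, 0.98, 0.78]
sds   = [2.20, 2.22, 2.12, 2.34, 1.75]
ns    = [280, 620, 438, 213, 95]

k, N = len(means), sum(ns)
grand = sum(n * m for n, m in zip(ns, means)) / N  # ~0.86, matches the Total column

# Between- and within-group sums of squares from summary statistics
ss_between = sum(n * (m - grand) ** 2 for n, m in zip(ns, means))
ss_within  = sum((n - 1) * s ** 2 for n, s in zip(ns, sds))

F = (ss_between / (k - 1)) / (ss_within / (N - k))
p = f_dist.sf(F, k - 1, N - k)
print(f"F = {F:.3f}, P = {p:.3f}")
```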
In Table 5 the readmission events are broken down by cause. The delineation between the "nonspecific chest pain" and "pain with no cause found" classifications is an arbitrary one made by the authors. The latter classification was used where a patient was readmitted with pain judged serious enough to warrant investigation by imaging (USS, CT, or CTPA; Table 5), with no positive finding and no intervention required; the former covers chest pain that was investigated and refuted as cardiac, or patients who were observed and subsequently discharged with no cause found.
**Table 5.** Types of complication/readmission reason.

| Cause | Normal weight BMI 18.5–24.9 (n=280) | Overweight BMI 25.0–29.9 (n=620) | Class I obesity BMI 30.0–34.9 (n=438) | Class II obesity BMI 35.0–39.9 (n=213) | Class III obesity BMI > 40.0 (n=95) | All BMI (n=1646) | P value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Abdominal collection | 1 | 4 | 2 | 1 | 0 | 8 | 0.926 |
| Bile leak requiring ERCP | 0 | 1 | 1 | 0 | 0 | 2 | 0.880 |
| Bile leak requiring hepaticojejunostomy | 0 | 1 | 0 | 0 | 0 | 1 | 0.796 |
| Constipation | 0 | 2 | 0 | 0 | 0 | 2 | 0.502 |
| Death | 0 | 1 | 0 | 0 | 0 | 1 | 0.796 |
| Post-op nausea | 3 | 1 | 1 | 0 | 0 | 5 | 0.136 |
| Missed pancreatic cancer | 0 | 0 | 1 | 0 | 0 | 1 | 0.608 |
| Nonspecific chest pain | 0 | 3 | 1 | 0 | 1 | 5 | 0.568 |
| Pain with no cause found | 4 | 8 | 1 | 2 | 1 | 16 | 0.418 |
| Port site bleeding | 0 | 0 | 1 | 0 | 0 | 1 | 0.607 |
| Retained stone | 1 | 3 | 2 | 1 | 0 | 7 | 0.975 |
| Small bowel obstruction | 0 | 1 | 1 | 1 | 0 | 3 | 0.797 |
| Urinary retention | 0 | 2 | 1 | 0 | 1 | 4 | 0.666 |
| Total | 9 | 31 | 15 | 7 | 3 | 65 | 0.183 |
| Complication total/BMI category number | 3.21% | 5.00% | 3.42% | 3.29% | 3.16% | 3.95% | |

BMI: Body Mass Index; ERCP: Endoscopic Retrograde Cholangiopancreatography. P values: all statistical tests use Pearson Chi-Squared unless stated.
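The bottom row of Table 5 is each column's complication count divided by the corresponding group size from Table 3; a quick arithmetic check of the reported percentages:

```python
complications = [9, 31, 15, 7, 3]
group_n       = [280, 620, 438, 213, 95]

# Per-band complication rate, as in the bottom row of Table 5
for c, n in zip(complications, group_n):
    print(f"{c}/{n} = {100 * c / n:.2f}%")   # 3.21%, 5.00%, 3.42%, 3.29%, 3.16%

print(f"Overall: {100 * sum(complications) / sum(group_n):.2f}%")  # 3.95%
```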
## 5. Discussion
The data showed an expected demographic distribution of patients: the majority were female, with a mean age of 53.4 years. However, the data also show that the average BMI of patients appears to be increasing, with over 45% of all included LCs being performed on patients with class I obesity or above. Within the category of small bowel obstruction, all three patients required return to theatre for port site hernia repair, with none requiring resection. All seven patients with retained stones were successfully managed with ERCP.

Despite nearly half of patients being obese, there was no statistically significant difference between the groups in terms of conversion rate, complication rate, or LOS. However, a proportion of patients were excluded from analysis because no BMI value was available (n=104, 5.67%). This is concerning with regard to the robustness of the data, as it may introduce selection bias: patients whose weight was not recordable on the preassessment scales, for instance, would lead to patients with very high BMIs being excluded. In this excluded group no bile leaks, readmissions, or deaths were identified; it is therefore unlikely that the primary outcomes would be affected, but other outcomes, such as conversion rate, could be influenced.

Of note, the data do not differentiate between degrees of cholecystitis, mainly because there is no clear grading system of cholecystitis that could be applied to the data retrospectively other than chronic versus acute on histology. Although cholecystitis can lead to variable operative difficulty, the authors felt this was acceptable and should not introduce bias to the data; it was for this reason that biliary colic was excluded, as its inclusion would lead to too much heterogeneity within the dataset. Comorbidities such as diabetes and steroid use were also not included in this study; however, these factors, if managed appropriately, should not affect day case management, and indeed the guidelines from the British Association of Day Surgery state that patients with such comorbidities are best managed in the day case setting.

Despite these limitations, the mean LOS and secondary outcomes appear not to be affected by BMI category, and no individual readmission event reaches statistical significance by BMI grouping. Overall, significant complication rates are shown to be low in our study, and laparoscopic cholecystectomy is a safe procedure, with BMI not being an independent risk factor for major complications.
## 6. Conclusion
The data corroborate demographic data from the Office of National Statistics showing that the patients we operate on are presenting with increased BMIs, with only 17% of the patients having a normal BMI. This dataset offers a large sample size; however, as mentioned in the Discussion, 5.67% of patients were excluded on the basis of missing BMI data. Within this excluded group there were no bile leaks or deaths, so these outcomes are unlikely to be affected. The available data show clearly that increased BMI was not associated with statistically worse outcomes after day case LC. Compared with normal weight patients, obese and even morbidly obese patients have no increased risk of conversion to open surgery or of complications, and readmission rate and LOS are also not significantly influenced by BMI.

This study therefore supports previous research and the British Association of Day Surgery guidelines: patients in an increased BMI class, if managed appropriately, have no worse outcomes than the normal BMI class when operated on in a District General setting with adequate training, staff, and equipment to handle such cases. It does not offer any evidence on the operative outcomes of obese patients in the emergency setting; this should be an area of further study. We therefore conclude that such patients can be managed safely and effectively, without specialist bariatric input, in the elective day case setting of a District General Hospital, given appropriate staff, training, and equipment. However, it is the authors' opinion that an open discussion should take place with all patients who are eligible for specialist bariatric input with regard to the options available; within our practice a number of patients select referral to a weight loss management service for a potential combined weight loss procedure and laparoscopic cholecystectomy, although this needs to be balanced against patient symptoms and risk.
---
*Source: 1017584-2017-09-28.xml*
# In Vivo Potential Anti-Inflammatory Activity of Melissa officinalis L. Essential Oil
**Authors:** Amina Bounihi; Ghizlane Hajjaj; Rachad Alnamer; Yahia Cherrah; Amina Zellou
**Journal:** Advances in Pharmacological Sciences
(2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101759
---
## Abstract
Melissa officinalis L. (Lamiaceae) has been reported in traditional Moroccan medicine to exhibit calming, antispasmodic, and heart-strengthening effects. Therefore, this study aimed to determine the anti-inflammatory activities of M. officinalis L. leaves. The essential oil of the leaves of this plant was investigated for anti-inflammatory properties using carrageenan- and experimental trauma-induced hind paw edema in rats. The essential oil, extracted from the leaves by hydrodistillation, was characterized by gas chromatography-mass spectrometry (GC-MS). M. officinalis contained Nerol (30.44%), Citral (27.03%), Isopulegol (22.02%), Caryophyllene (2.29%), Caryophyllene oxide (1.24%), and Citronella (1.06%). Oral administration of the essential oil at doses of 200 and 400 mg/kg p.o. produced significant reduction and inhibition of carrageenan-induced edema at 6 h, by 61.76% and 70.58%, respectively (P<0.001), when compared with the control and the standard drug (Indomethacin). In the experimental trauma model, M. officinalis L. essential oil showed pronounced reduction and inhibition of edema at 6 h at 200 and 400 mg/kg, by 91.66% and 94.44%, respectively (P<0.001). We can conclude that the essential oil of M. officinalis L. possesses potential anti-inflammatory activities, supporting the traditional application of this plant in treating various diseases associated with inflammation and pain.
---
## Body
## 1. Introduction
The varied climate and heterogeneous ecological conditions in Morocco have favoured the proliferation of more than 42,000 species of plants, divided into 150 families and 940 genera [1–4]. Over the past decade herbal medicine has become a topic of global importance, making an impact on both world health and international trade. Medicinal plants continue to play central roles in the healthcare systems of a large proportion of the world's population [3]. This is particularly true in developing countries, where herbal medicine has a long and uninterrupted history of use. Recognition and development of the medicinal and economic benefits of these plants are increasing in both developing and industrialized nations. Continuous usage of herbal medicine by a large proportion of the population in developing countries is largely due to the high cost of western pharmaceuticals and health care, the adverse effects that follow their use (in some cases), and the cultural and spiritual views of the people [5–7]. In western developed countries, however, after a downturn in the pace of herbal use in recent decades, the pace is again quickening as scientists realize that the effective life span of any antibiotic is limited [8–10]. Worldwide spending on finding new anti-infective agents (including vaccines) was expected to increase 60% from the spending levels in 1993. New sources, especially plant sources, are also being investigated. Secondly, the public is becoming increasingly aware of problems with the overprescription and misuse of traditional antibiotics. In addition, many people are interested in having more autonomy over their medical care. All of this makes knowledge of the chemical, biological, and therapeutic activities of medicinal plants used in folklore medicine necessary [11–14].

Generally, the inflammatory process involves a series of events that can be elicited by numerous stimuli such as infectious agents, ischemia, antigen-antibody interaction, and thermal or physical injury. Inflammation is usually associated with pain as a secondary process resulting from the release of algesic mediators. Nonsteroidal anti-inflammatory drugs (NSAIDs), steroidal drugs, and immunosuppressant drugs have long been used by people around the world for the relief of inflammatory diseases [9]. However, these drugs are often associated with severe adverse side effects, such as gastrointestinal bleeding and peptic ulcers [9]. Recently, many natural medicines derived from medicinal plants have been considered effective and safer for the treatment of various diseases including inflammation and pain [15].

There are various components of an inflammatory reaction that can contribute to the associated symptoms and tissue injury. Edema formation, leukocyte infiltration, and granuloma formation represent such components of inflammation [16]. Edema formation in the paw is the result of a synergism between various inflammatory mediators that increase vascular permeability and/or mediators that increase blood flow [17, 18].

M. officinalis L. (Lamiaceae) is a herbal medicine native to the Eastern Mediterranean region and Western Asia. M. officinalis has been traditionally used for different medical purposes, as a tonic, antispasmodic, carminative, diaphoretic, surgical dressing for wounds, and sedative/hypnotic, and it is used for strengthening the memory and for relief of stress-induced headache [19–21].
In our previous study, we demonstrated the efficacy of the essential oil of this plant on central nervous system activity [22]. To the best of our knowledge, this is the first study to provide data evaluating the essential oil of the leaves of M. officinalis L. against inflammation. Thus, the aim of this study is to evaluate the anti-inflammatory effect of the essential oil of the leaves of M. officinalis L. and thereby to determine the scientific basis for its use in traditional medicine in the treatment of inflammation.
## 2. Materials and Methods
### 2.1. Plant Material
Fresh leaves of Melissa officinalis L. (Lamiaceae) were collected, based on ethnopharmacological information, from villages around the region of El Jadida, middle Morocco, in January 2013, with the agreement of the authorities with respect to the United Nations Convention on Biodiversity and with the assistance of a traditional medical practitioner. The plant was identified by a botanist of the Department of Medicinal and Aromatic Plants, National Institute for Agricultural Research, Morocco. A voucher specimen (no. RAB76712) was deposited in the Herbarium of the Botany Department of the Scientific Institute of Rabat.
### 2.2. Preparation of the Essential Oil
Fresh leaves of Melissa officinalis L. (100 g, cut into small pieces) were hydrodistilled in a Clevenger apparatus for 4 hours. The oils were dried over anhydrous K2CO3 and the extract was stored in a refrigerator at 4°C [23], protected against light and heat, until used for GC-MS analysis. The extraction yield (weight of EO/weight of dry plant) was 0.5% (v/w) [1–3].
### 2.3. Phytochemical Analysis ofMelissa officinalis L. Essential Oil by Combined Gas Chromatography-Mass Spectrometry (GC-MS)
The essential oil was submitted to quantitative analysis on a Hewlett-Packard 575 instrument. GC conditions: carrier gas N2 (0.5 bar) at a flow rate of 1.0 mL/min; injected sample size 0.2 μL; capillary column (30 m, 5% siloxane, HP). The temperatures of the injector and detector were set at 250°C. The oven temperature was programmed from 50°C to 250°C (5 min). MS spectra were taken at 70 eV. The components of the essential oil were identified by comparison of their mass spectra with those in the Wiley-NIST 7th edition library of mass spectral data. The percentage composition of the oil sample was calculated from GC-MS peak areas [1, 2].
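Percentage composition from GC-MS peak areas is conventionally computed by area normalization: each peak's integrated area is divided by the total area of all peaks. A minimal sketch with hypothetical area values (the actual chromatogram areas are not reported; the numbers below are chosen only so that the Nerol figure matches the 30.44% reported in the Results):

```python
# Hypothetical integrated peak areas (arbitrary units); only the ratios matter
peak_areas = {
    "Nerol":       3044,
    "Citral":      2703,
    "Isopulegol":  2202,
    "Other peaks": 2051,   # remaining identified compounds, combined
}

total = sum(peak_areas.values())
for name, area in peak_areas.items():
    # Area normalization: percent = 100 * peak area / total area
    print(f"{name}: {100 * area / total:.2f}%")
```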
### 2.4. Animals
Male Wistar rats weighing 180–220 g were used in this study. The animals were obtained from the animal centre of Mohammed V-Souissi University, Faculty of Medicine and Pharmacy, Rabat, Morocco. All animals were kept in a room maintained under environmentally controlled conditions of 23 ± 1°C and a 12 h light–12 h dark cycle. All animals had free access to water and a standard diet. They were acclimatized for at least one week before the experiments started. Animals submitted to oral administration of the extracts or drugs were fasted for 18 h before the experiment (water was available). All experiments were conducted in accordance with the Official Journal of the European Community (1991). The experimental protocol was approved by the Institutional Research Committee regarding the care and use of animals for experimental procedures in 2010; CEE509 [1–3, 24, 25].
### 2.5.In Vivo Anti-Inflammatory Activity
The anti-inflammatory activity of M. officinalis L. essential oil was evaluated using two different methods of inducing paw edema in rats: a chemical stimulus (Winter test) [26] and a mechanical stimulus (Riesterer and Jacques test) [27]. In both methods, all animals were fasted for 18 h before testing and received 5 mL of distilled water by gavage to minimize individual variations in the swelling response of the paws. The right hind paw (RP) was not treated and was taken as the control.
### 2.6. Carrageenan-Induced Rat Paw Edema
The carrageenan-induced paw edema model [9, 26–28] was used to evaluate the anti-inflammatory effect of M. officinalis essential oil. The initial paw volume was recorded using an Ugo Basile model LE750 plethysmometer. Groups of rats were orally administered the essential oil of M. officinalis L. (200 and 400 mg/kg); Indomethacin (10 mg/kg) was used as the reference drug, while distilled water (5 mL/kg) was used as the negative control. After 60 min, carrageenan (0.05 mL of a 1% w/v solution, prepared in sterile saline) was injected subcutaneously into the subplantar region of the left hind paw of each rat. The right hind paw was not treated and served as the control. The paw volumes of each rat were measured 1 h 30 min, 3 h, and 6 h after the injection of carrageenan. Mean differences of the treated groups were compared with the mean differences of the control group. The percentage of inhibition of inflammation was calculated according to the following formula:
$$\%\,\text{inhibition} = \frac{\left[V_{\text{left}} - V_{\text{right}}\right]_{\text{control}} - \left[V_{\text{left}} - V_{\text{right}}\right]_{\text{treated}}}{\left[V_{\text{left}} - V_{\text{right}}\right]_{\text{control}}} \times 100, \tag{1}$$

where $V_{\text{left}}$ is the mean volume of edema on the left hind paw and $V_{\text{right}}$ is the mean volume of edema on the right hind paw.
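As a worked check of formula (1) against the values reported in the Results: at 6 h the control edema is 0.34 mL and the edema at 400 mg/kg is 0.10 mL (Table 1), which reproduces, up to rounding, the 70.58% inhibition reported in Table 2. A small sketch:

```python
def percent_inhibition(edema_control: float, edema_treated: float) -> float:
    """Formula (1): inputs are mean (left - right) paw volume differences in mL."""
    return 100 * (edema_control - edema_treated) / edema_control

# Carrageenan model, 6 h, EOMO 400 mg/kg (Tables 1 and 2)
print(f"{percent_inhibition(0.34, 0.10):.2f}%")
# 70.59%; the paper reports 70.58%, i.e., the same ratio truncated
# rather than rounded at the second decimal place.
```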
### 2.7. Experimental Trauma-Induced Rat Paw Edema
This assay was performed as described by the Riesterer and Jacques test [27]. The test groups of rats were given 200 and 400 mg/kg of M. officinalis essential oil orally, the control group received 5 mL/kg of distilled water, and the standard group received the reference drug (Indomethacin, 10 mg/kg). One hour after oral administration of the different substances, edema was induced by dropping a weight of 50 g onto the dorsum of the left hind paw of each animal. The right hind paw was not treated and served as the control. The volume difference between the two paws was measured and taken as the edema value, using the digital plethysmometer LE750, at 1 h 30 min, 3 h, and 6 h after induction of inflammation [29]. Mean differences of the treated groups were compared with the mean differences of the control group. The percentage of inhibition of inflammation was calculated according to the following formula:
$$\%\,\text{inhibition} = \frac{\left[V_{\text{left}} - V_{\text{right}}\right]_{\text{control}} - \left[V_{\text{left}} - V_{\text{right}}\right]_{\text{treated}}}{\left[V_{\text{left}} - V_{\text{right}}\right]_{\text{control}}} \times 100. \tag{2}$$
### 2.8. Statistical Analysis
The results are expressed as mean ± SEM and were analyzed by one-way analysis of variance (ANOVA) followed by Student's t-test. A value of P < 0.001 was considered significant.
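The analysis described here (one-way ANOVA followed by Student's t-test, significance at P < 0.001) can be sketched with scipy on per-animal paw-volume differences. The arrays below are hypothetical placeholders for the n = 6 measurements per group, not the study's raw data:

```python
from scipy.stats import f_oneway, ttest_ind

# Hypothetical per-rat edema values (mL); n = 6 per group, as in the study
control  = [0.33, 0.36, 0.31, 0.35, 0.34, 0.35]
eomo_400 = [0.09, 0.11, 0.10, 0.12, 0.08, 0.10]
indo_10  = [0.15, 0.17, 0.16, 0.18, 0.14, 0.16]

# One-way ANOVA across all groups, then a pairwise t-test against control
F, p_anova = f_oneway(control, eomo_400, indo_10)
t, p_t = ttest_ind(eomo_400, control)

print(f"ANOVA: F = {F:.2f}, P = {p_anova:.2e}")
print(f"EOMO 400 vs control: t = {t:.2f}, P = {p_t:.2e}")
# Significance threshold used in the paper: P < 0.001
```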
## 3. Results
### 3.1. Chemical Composition of the Essential Oil
The results obtained by GC-MS analysis of the essential oil of Melissa officinalis L. are presented in Figure 1. Thirteen compounds were identified in this essential oil. M. officinalis L. contained six major compounds, namely Nerol (30.44%), Isopulegol (22.02%), Citral (27.03%), Caryophyllene (2.29%), Caryophyllene oxide (1.24%), and Citronella (1.06%), as the main constituents of the essential oil. Nerol and Citral have previously been reported as major chemical components of M. officinalis L. [18–22], but Isopulegol has never before been reported as a main component of M. officinalis L. (Figure 1).

Figure 1. Gas chromatography-mass spectrometry (GC-MS) chromatogram of Melissa officinalis L. essential oil.
### 3.2. Carrageenan-Induced Rat Paw Edema
The effects of M. officinalis essential oil on carrageenan-induced edema are shown in Tables 1 and 2. At doses of 200 and 400 mg/kg via the oral route, M. officinalis essential oil exhibited significant (P<0.001) anti-inflammatory activity compared with the control and standard groups (Table 1). At 1 h 30 min, the essential oil showed inhibition of edema of 70.58% and 76.47% at 200 and 400 mg/kg, respectively, similar to the standard drug Indomethacin (10 mg/kg) at 76.47%. However, at the sixth hour the M. officinalis essential oil showed greater inhibition, 61.76% and 70.58% at 200 and 400 mg/kg p.o., respectively, compared with the reference drug Indomethacin (10 mg/kg) at 52.94% (Table 2).
**Table 1.** Effect of essential oil of Melissa officinalis L. on carrageenan-induced rat paw edema. Mean volume of edema (left paw - right paw) in mL.

| Treatment group | Dose (mg/kg, p.o.) | 1 h 30 min | 3 h | 6 h |
| --- | --- | --- | --- | --- |
| Control | - | 0.17 ± 0.013 | 0.26 ± 0.01 | 0.34 ± 0.023 |
| Indomethacin | 10 | 0.04 ± 0.004* | 0.07 ± 0.005* | 0.16 ± 0.008* |
| EOMO | 200 | 0.05 ± 0.009* | 0.1 ± 0.011* | 0.13 ± 0.01* |
| EOMO | 400 | 0.04 ± 0.01* | 0.09 ± 0.008* | 0.1 ± 0.013* |

Values are expressed as mean ± S.E.M. (n=6); p.o.: oral route; n: number of animals per group; EOMO: essential oil of Melissa officinalis L.; *P<0.001, statistically significant relative to the control and reference drug (Indomethacin).
**Table 2.** Percentage of inhibition of inflammation by essential oil of Melissa officinalis L. in carrageenan-induced rat paw edema (%).

| Treatment group | Dose (mg/kg, p.o.) | 1 h 30 min | 3 h | 6 h |
| --- | --- | --- | --- | --- |
| Indomethacin | 10 | 76.47 | 73.07 | 52.94 |
| EOMO | 200 | 70.58 | 61.53 | 61.76 |
| EOMO | 400 | 76.74 | 65.38 | 70.58 |

n = 6; results are compared with the standard drug (Indomethacin, 10 mg/kg, p.o.); all treatments were administered by the oral route.
### 3.3. Experimental Trauma-Induced Rat Paw Edema
The effects of the two doses of M. officinalis essential oil on experimental trauma-induced inflammation are shown in Tables 3 and 4, compared with the control and the standard drug Indomethacin (10 mg/kg, p.o.). The M. officinalis essential oil at all dose levels significantly decreased inflammation induced by experimental trauma (Table 3).
**Table 3.** Effect of essential oil of Melissa officinalis L. on experimental trauma-induced rat paw edema. Mean volume of edema (left paw - right paw) in mL.

| Treatment group | Dose (mg/kg, p.o.) | 1 h 30 min | 3 h | 6 h |
| --- | --- | --- | --- | --- |
| Control | - | 0.153 ± 0.005 | 0.22 ± 0.021 | 0.36 ± 0.015 |
| Indomethacin | 10 | 0.04 ± 0.009* | 0.03 ± 0.01* | 0.03 ± 0.01* |
| EOMO | 200 | 0.085 ± 0.005* | 0.05 ± 0.007* | 0.03 ± 0.005* |
| EOMO | 400 | 0.068 ± 0.004* | 0.04 ± 0.004* | 0.02 ± 0.005* |

Values are expressed as mean ± S.E.M. (n=6); EOMO: essential oil of Melissa officinalis L.; *P<0.001, statistically significant compared to the control and reference drug (Indomethacin).
**Table 4.** Percentage of inhibition of inflammation by essential oil of Melissa officinalis L. in experimental trauma-induced rat paw edema (%).

| Treatment group | Dose (mg/kg, p.o.) | 1 h 30 min | 3 h | 6 h |
| --- | --- | --- | --- | --- |
| Indomethacin | 10 | 65.81 | 86.36 | 91.66 |
| EOMO | 200 | 44.44 | 77.27 | 91.66 |
| EOMO | 400 | 55.55 | 81.81 | 94.44 |

n = 6; results are compared with the standard drug (Indomethacin, 10 mg/kg, p.o.); all treatments were administered by the oral route.

At 200 and 400 mg/kg p.o., M. officinalis essential oil exhibited maximum anti-inflammatory activity of 91.66% and 94.44%, respectively, at the sixth hour (Table 4). This inhibition of edema was similar to that obtained with Indomethacin (10 mg/kg, p.o.), 91.66%, over the same period.
## 4. Discussion
Aromatic and medicinal plants have been used for thousands of years in every part of the world by numerous civilizations. Driven by intuition and a sense of observation, people were able to find answers to their health problems in the plant environment [4–6]. Recently, the search for novel pharmacotherapies from medicinal plants for inflammatory diseases has progressed significantly owing to their fewer side effects and better tolerability. Aromatherapy is currently used worldwide in the management of chronic pain [1–5].

Oil compositions of M. officinalis L. have already been reported [18–22]. It has been shown that Nerol, Isopulegol, Citral, Caryophyllene, Caryophyllene oxide, and Citronella account for 80% of M. officinalis L. essential oils, whereas in our study these compounds represent 84.08%. Such differences in the chemical composition of an essential oil may be due to both developmental and environmental factors that influence plant metabolism. We analyzed the effects of different doses of essential oil from leaves of M. officinalis L. for their anti-inflammatory activity.

Following oral administration of M. officinalis L. extract at doses of 300 and 2000 mg/kg p.o., no toxicity and no significant changes in body weight between the control and treated groups were demonstrated. This result indicates that the LD50 is higher than 2000 mg/kg, as previously reported by Bounihi et al. [22].

In the present study, the anti-inflammatory effect of the essential oil of M. officinalis L. was investigated after subplantar injection of carrageenan and after experimental trauma in the rat paw. The carrageenan-induced paw edema method is the most widely used to evaluate the anti-inflammatory effect of natural products. Edema formation due to carrageenan in the rat paw is a biphasic event [30]. The initial phase, which occurs between 0 and 2.5 h after the injection of the phlogistic agent, has been attributed to the action of mediators such as histamine, serotonin, and bradykinin on vascular permeability [30–32]. It has been reported that histamine and serotonin are mainly released during the first 1.5 h, while bradykinin is released within 2.5 h after carrageenan injection [32, 33]. The late phase is associated with the release of prostaglandins and may occur from 2.5 h to 6 h after carrageenan injection [33, 34].

In the inflammatory response there is an increase in the permeability of endothelial lining cells and an influx of blood leukocytes into the interstitium, an oxidative burst, and release of cytokines (interleukins and tumor necrosis factor-α (TNF-α)). At the same time, there is also an induction of the activity of several enzymes (oxygenases, nitric oxide synthases, and peroxidases) as well as of arachidonic acid metabolism. The inflammatory process also involves the expression of cellular adhesion molecules, such as intercellular adhesion molecule (ICAM) and vascular cell adhesion molecule (VCAM) [34].

Carrageenan-induced hind paw edema in the rat is known to be sensitive to cyclooxygenase inhibitors, but not to lipoxygenase inhibitors, and has been used to evaluate the effect of nonsteroidal anti-inflammatory drugs, which primarily inhibit the cyclooxygenase involved in prostaglandin synthesis. It has been demonstrated that suppression of carrageenan-induced inflammation after the third hour correlates reasonably well with therapeutic doses of most clinically effective anti-inflammatory agents [33].

M. officinalis essential oil at doses of 200 and 400 mg/kg p.o. significantly (P<0.001) reduced and inhibited the edema in both the early and late phases of carrageenan-induced inflammation. In the experimental trauma-induced edema model, the extract also significantly (P<0.001) reduced and inhibited the edema in the different phases of the inflammatory response. Based on these results, M. officinalis essential oil was able to effectively inhibit the increase in paw volume throughout the phases of inflammation. This indicates that the M. officinalis essential oil has significant anti-inflammatory activity, perhaps by inhibiting the release of the inflammatory mediators serotonin and histamine and by suppressing prostaglandins and cytokines.

Carrageenan- and experimental trauma-induced paw edema in rats are suitable experimental animal models to evaluate the antiedematous effect of diverse bioactive compounds such as plant extracts and essential oils [35–38]. Although these methods allow screening of samples for anti-inflammatory activity, they give very little information about the mechanism. The exact mechanism of the anti-inflammatory activity of the essential oil used in the present study is therefore unclear. However, other investigators have reported that M. officinalis essential oil contains Nerol, Citral, Caryophyllene, and Citronella as main components, and the same phytochemicals were also found in this extract by us [22]. Citral constitutes the main component of Cymbopogon citratus Stapf essential oil, which has been shown to be capable of suppressing IL-1β and IL-6 in LPS-stimulated peritoneal macrophages of normal mice [37, 38]. While some essential oils are able to inhibit the production of proinflammatory cytokines such as TNF-α, some of their main components (Citral, geraniol, citronellol, and carvone) can also suppress TNF-α-induced neutrophil adherence responses [38]. Another work [39] revealed that Citral inhibited TNF-α in RAW 264.7 cells stimulated by lipopolysaccharide. Consistent with these authors, the anti-inflammatory activity of the M. officinalis essential oil we used can be attributed at least in part to the presence of Citral as a main component.

Further chemical and pharmacological analysis of the extract will be conducted to isolate and characterize the active principles responsible for the anti-inflammatory activity. We can conclude that the essential oil of M. officinalis L. possesses potential anti-inflammatory activities, supporting the traditional application of this plant in treating various diseases associated with inflammation and pain.
---
*Source: 101759-2013-12-05.xml*
## Abstract
Melissa officinalis L. (Lamiaceae) had been reported in traditional Moroccan medicine to exhibit calming, antispasmodic, and strengthening heart effects. Therefore, this study is aimed at determining the anti-inflammatory activities of M. officinalis L. leaves. The effect of the essential oil of the leaves of this plant was investigated for anti-inflammatory properties by using carrageenan and experimental trauma-induced hind paw edema in rats. The essential oil extracted from leaves by hydrodistillation was characterized by means of gas chromatography-mass spectrometry (GC-MS). M. officinalis contained Nerol (30.44%), Citral (27.03%), Isopulegol (22.02%), Caryophyllene (2.29%), Caryophyllene oxide (1.24%), and Citronella (1.06%). Anti-inflammatory properties of oral administration of essential oil at the doses of 200, 400 mg/kg p.o., respectively, showed significant reduction and inhibition of edema with 61.76% and 70.58%, respectively, (P<0.001) induced by carrageenan at 6 h when compared with control and standard drug (Indomethacin). On experimental trauma, M. officinalis L. essential oil showed pronounced reduction and inhibition of edema induced by carrageenan at 6 h at 200 and 400 mg/kg with 91.66% and 94.44%, respectively (P<0.001). We can conclude that the essential oil of M. officinalis L. possesses potential anti-inflammatory activities, supporting the traditional application of this plant in treating various diseases associated with inflammation and pain.
---
## Body
## 1. Introduction
The varied climate and heterogeneous ecologic conditions in Morocco have favoured the proliferation of more than 42,000 species of plants, divided into 150 families and 940 genuses [1–4]. Over the past decade herbal medicine has become a topic of global importance, making an impact on both world health and international trade. Medicinal plants continue to play central roles in the healthcare system of large proportion of the world’s population [3]. This is particularly true in the developing countries, where herbal medicine has a long and uninterrupted history of use. Recognition and development of medicinal and economic benefits of these plants are increasing in both developing and industrialized nations. Continuous usage of herbal medicine by a large proportion of the population in the developing countries is largely due to the high cost of western pharmaceuticals, health care, adverse effects that follow their use (in some cases), and the cultural, spiritual point of view of people [5–7]. In western developed countries, however, after a downturn in the pace of herbal use in recent decades, the pace is again quickening as scientists realize that the effective life span of any antibiotic is limited [8–10]. Worldwide spending on finding new anti-infective agents (including vaccines) was expected to increase 60% from the spending levels in 1993. New sources, especially plant sources, are also being investigated. Secondly, the public is becoming increasingly aware of problems with the overprescription and misuse of traditional antibiotics. In addition, many people are interested in having more autonomy over their medical care. All these make the knowledge of chemical, biological, and therapeutic activities of medicinal plants used as folklore medicine become necessary [11–14].Generally, the inflammatory process involves a series of events that can be elicited by numerous stimuli such as infectious agents, ischemia, antigen-antibody interaction, and thermal or physical injury. Inflammation is usually associated with pain as a secondary process resulting from the release of analgesic mediators: nonsteroidal anti-inflammatory drugs (NSAIDs), steroidal drugs, and immunosuppressant drugs, which have been used usually in the relief of inflammatory diseases by people around the world for a long time [9].However, these drugs were often associated with severe adverse side effects, such as gastrointestinal bleeding and peptic ulcers [9]. Recently, many natural medicines derived from medicinal plants were considered as effective and safer for the treatment of various diseases including inflammation and pain [15].There are various components to an inflammatory reaction that can contribute to the associated symptoms and tissue injury. Edema formation, leukocyte infiltration, and granuloma formation represent such components of inflammation [16]. Edema formation in the paw is the result of a synergism between various inflammatory mediators that increase vascular permeability and/or the mediators that increase blood flow [17, 18].M. officinalis L. (Lamiaceae) is a herbal medicine native to the Eastern Mediterranean region and Western Asia. M. officinalis has been traditionally used for different medical purposes such as tonic, antispasmodic medicine drug, carminative, diaphoretic medicine drug, surgical dressing for wounds, and sedative/hypnotic, and it is used for strengthening the memory and relief of stress-induced headache [19–21]. 
In our previous study, we have proven the efficacy of the extract of the essential oil of this plant on central nervous activity [22]. To the best of our knowledge, this is the first study to provide data that the essential oil of the leaves of M. officinalis L. evaluated against inflammations. Thus, the aim of this study is to evaluate the anti-inflammatory effect of the essential oil of the leaves of M. officinalis L. and, therefore, to determine the scientific basis for its use in traditional medicine in the treatment of inflammation.
## 2. Materials and Methods
### 2.1. Plant Material
Fresh leaves ofMelissa officinalis L. (Lamiaceae) were collected based on ethnopharmacological information from villages around the region Eljadida, middle Morocco in January 2013, with the agreement from the authorities with respect to the United Nations Convention of Biodiversity and with assistance of traditional medical practitioner. The plant was identified with botanist of the Department of Medicinal and Aromatic Plants, National Institute for Agricultural Research, Morocco. A voucher specimen (no. RAB76712) was deposited in the Herbarium of Botany Department of Scientific Institute of Rabat.
### 2.2. Preparation of the Essential Oil
Fresh leaves ofMelissa officinalis L. were hydrodistilled in Clevenger apparatus for 4 hours to obtain the essential oil with (v/w) yield. The extract was stored in a refrigerator at 4°C [23] and protected against light and heat until use. The essential oil was produced from leaves of M. officinalis by hydrodistillation method. Plant materials (100 g) cut into small pieces were placed in distillation apparatus and hydrodistilled for 4 h after the oils were dried over hydrous K2CO3; they were stored at +4°C until used for GC-MS analysis. The yield of extraction (ratio weight of EO/weight of dry plant) was 0.5% [1–3].
### 2.3. Phytochemical Analysis ofMelissa officinalis L. Essential Oil by Combined Gas Chromatography-Mass Spectrometry (GC-MS)
The essential oil was submitted to quantitative analysis in a Hewlett-Packard 575, GC condition: carrier gas N2 (0.5 bar) at flow rate of 1.0 m/min, sample size: 0.2μL injected, and capillary column (30 m siloxane 5% HP EM). The temperature of the injector and detector was set at 250°C. The oven temperature was programmed from 50°C to 250°C (5 min). The MS was taken at 70 eV. The components of the essential oil were identified by comparison of their mass spectra with those in the Wiley-NIST 7th edition library of mass spectral data. The percentage composition on the oil sample was calculated from GC-MS peak areas [1, 2].
### 2.4. Animals
Male Wistar rats weighing 180–220 g were used in this study. The animals were obtained from the animal centre of Mohammed V-Souissi University, Medicine and Pharmacy Faculty, Rabat, Morocco. All animals were kept in a room maintained under environmentally controlled conditions of23±1°C and 12 h light-12 h dark cycle. All animals had free access to water and standard diet. They were acclimatized at least one week before the experiments started. The animals submitted to oral administration of the extracts or drugs were fasted for 18 h before the experiment (water was available). All experiments were conducted in accordance with the Official Journal of the European Committee in 1991. The experiment protocol was approved by the Institutional Research Committee regarding the care and use of animals for experimental procedure in 2010; CEE509 [1–3, 24, 25].
### 2.5.In Vivo Anti-Inflammatory Activity
The evaluation of the anti-inflammatory activity ofM. officinalis L. essential oil was carried out by using two different methods that used chemical stimuli (winter test) [26] and mechanical stimuli (Riesterer and Jacques test) [27] induced paw edema in rats. In both methods, all animals were fasted 18 h before testing and received 5 mL of distilled water by gavages to minimize individual variations in response to the swelling of the paws. The right hind paw (RP) is not treated, and it is taken as control.
### 2.6. Carrageenan-Induced Rat Paw Edema
The carrageenan-induced paw edema model [9, 26–28] was used to evaluate the anti-inflammatory effect of M. officinalisessential oil. The initial paw volume was recorded using an Ugo Basile model LE750 plethysmometer.Rats groups were orally administered essential oil ofM. officinalisL. (200 and 400 mg/kg); Indomethacin (10 mg/kg) was used as reference drug while distilled water (5 mL/kg) was used as negative control. After 60 min had passed, carrageenan (0.05 mL of a 1% w/v solution, prepared in sterile saline) was injected subcutaneously into subplantar region of the left hind paw of each rat. The right hind paw is not treated; it is taken as a witness. One hour 30 minutes, 3 hour and 6 hours after the injection of carrageenan, the paw volumes of each rat were measured. Mean differences of treated groups were compared with the mean differences of the control group. The percentages of inhibition of inflammation were calculated according to the following formula:
(1)%ofinhibition=mean[Vleft-Vright]control-[Vleft-Vright]treated[Vleft-Vright]control*100,
where Vleft is the mean volume of edema on the left hind paw and Vright is the mean volume of edema on the right hind paw.
### 2.7. Experimental Trauma-Induced Rat Paw Edema
This assay was determined as described by Riesterer and Jacques test [27]. The test groups of rats were given orally 200 and 400 mg/kg of M. officinalisessential oil, the control group received 5 mL/kg of distilled water, and the standard group received the reference drug (Indomethacin 10 mg/kg). One hour after oral administration of different substances dropping a weight of 50 g onto the dorsum of the left hind paw of all animals. The right hind paw is not treated; it is taken as a witness.The difference volume of two paws was measured and taken as the edema value by using digital plethysmometer LE750 at 1 h 30 min, 3 h and 6 h after induction of inflammation [29]. Mean differences of treated groups were compared with the mean differences of the control group. The percentages of inhibition of inflammation were calculated according to the following formula:
(2)%ofinhibition=mean[Vleft-Vright]control-[Vleft-Vright]treated[Vleft-Vright]control*100.
### 2.8. Statistical Analysis
The results are expressed as mean ± SEM and analyzed by one-way analysis of variance (ANOVA) followed by Student’st-test. A value of P<0.001 was considered significant.
## 2.1. Plant Material
Fresh leaves ofMelissa officinalis L. (Lamiaceae) were collected based on ethnopharmacological information from villages around the region Eljadida, middle Morocco in January 2013, with the agreement from the authorities with respect to the United Nations Convention of Biodiversity and with assistance of traditional medical practitioner. The plant was identified with botanist of the Department of Medicinal and Aromatic Plants, National Institute for Agricultural Research, Morocco. A voucher specimen (no. RAB76712) was deposited in the Herbarium of Botany Department of Scientific Institute of Rabat.
## 2.2. Preparation of the Essential Oil
Fresh leaves ofMelissa officinalis L. were hydrodistilled in Clevenger apparatus for 4 hours to obtain the essential oil with (v/w) yield. The extract was stored in a refrigerator at 4°C [23] and protected against light and heat until use. The essential oil was produced from leaves of M. officinalis by hydrodistillation method. Plant materials (100 g) cut into small pieces were placed in distillation apparatus and hydrodistilled for 4 h after the oils were dried over hydrous K2CO3; they were stored at +4°C until used for GC-MS analysis. The yield of extraction (ratio weight of EO/weight of dry plant) was 0.5% [1–3].
## 2.3. Phytochemical Analysis ofMelissa officinalis L. Essential Oil by Combined Gas Chromatography-Mass Spectrometry (GC-MS)
The essential oil was submitted to quantitative analysis in a Hewlett-Packard 575, GC condition: carrier gas N2 (0.5 bar) at flow rate of 1.0 m/min, sample size: 0.2μL injected, and capillary column (30 m siloxane 5% HP EM). The temperature of the injector and detector was set at 250°C. The oven temperature was programmed from 50°C to 250°C (5 min). The MS was taken at 70 eV. The components of the essential oil were identified by comparison of their mass spectra with those in the Wiley-NIST 7th edition library of mass spectral data. The percentage composition on the oil sample was calculated from GC-MS peak areas [1, 2].
## 2.4. Animals
Male Wistar rats weighing 180–220 g were used in this study. The animals were obtained from the animal centre of Mohammed V-Souissi University, Medicine and Pharmacy Faculty, Rabat, Morocco. All animals were kept in a room maintained under environmentally controlled conditions of23±1°C and 12 h light-12 h dark cycle. All animals had free access to water and standard diet. They were acclimatized at least one week before the experiments started. The animals submitted to oral administration of the extracts or drugs were fasted for 18 h before the experiment (water was available). All experiments were conducted in accordance with the Official Journal of the European Committee in 1991. The experiment protocol was approved by the Institutional Research Committee regarding the care and use of animals for experimental procedure in 2010; CEE509 [1–3, 24, 25].
## 2.5.In Vivo Anti-Inflammatory Activity
The evaluation of the anti-inflammatory activity ofM. officinalis L. essential oil was carried out by using two different methods that used chemical stimuli (winter test) [26] and mechanical stimuli (Riesterer and Jacques test) [27] induced paw edema in rats. In both methods, all animals were fasted 18 h before testing and received 5 mL of distilled water by gavages to minimize individual variations in response to the swelling of the paws. The right hind paw (RP) is not treated, and it is taken as control.
## 2.6. Carrageenan-Induced Rat Paw Edema
The carrageenan-induced paw edema model [9, 26–28] was used to evaluate the anti-inflammatory effect of M. officinalisessential oil. The initial paw volume was recorded using an Ugo Basile model LE750 plethysmometer.Rats groups were orally administered essential oil ofM. officinalisL. (200 and 400 mg/kg); Indomethacin (10 mg/kg) was used as reference drug while distilled water (5 mL/kg) was used as negative control. After 60 min had passed, carrageenan (0.05 mL of a 1% w/v solution, prepared in sterile saline) was injected subcutaneously into subplantar region of the left hind paw of each rat. The right hind paw is not treated; it is taken as a witness. One hour 30 minutes, 3 hour and 6 hours after the injection of carrageenan, the paw volumes of each rat were measured. Mean differences of treated groups were compared with the mean differences of the control group. The percentages of inhibition of inflammation were calculated according to the following formula:
(1)%ofinhibition=mean[Vleft-Vright]control-[Vleft-Vright]treated[Vleft-Vright]control*100,
where Vleft is the mean volume of edema on the left hind paw and Vright is the mean volume of edema on the right hind paw.
## 2.7. Experimental Trauma-Induced Rat Paw Edema
This assay was determined as described by Riesterer and Jacques test [27]. The test groups of rats were given orally 200 and 400 mg/kg of M. officinalisessential oil, the control group received 5 mL/kg of distilled water, and the standard group received the reference drug (Indomethacin 10 mg/kg). One hour after oral administration of different substances dropping a weight of 50 g onto the dorsum of the left hind paw of all animals. The right hind paw is not treated; it is taken as a witness.The difference volume of two paws was measured and taken as the edema value by using digital plethysmometer LE750 at 1 h 30 min, 3 h and 6 h after induction of inflammation [29]. Mean differences of treated groups were compared with the mean differences of the control group. The percentages of inhibition of inflammation were calculated according to the following formula:
$$\%\,\text{inhibition} = \frac{[V_{\text{left}} - V_{\text{right}}]_{\text{control}} - [V_{\text{left}} - V_{\text{right}}]_{\text{treated}}}{[V_{\text{left}} - V_{\text{right}}]_{\text{control}}} \times 100. \qquad (2)$$
## 2.8. Statistical Analysis
The results are expressed as mean ± SEM and were analyzed by one-way analysis of variance (ANOVA) followed by Student's t-test. A value of P < 0.001 was considered significant.
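As a rough sketch of this analysis pipeline (mean ± SEM, one-way ANOVA, then a pairwise Student's t-test), assuming SciPy is available; the per-animal arrays below are invented for illustration and are not the study's raw data:

```python
import numpy as np
from scipy import stats

# Illustrative per-animal 6 h edema volumes (mL), n = 6 rats per group;
# these numbers are made up for demonstration only.
control = np.array([0.33, 0.36, 0.35, 0.32, 0.34, 0.34])
eomo_400 = np.array([0.09, 0.11, 0.10, 0.12, 0.08, 0.10])
indo_10 = np.array([0.15, 0.17, 0.16, 0.15, 0.17, 0.16])

# Mean ± SEM, the summary format used in the tables
for name, group in [("control", control), ("EOMO 400", eomo_400), ("Indomethacin 10", indo_10)]:
    print(f"{name}: {group.mean():.3f} ± {stats.sem(group):.3f}")

# One-way ANOVA across the three groups
f_stat, p_anova = stats.f_oneway(control, eomo_400, indo_10)

# Follow-up Student's t-test, treated versus control
t_stat, p_t = stats.ttest_ind(eomo_400, control)
print(p_anova < 0.001, p_t < 0.001)  # significance at the P < 0.001 threshold
```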
## 3. Results
### 3.1. Chemical Composition of the Essential Oil
The results obtained by GC-MS analysis of the essential oil of Melissa officinalis L. are presented in Figure 1. Thirteen compounds were identified in this essential oil by GC-MS. M. officinalis L. contained six major compounds, namely, Nerol (30.44%), Isopulegol (22.02%), Citral (27.03%), Caryophyllene (2.29%), Caryophyllene oxide (1.24%), and Citronella (1.06%), as the main constituents. Nerol and Citral have previously been reported as major chemical components of M. officinalis L. [18–22], but Isopulegol has never before been reported as a main component of M. officinalis L. (Figure 1).

Figure 1. Gas chromatography-mass spectrometry (GC-MS) of Melissa officinalis L. essential oil.
### 3.2. Carrageenan-Induced Rat Paw Edema
The results of the effect of M. officinalis essential oil on carrageenan-induced edema are shown in Tables 1 and 2. At doses of 200 and 400 mg/kg by the oral route, M. officinalis essential oil exhibited significant (P < 0.001) anti-inflammatory activity compared to the control and standard groups (Table 1). At 1 h 30 min, the essential oil inhibited edema by 70.58% and 76.47% at 200 and 400 mg/kg, respectively, similar to the standard drug Indomethacin (10 mg/kg) at 76.47%. At the sixth hour, however, M. officinalis essential oil showed greater inhibition, 61.76% and 70.58% at 200 and 400 mg/kg, p.o., respectively, compared to 52.94% for the reference drug Indomethacin (10 mg/kg) (Table 2).
Table 1. Effect of essential oil of Melissa officinalis L. on carrageenan-induced rat paw edema: mean volume of edema (left paw − right paw) in mL.

| Treatment group | Dose (mg/kg, p.o.) | 1 h 30 min | 3 h | 6 h |
|---|---|---|---|---|
| Control | — | 0.17 ± 0.013 | 0.26 ± 0.01 | 0.34 ± 0.023 |
| Indomethacin | 10 | 0.04 ± 0.004* | 0.07 ± 0.005* | 0.16 ± 0.008* |
| EOMO | 200 | 0.05 ± 0.009* | 0.1 ± 0.011* | 0.13 ± 0.01* |
| EOMO | 400 | 0.04 ± 0.01* | 0.09 ± 0.008* | 0.1 ± 0.013* |

Values are expressed as mean ± S.E.M. (n = 6); p.o.: oral route; n: number of animals per group; EOMO: essential oil of Melissa officinalis L.; *P < 0.001, statistically significant relative to the control and the reference drug (Indomethacin).
Percentage of inhibition of inflammation of essential oil ofMelissa officinalis L. using carrageenan-induced rat paw edema.
Treatment groups
Dose mg/kg p.o.
Percentage of inhibition of inflammation induced by carrageenan (%)
1 h 30 min
3 h
6 h
Indomethacin
10
76.47
73.07
52.94
EOMO
200
70.58
61.53
61.76
EOMO
400
76.74
65.38
70.58
N
=
6; these results compared with standard drug (Indomethacin, 10 mg/kg, p.o.) were administered by the oral route.
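The inhibition percentages in Table 2 follow from the mean edema volumes in Table 1 via equation (1); a short consistency check in Python (all values taken from Tables 1 and 2 above):

```python
# Mean edema volumes (mL) from Table 1: {group: (1 h 30 min, 3 h, 6 h)}
volumes = {
    "control":      (0.17, 0.26, 0.34),
    "Indomethacin": (0.04, 0.07, 0.16),
    "EOMO 200":     (0.05, 0.10, 0.13),
    "EOMO 400":     (0.04, 0.09, 0.10),
}

for group in ("Indomethacin", "EOMO 200", "EOMO 400"):
    inhibition = [(c - t) / c * 100
                  for c, t in zip(volumes["control"], volumes[group])]
    print(group, [f"{x:.2f}" for x in inhibition])

# Indomethacin -> 76.47, 73.08, 52.94
# EOMO 200     -> 70.59, 61.54, 61.76
# EOMO 400     -> 76.47, 65.38, 70.59
# These agree with Table 2 to rounding; Table 2's 76.74 for EOMO 400 at
# 1 h 30 min appears to be a digit transposition of 76.47.
```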
### 3.3. Experimental Trauma-Induced Rat Paw Edema
The effect of two doses of M. officinalis essential oil on experimental trauma-induced inflammation is shown in Tables 3 and 4, where the results are compared to those of the control and the standard drug Indomethacin (10 mg/kg, p.o.). At both dose levels, M. officinalis essential oil significantly decreased the inflammation induced by experimental trauma (Table 3).
Table 3. Effect of essential oil of Melissa officinalis L. on experimental trauma-induced rat paw edema: mean volume of edema (left paw − right paw) in mL.

| Treatment group | Dose (mg/kg, p.o.) | 1 h 30 min | 3 h | 6 h |
|---|---|---|---|---|
| Control | — | 0.153 ± 0.005 | 0.22 ± 0.021 | 0.36 ± 0.015 |
| Indomethacin | 10 | 0.04 ± 0.009* | 0.03 ± 0.01* | 0.03 ± 0.01* |
| EOMO | 200 | 0.085 ± 0.005* | 0.05 ± 0.007* | 0.03 ± 0.005* |
| EOMO | 400 | 0.068 ± 0.004* | 0.04 ± 0.004* | 0.02 ± 0.005* |

Values are expressed as mean ± S.E.M. (n = 6); EOMO: essential oil of Melissa officinalis L.; *P < 0.001, statistically significant compared to the control and the reference drug (Indomethacin).
Table 4. Percentage of inhibition of inflammation by essential oil of Melissa officinalis L. in experimental trauma-induced rat paw edema (%).

| Treatment group | Dose (mg/kg, p.o.) | 1 h 30 min | 3 h | 6 h |
|---|---|---|---|---|
| Indomethacin | 10 | 65.81 | 86.36 | 91.66 |
| EOMO | 200 | 44.44 | 77.27 | 91.66 |
| EOMO | 400 | 55.55 | 81.81 | 94.44 |

N = 6; all treatments were administered by the oral route, and the results are compared with the standard drug (Indomethacin, 10 mg/kg, p.o.).

At 200 and 400 mg/kg, p.o., M. officinalis essential oil exhibited its maximum anti-inflammatory activity, 91.66% and 94.44%, respectively, at the sixth hour (Table 4). This inhibition of edema was similar to that obtained with Indomethacin (10 mg/kg, p.o.), 91.66%, over the same period.
## 4. Discussion
Aromatic and medicinal plants have been used for thousands of years in every part of the world by numerous civilizations. Driven by intuition and a sense of observation, people were able to find answers to their health problems in the plant environment [4–6]. Recently, the search for novel pharmacotherapies from medicinal plants for inflammatory diseases has progressed significantly owing to their fewer side effects and better tolerability. Aromatherapy is currently used worldwide in the management of chronic pain [1–5]. The oil composition of M. officinalis L. has already been reported [18–22]. It has been shown that Nerol, Isopulegol, Citral, Caryophyllene, Caryophyllene oxide, and Citronella account for 80% of M. officinalis L. essential oils; in our study, these compounds represented 84.08%. These differences in the chemical composition of the essential oil may be due to both developmental and environmental factors that influence plant metabolism. We analyzed the effects of different doses of essential oil from the leaves of M. officinalis L. on anti-inflammatory activity.

Following oral administration of M. officinalis L. extract at doses of 300 and 2000 mg/kg, p.o., no toxicity and no significant changes in body weight between the control and treated groups were observed, indicating that the LD50 is higher than 2000 mg/kg. These results were previously reported by Bounihi et al. [22].

In the present study, the anti-inflammatory effect of the essential oil of M. officinalis L. was investigated after subplantar injection of carrageenan and after experimental trauma in the rat paw. The carrageenan-induced paw edema method is the most widely used to evaluate the anti-inflammatory effect of natural products. Edema formation due to carrageenan in the rat paw is a biphasic event [30]. The initial phase, which occurs between 0 and 2.5 h after the injection of the phlogistic agent, has been attributed to the action of mediators such as histamine, serotonin, and bradykinin on vascular permeability [30–32]. It has been reported that histamine and serotonin are mainly released during the first 1.5 h, while bradykinin is released within 2.5 h after carrageenan injection [32, 33]. The late phase is associated with the release of prostaglandins and occurs from 2.5 h to 6 h after carrageenan injection [33, 34].

In the inflammatory response, there is an increase in the permeability of endothelial lining cells and an influx of blood leukocytes into the interstitium, an oxidative burst, and the release of cytokines (interleukins and tumor necrosis factor-α (TNF-α)). At the same time, there is an induction of the activity of several enzymes (oxygenases, nitric oxide synthases, and peroxidases) as well as of arachidonic acid metabolism. The inflammatory process also involves the expression of cellular adhesion molecules, such as intercellular adhesion molecule (ICAM) and vascular cell adhesion molecule (VCAM) [34].

Carrageenan-induced hind paw edema in the rat is known to be sensitive to cyclooxygenase inhibitors, but not to lipoxygenase inhibitors, and has been used to evaluate the effect of nonsteroidal anti-inflammatory drugs, which primarily inhibit the cyclooxygenase involved in prostaglandin synthesis. It has been demonstrated that the suppression of carrageenan-induced inflammation after the third hour correlates reasonably well with therapeutic doses of most clinically effective anti-inflammatory agents [33].

M. officinalis essential oil at doses of 200 and 400 mg/kg, p.o., significantly (P < 0.001) reduced and inhibited the edema in both the early and late phases of carrageenan-induced inflammation. In experimental trauma-induced edema in rats, the extract also significantly (P < 0.001) reduced and inhibited the edema in the different phases of the inflammatory response. Based on these results, M. officinalis essential oil effectively inhibited the increase in paw volume throughout the phases of inflammation. This indicates that M. officinalis essential oil has significant anti-inflammatory activity, perhaps by inhibiting the release of the inflammatory mediators serotonin and histamine and by suppressing prostaglandins and cytokines.

Carrageenan- and experimental trauma-induced paw edema in rats are suitable experimental animal models to evaluate the antiedematous effect of diverse bioactive compounds such as plant extracts and essential oils [35–38]. Although these methods allow screening samples for anti-inflammatory activity, they give very little information about the underlying mechanism. The exact mechanism of the anti-inflammatory activity of the essential oil used in the present study is therefore unclear. However, other investigators have reported that M. officinalis essential oil contains Nerol, Citral, Caryophyllene, and Citronella as main components, and we found the same phytochemicals in this extract [22]. Citral is the main component of Cymbopogon citratus Stapf essential oil, which has been shown to suppress IL-1β and IL-6 in LPS-stimulated peritoneal macrophages of normal mice [37, 38]. While some essential oils are able to inhibit the production of proinflammatory cytokines such as TNF-α, some of their main components (Citral, geraniol, citronellol, and carvone) can also suppress TNF-α-induced neutrophil adherence responses [38]. Another work [39] revealed that Citral inhibited TNF-α in RAW 264.7 cells stimulated by lipopolysaccharide. According to these authors, the anti-inflammatory activity of the M. officinalis essential oil used here may be attributed, at least in part, to the presence of Citral as a main component.

Further chemical and pharmacological analyses of the extract will be conducted to isolate and characterize the active principles responsible for the anti-inflammatory activity. We conclude that the essential oil of M. officinalis L. possesses potential anti-inflammatory activity, supporting the traditional application of this plant in treating various diseases associated with inflammation and pain.
---
*Source: 101759-2013-12-05.xml* | 2013 |
# Genetic Polymorphisms of IL17 and Chagas Disease in the South and Southeast of Brazil
**Authors:** Pâmela Guimarães Reis; Christiane Maria Ayo; Luiz Carlos de Mattos; Cinara de Cássia Brandão de Mattos; Karina Mayumi Sakita; Amarilis Giaretta de Moraes; Larissa Pires Muller; Julimary Suematsu Aquino; Luciana Conci Macedo; Priscila Saamara Mazini; Ana Maria Sell; Divina Seila de Oliveira Marques; Reinaldo Bulgarelli Bestetti; Jeane Eliete Laguila Visentainer
**Journal:** Journal of Immunology Research
(2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1017621
---
## Abstract
The aim of this study was to investigate possible associations between the genetic polymorphisms IL17A G197A (rs2275913) and IL17F T7488C (rs763780) and Chagas disease (CD) and/or the severity of left ventricular systolic dysfunction (LVSD) in patients with chronic Chagas cardiomyopathy (CCC). The study, with 260 patients and 150 controls, was conducted in the South and Southeast regions of Brazil. Genotyping was performed by PCR-RFLP. The A allele and A/A genotype of IL17A were significantly increased in patients and their subgroups (patients with CCC; patients with CCC and LVSD; and patients with CCC and severe LVSD) compared to the control group. Analysis according to gender showed that the A/A genotype of IL17A was more frequent in female patients with LVSD and with mild to moderate LVSD, and also in male patients with LVSD. The frequency of the IL17F T/C genotype was higher in male patients with CCC and severe LVSD and in female patients with mild to moderate LVSD. The results suggest the possible involvement of the IL17A and IL17F polymorphisms in susceptibility to chronic Chagas disease and in the development and progression of cardiomyopathy.
---
## Body
## 1. Introduction
Chagas disease (CD) is a serious anthropozoonosis common in the Americas and found mainly in endemic areas of 21 Latin American countries [1]. On account of multinational initiatives, infection prevalence is progressively decreasing; it is estimated that 6 to 8 million individuals are currently infected worldwide, with an incidence of 28,000 cases a year [2]. Chagas disease presents an acute phase and a chronic phase. After the acute phase, most infected patients enter the chronic phase of the disease, and about 60 to 70% of infected persons are considered to have the indeterminate (asymptomatic) form [3–6]. Several years (10 to 30) after the start of the chronic phase, 30 to 40% of patients develop clinical manifestations known as the clinical forms: cardiac, digestive (mainly megaesophagus and megacolon), and cardiodigestive [5, 6]. Chronic Chagas cardiomyopathy (CCC) is the most severe form of the disease, affecting 20 to 30% of infected individuals. In endemic areas, the disease is the main cause of death in patients aged between 30 and 50 years [5, 7].

It is known that genetic variability and the immunologic response influence the pathogenesis of the chronic phase of the disease. Associations have been observed between several cytokine genes [8] and susceptibility or protection against the development or progression of CD and/or its clinical forms. IL-17 is a proinflammatory cytokine secreted by activated T cells and expressed in different tissues. This cytokine takes part in inflammatory responses mediated by T cells and plays an important role in tissue homeostasis and disease progression [9]. IL-17F presents a high degree of homology with IL-17A (57% identical) [9] and seems to have a biological action similar to that of IL-17A, in vitro and in vivo, though significantly weaker [10]. The genes that encode them map to the same chromosome, at position 6p12 [9, 11].

Polymorphisms in genes encoding cytokines may influence the level of cytokine production and, consequently, cause different immunological responses to different diseases. Previous studies show that the genetic polymorphisms IL17A G197A and IL17F T7488C affect the production of IL-17A and IL-17F, respectively [12, 13]. Such polymorphisms have already been associated with autoimmune and inflammatory diseases, such as rheumatoid arthritis [14] and periodontitis [15], and with cancer, both gastric [16] and breast [17]. To our knowledge, only one study involving SNPs of IL17A and CD [18] has been published so far, and no articles on SNPs of IL17F in CD have been published yet. For this reason, our study aims to investigate whether the genetic polymorphisms IL17A G197A (rs2275913) and IL17F T7488C (rs763780) are related to CD and/or the severity of left ventricular systolic dysfunction (LVSD) in patients with CCC from the North and Northwest regions of Parana and the Northwest region of São Paulo (states located in the South and Southeast of Brazil, resp.).
## 2. Material and Methods
### 2.1. Patients and Controls
For this study, 260 patients with chronic CD were selected from different municipalities in the North and Northwest regions of Parana and in the Northwest region of São Paulo. The patients were cared for at the Chagas Disease Laboratory of the State University of Maringa, the Clinical Hospital in Londrina, and the Base Hospital of the Medical School in São José do Rio Preto. All patients underwent a resting electrocardiogram (ECG) and two-dimensional echocardiography. Patients who presented a normal ECG were classified as patients without CCC, and patients with electrocardiographic changes common to CCC were classified as patients with CCC. The severity of LVSD was measured according to the left ventricular ejection fraction (LVEF), applying the Teichholz method following the II Brazilian Guideline for Severe Heart Diseases [19]. Patients with CCC were classified according to LVEF into three groups: patients without LVSD (LVEF > 60%), patients with mild to moderate LVSD (LVEF 40–60%), and patients with severe LVSD (LVEF < 40%). For all statistical analyses, the following groups were considered: all Chagas disease patients (CD), chronic Chagas cardiomyopathy patients (CCC), patients without Chagas cardiomyopathy (without CCC), chronic Chagas cardiomyopathy patients with LVSD (with LVSD), chronic Chagas cardiomyopathy patients without LVSD (without LVSD), patients with mild to moderate LVSD (mild/moderate LVSD), and patients with severe LVSD (severe LVSD).

The control group was composed of 150 healthy, unrelated individuals (patients' spouses and residents of retirement communities) with negative serology to T. cruzi antigens. The clinicopathological features of patients and controls are presented in Table 1. No significant differences were observed among the groups in terms of gender, but a difference in age was observed between CCC and without CCC patients (63.9 ± 10.2 versus 58.6 ± 7.8, respectively; P ≤ 0.05). Due to the significant miscegenation of the Brazilian population, we considered patients and controls as a mixed ethnic group (Caucasians, Mulattos, and Blacks) according to Parra et al. (2003) [20]. Mean age, gender rates, and residence in the same geographical areas were carefully matched when selecting the groups.
Table 1. Characteristics of the chronic Chagas disease patients and controls from the South and Southeast of Brazil.

| | CD patients (N = 260) | CCC (N = 212) | Without CCC (N = 48) | Control (N = 150) |
|---|---|---|---|---|
| Gender^a, n (%) | | | | |
| Male | 121 (46.5) | 97 (45.8) | 24 (50.0) | 74 (49.3) |
| Female | 139 (53.5) | 115 (54.2) | 24 (50.0) | 76 (50.7) |
| Age^b | | | | |
| Min–max | 31–90 | 31–90 | 38–76 | 28–100 |
| Mean ± SD (years) | 62.9 ± 10.0 | 63.9 ± 10.2 | 58.6 ± 7.8 | 62.3 ± 17.4 |

CCC, patients with chronic Chagas cardiomyopathy; Min, minimum age; Max, maximum age; SD, standard deviation. ^a No statistically significant difference was observed between the groups for gender. ^b A statistically significant difference was observed between the groups for age: CCC versus without CCC.

The laboratory diagnosis of CD in patients and controls was made by ELISA (enzyme-linked immunosorbent assay) in serum or plasma, using the "Chagas" immunoassay from Abbott Laboratories (Santiago, Chile). In cases of weak reactivity, the diagnosis was confirmed by the indirect immunofluorescence test (IIFT) with the IMUNOCRUZI® antigen (Biolab, Rio de Janeiro, Brazil) or by ELISAcruzi (bioMérieux SA, Brazil), following the manufacturers' instructions.

The ethics committee of each institution approved this study, under the registered protocols (012/2010-COPEP-UEM, CAAE 0296.0.093.000–09; FAMERP #009/2011), and written informed consent was obtained from all subjects prior to participation.
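As a side note on the classification used in this section, the LVEF cutoffs translate directly into the three severity groups; a minimal sketch of that rule (the function name is ours, not from the study):

```python
def classify_lvsd(lvef_percent: float) -> str:
    """Map left ventricular ejection fraction (%) to the LVSD severity
    groups used in this study."""
    if lvef_percent > 60:
        return "without LVSD"
    if lvef_percent >= 40:          # LVEF 40-60%
        return "mild to moderate LVSD"
    return "severe LVSD"            # LVEF < 40%

print(classify_lvsd(65), "|", classify_lvsd(50), "|", classify_lvsd(30))
```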
### 2.2. DNA Extraction and Genotyping
Genomic DNA was extracted using an adapted salting-out method [21] from 250 μL of buffy coat obtained from 5 mL of peripheral blood collected in EDTA (ethylenediaminetetraacetic acid) tubes. The concentration and purity of the material were determined with NanoDrop 2000® equipment (Thermo Scientific, Wilmington, USA).

The SNPs in IL17A (rs2275913) and IL17F (rs763780) were genotyped using PCR-RFLP (polymerase chain reaction-restriction fragment length polymorphism) [15]. The primer sequences for IL17A G197A were sense 5′-AACAAGTAAGAATGAAAAGAGGACATGGT-3′ and antisense 5′-CCCCCAATGAGGTCATAGAAGAATC-3′, while for IL17F T7488C they were sense 5′-ACCAAGGCTGCTCTGTTTCT-3′ and antisense 5′-GGTAAGGAGTGGCATTTCTA-3′. DNA amplification was carried out in a total volume of 30 μL containing 100 ng of genomic DNA, 1.0 μM of each primer, 200 μM of each dNTP, 2.0 mM MgCl2, 3 μL of 10x PCR buffer, and 1.5 U of Taq DNA polymerase (Invitrogen Life Technologies, Grand Island, NY, USA). The PCR products were digested for one hour at 37°C with the enzyme XagI (Fermentas, Canada) for IL17A G197A and the enzyme NlaIII (New England Biolabs) for IL17F T7488C and were subsequently separated by electrophoresis on a 3.5% agarose gel with SYBR Green (Invitrogen Life Technologies, Grand Island, NY, USA).
### 2.3. Statistical Analysis
The allele and genotype frequencies of IL17A G197A and IL17F T7488C were estimated, and the genotype distributions were evaluated for Hardy-Weinberg equilibrium [22]. Association tests were performed for the codominant, dominant, recessive, overdominant, and log-additive genetic inheritance models. Values of P ≤ 0.05 were considered statistically significant for the chi-square test with Yates correction and for logistic regression. Statistical comparisons between the groups were performed, and the estimated risk of developing CD and/or CCC in individuals carrying the genetic polymorphisms was calculated by determining the odds ratio (OR) with a 95% confidence interval, adjusted for gender and age. All statistical analyses were performed using the software SNPStats (http://bioinfo.iconcologia.net/index.php) [23] and the OpenEpi program, version 3.03a (http://www.openepi.com/Menu/OE_Menu.htm).
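To illustrate the Hardy-Weinberg check described above, here is a small sketch using the IL17A control-group genotype counts reported later in Table 2 (88 G/G, 59 G/A, 3 A/A); this is a plain chi-square goodness-of-fit test, not necessarily the exact routine SNPStats applies:

```python
from scipy.stats import chi2

def hardy_weinberg_chi2(n_aa, n_ab, n_bb):
    """Chi-square goodness-of-fit test of observed genotype counts
    against Hardy-Weinberg expectations (1 degree of freedom)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)           # frequency of the first allele
    q = 1 - p
    expected = [n * p * p, 2 * n * p * q, n * q * q]
    observed = [n_aa, n_ab, n_bb]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)

stat, p_value = hardy_weinberg_chi2(88, 59, 3)   # IL17A, control group
print(f"chi2 = {stat:.2f}, P = {p_value:.3f}")   # P > 0.05: consistent with HWE
```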
## 3. Results
The distributions of genotype frequencies for all analyzed genes were in Hardy-Weinberg equilibrium (P > 0.05). In order to evaluate the possible association of the IL17A G197A and IL17F T7488C SNPs with Chagas disease, the allele and genotype frequencies of patients (CD) and their subgroups (CCC, without CCC, with LVSD, without LVSD, mild/moderate LVSD, severe LVSD) were compared with controls (Table 2). Statistically significant differences were observed for the A allele and A/A genotype of IL17A, but no significant difference was found for IL17F.
Table 2. Genotype and allele frequency distributions of IL17A rs2275913 and IL17F rs763780 in Chagas disease patients and controls in a population from the South and Southeast of Brazil. Values are n (%).

| Allele/genotype | CD patients (N = 260) | CCC (N = 212) | Without LVSD (N = 109) | With LVSD (N = 103) | Mild/moderate LVSD (N = 52) | Severe LVSD (N = 51) | Without CCC (N = 48) | Control (N = 150) |
|---|---|---|---|---|---|---|---|---|
| IL17A G197A | | | | | | | | |
| G | 369 (71.2) | 297 (70.4) | 159 (72.9) | 138 (67.6) | 72 (70.6) | 66 (64.7) | 72 (75.0) | 235 (78.3) |
| A | 149 (28.8)^a | 125 (29.6)^b | 59 (27.1) | 66 (32.4)^c | 30 (29.4) | 36 (35.3)^d | 24 (25.0) | 65 (21.7) |
| GG | 130 (50.2) | 104 (49.3) | 58 (53.2) | 46 (45.1) | 24 (47.0) | 22 (43.1) | 26 (54.2) | 88 (58.7) |
| GA | 109 (42.1) | 89 (42.2) | 43 (39.5) | 46 (45.1) | 24 (47.0) | 22 (43.1) | 20 (41.7) | 59 (39.3) |
| AA | 20 (7.7)^e | 18 (8.5)^f | 8 (7.3) | 10 (9.8)^g | 3 (6.0) | 7 (13.8)^h | 2 (4.1) | 3 (2.0) |
| IL17F T7488C | | | | | | | | |
| T | 484 (93.1) | 394 (92.9) | 207 (94.9) | 187 (90.8) | 97 (93.3) | 90 (88.2) | 90 (93.7) | 282 (94.0) |
| C | 36 (6.9) | 30 (7.1) | 11 (5.1) | 19 (9.2) | 7 (6.7) | 12 (11.8) | 6 (6.3) | 18 (6.0) |
| TT | 224 (86.2) | 182 (85.8) | 98 (89.9) | 84 (81.6) | 45 (86.5) | 39 (76.5) | 42 (87.5) | 132 (88.0) |
| TC | 36 (13.8) | 30 (14.2) | 11 (10.1) | 19 (18.4) | 7 (13.5) | 12 (23.5) | 6 (12.5) | 18 (12.0) |

CCC: patients with chronic Chagas cardiomyopathy; LVSD: left ventricular systolic dysfunction; recessive model: AA versus GA + GG; OR: odds ratio; CI: confidence interval. The genotypic differences were adjusted for the effect of age and gender. ^a P = 0.032, OR = 1.46, 95% CI = 1.05–2.05; CD patients versus controls. ^b P = 0.021, OR = 1.52, 95% CI = 1.08–2.15; CCC versus controls. ^c P = 0.009, OR = 1.73, 95% CI = 1.15–2.59; with LVSD versus controls. ^d P = 0.009, OR = 1.97, 95% CI = 1.20–3.21; severe LVSD versus controls. ^e Recessive model: P = 0.009, OR = 4.12, 95% CI = 1.20–14.13; CD patients versus controls. ^f Recessive model: P = 0.005, OR = 4.67, 95% CI = 1.35–16.18; CCC versus controls. ^g Recessive model: P = 0.005, OR = 5.73, 95% CI = 1.52–21.64; with LVSD versus controls. ^h Recessive model: P = 0.002, OR = 8.18, 95% CI = 2.00–33.51; severe LVSD versus controls.

The A allele frequency of IL17A was significantly higher in CD patients than in controls (P = 0.032, OR = 1.46, 95% CI = 1.05–2.05). The same was found when CCC patients (P = 0.021, OR = 1.52, 95% CI = 1.08–2.15), patients with CCC and LVSD (P = 0.009, OR = 1.73, 95% CI = 1.15–2.59), and patients with CCC and severe LVSD (P = 0.009, OR = 1.97, 95% CI = 1.20–3.21) were compared to controls.

The A/A genotype was more frequent in CD patients than in the control group, and statistically significant differences were observed in more than one model of genetic inheritance (codominant: P = 0.019, OR = 4.53, 95% CI = 1.31–15.73; recessive: P = 0.0089, OR = 4.12, 95% CI = 1.20–14.13; log-additive: P = 0.02, OR = 1.50, 95% CI = 1.06–2.13). The same was seen when the subsets were compared: CCC versus controls (codominant: P = 0.01, OR = 5.16, 95% CI = 1.47–18.14; recessive: P = 0.0048, OR = 4.67, 95% CI = 1.35–16.18; log-additive: P = 0.013, OR = 1.57, 95% CI = 1.09–2.24); patients with CCC and LVSD versus controls (codominant: P = 0.006, OR = 6.81, 95% CI = 1.77–26.29; dominant: P = 0.034, OR = 1.73, 95% CI = 1.04–2.87; recessive: P = 0.0045, OR = 5.73, 95% CI = 1.52–21.64; log-additive: P = 0.0046, OR = 1.85, 95% CI = 1.20–2.85); patients with CCC and severe LVSD versus controls (codominant: P = 0.0047, OR = 9.64, 95% CI = 2.28–40.85; recessive: P = 0.002, OR = 8.18, 95% CI = 2.00–33.51; log-additive: P = 0.057, OR = 2.11, 95% CI = 1.24–3.60). For all comparisons, the recessive inheritance model was the best according to the Akaike information criterion (AIC); that is, two copies of A are necessary to change the risk, so G/A and G/G have the same effect. No difference was observed when the allele and genotype frequencies of IL17A were compared between patients with CCC and patients without CCC. Likewise, no association was observed when the progression of the cardiac forms was considered: the different forms (without LVSD, with LVSD, mild/moderate LVSD, and severe LVSD) were compared with each other, and no statistically significant difference was noticed.

After stratifying according to gender, significant differences were observed for the IL17A and IL17F genotype frequencies when the progression of the cardiac form was evaluated. The IL17A A/A genotype was more frequent in female patients with LVSD (OR = 6.63, 95% CI = 1.21–36.40) and with mild/moderate LVSD (OR = 7.57, 95% CI = 1.07–53.40) than in the control group; although not significant, males with LVSD also had a higher frequency of the A/A genotype compared to controls (13.5 versus 7.84%, resp.) (Table 3). In relation to IL17F, the T/C genotype was more frequent in male patients with severe LVSD when compared to the other groups: without LVSD (OR = 4.82, 95% CI = 1.55–14.98), with mild/moderate LVSD (OR = 6.00, 95% CI = 1.18–30.63), patients without CCC (OR = 6.70, 95% CI = 1.19–37.53), and controls (OR = 3.40, 95% CI = 1.24–9.31). In female patients, no statistical difference was observed, although the T/C frequency was higher in mild/moderate LVSD (17%) than in the other patient groups and controls (Table 4). Considering the variable age, no significant difference was observed between the IL17 SNPs and CD and/or the severity of left ventricular systolic dysfunction (LVSD).
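For the recessive-model comparisons above, the unadjusted odds ratio and Wald 95% CI can be reproduced from the genotype counts in Table 2; a sketch in Python (SNPStats additionally adjusts for gender and age, so the published values differ slightly from this crude calculation):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = odds_ratio * math.exp(-z * se_log_or)
    upper = odds_ratio * math.exp(z * se_log_or)
    return odds_ratio, lower, upper

# Recessive model, CD patients versus controls: A/A in 20 of 259
# genotyped patients and in 3 of 150 controls (Table 2).
print(odds_ratio_ci(20, 239, 3, 147))
# ~ (4.10, 1.20, 14.0), close to the reported adjusted
# OR = 4.12 (95% CI = 1.20-14.13)
```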
Table 3. Genotype frequencies of IL17A rs2275913 in Brazilian patients with LVSD in chronic Chagas cardiomyopathy, stratified according to gender. Values are n (%).

| Gender | IL17A G197A | With LVSD | Mild/moderate LVSD | Control |
|---|---|---|---|---|
| Male | GG | 23 (45.1) | 13 (59.1) | 43 (58.11) |
| | GA | 24 (47.06) | 9 (40.9) | 30 (40.54) |
| | AA | 4 (7.84) | 0 | 1 (1.35) |
| Female | GG | 23 (45.1) | 11 (37.93) | 45 (59.21) |
| | GA | 22 (43.14) | 15 (51.72) | 29 (38.16) |
| | AA | 6 (11.76)^a | 3 (10.35)^b | 2 (2.63) |

LVSD, chronic Chagas cardiomyopathy patients with left ventricular systolic dysfunction; OR, odds ratio; CI, confidence interval. Data adjusted by age. Only significant results are shown. ^a OR = 6.63, 95% CI = 1.21–36.40; with LVSD versus control. ^b OR = 7.57, 95% CI = 1.07–53.40; mild/moderate LVSD versus control.
Table 4. Genotype frequencies of IL17F rs763780 in Brazilian Chagas disease patients with chronic cardiomyopathy, stratified according to gender. Values are n (%).

| Gender | IL17F T7488C | Without LVSD | Mild/moderate LVSD | Severe LVSD | Without CCC | Controls |
|---|---|---|---|---|---|---|
| Male | TT | 40 (86.96) | 21 (91.3) | 19 (65.5) | 22 (45.8) | 64 (86.49) |
| | TC | 6 (13.04)^a | 2 (8.7)^b | 10 (34.5) | 2 (4.2)^c | 10 (13.51)^d |
| Female | TT | 58 (92.06) | 24 (82.8) | 20 (90.9) | 20 (41.7) | 68 (89.47) |
| | TC | 5 (7.94) | 5 (17.2) | 2 (9.1) | 4 (8.3) | 8 (10.53) |

CCC, chronic Chagas cardiomyopathy; LVSD, left ventricular systolic dysfunction; OR, odds ratio; CI, confidence interval. Data adjusted by age. Only significant results are shown. ^a OR = 4.82, 95% CI = 1.55–14.98; severe LVSD versus without LVSD. ^b OR = 6.02, 95% CI = 1.18–30.78; severe LVSD versus mild/moderate LVSD. ^c OR = 6.70, 95% CI = 1.19–37.53; severe LVSD versus without CCC patients. ^d OR = 3.40, 95% CI = 1.24–9.31; severe LVSD versus controls.
## 4. Discussion
The identification of candidate genes for susceptibility or protection against CD has major implications, not only for better understanding the pathogenesis of the disease, but also for controlling it and developing therapeutic strategies. In this study, a possible association between the genetic polymorphisms IL17A G197A and IL17F T7488C and CD and the severity of CCC was investigated in a population from the South and Southeast regions of Brazil.

The IL17A A allele and A/A genotype were more frequent in CD and CCC patients, in female patients with LVSD or mild/moderate LVSD, and in male patients with LVSD than in controls. An increased risk of severe LVSD was observed in males carrying the IL17F T/C genotype. The IL17 polymorphisms could thus be correlated with the risk of disease, indicating susceptibility to chronic Chagas disease and an increased risk of severe cardiomyopathy when gender was considered in the multivariate analyses. The mutant A allele of IL17A has been associated with a higher production of IL-17 [12], and IL-17F activity is similar to that of IL-17A, although significantly weaker [10, 12, 13]. Based on these findings, it is possible to infer that a higher production of IL-17, a proinflammatory cytokine, could contribute to tissue damage and might be related to the development and progression of CCC in this population.

Considering the biological function of IL-17 in Chagas disease, Guedes et al. [24] showed that the neutralization of IL-17 in BALB/c mice infected with T. cruzi resulted in a higher recruitment of inflammatory cells to the cardiac tissue in the acute phase of the infection, leading to increased myocarditis and, consequently, premature death, despite the reduction of the local parasitism. Miyazaki et al. [25] reported the importance of IL-17 in T. cruzi infection and in the control of cardiac inflammation in CD. They observed that in experimental acute infection with T. cruzi, IL-17-deficient mice presented a higher mortality rate and parasitemia when compared to the control group (C57BL/6, wild type), as well as a lower expression of cytokines such as IFN-γ, IL-6, and TNF-α, suggesting a protective role of IL-17 in the acute phase of the disease. The neutralization of IL-17 also resulted in a higher production of IL-12, IFN-γ, TNF-α, chemokines, and their receptors, indicating that IL-17 may play a role in the control of cardiac inflammation through the modulation of the Th1 response. On the other hand, Magalhães et al. [26] showed that in Chagas patients with the cardiac form, the total lymphocytes and Th17 cells presented a lower expression of IL-17A in comparison to patients with the indeterminate form and the control group; the analysis of the correlation between IL-17A and cardiac function showed that high expression of this cytokine was associated with a better clinical outcome in human CD, according to the values of ejection fraction and left ventricular diastolic diameter, indicating a protective role against the severity of CCC.

Five SNPs of IL17A were analyzed in patients with CD in a population of an endemic region of Colombia. The SNP rs8193036 was associated with protection against T. cruzi infection and the development of CCC. Meanwhile, for the SNP rs2275913, the same SNP evaluated in this study, the frequency of the A allele was higher in patients than in controls and a significant difference was observed, although significance was lost after correction [18]. We observed that the IL17A A allele and A/A genotype were more frequent in Chagas disease as well as in CCC with or without LVSD, but no difference was observed between CCC and without CCC patients. However, after stratifying according to gender, female patients with the IL17A A/A genotype had an increased risk (approximately sevenfold) of developing mild/moderate LVSD, as did male patients for LVSD (although not significantly), and male patients with the IL17F T/C genotype had a higher risk of developing severe LVSD compared to the other cardiac forms and controls.

A study conducted by Peng et al. [27] in Chinese patients with dilated cardiomyopathy did not find an association with the IL17A G197A and IL17F T7488C polymorphisms. However, after stratification by gender, IL17F was associated with dilated cardiomyopathy in male patients presenting the T/C-C/C genotypes, suggesting that the presence of the rare allele (C) might be associated with the disease in these patients. In our study, we found that the IL17F T/C genotype was associated with the development of severe LVSD in male patients when the sample was stratified by gender.

The risk of developing the severe cardiac form in males with CD was shown in two Brazilian studies. Rassi et al. [28] showed that male gender and left ventricular systolic dysfunction on echocardiography are potential risk factors for death in subjects with CD; they evaluated a cohort of 424 Brazilian outpatients followed for about eight years and confirmed the results in 153 patients of another Brazilian community hospital. Faé et al. [29] observed a higher risk of developing severe forms of cardiomyopathy in men (OR = 8.75), corroborating the results of this study.

The present study has potential limitations. The major limitation was the number of patients, which limits the significance of the results; consequently, no strong association could be found, principally when independent multiple comparisons were carried out. However, the risk of population stratification bias due to differences in ethnic background was minimized by matching patients with control individuals of the same ethnic background; mean age, gender rates, and residence in the same geographical areas were carefully matched when selecting the groups. Another limitation was that IL17 gene expression and serum levels were not evaluated.
## 5. Conclusions
In these South and Southeast Brazilian patients, the IL17A polymorphism (A/A genotype and A allele) was associated with susceptibility to chronic CD and with the severity of left ventricular systolic dysfunction (LVSD). In addition, the IL17A A/A genotype was associated with mild/moderate LVSD in female patients, whereas the IL17F T/C genotype was associated with severe LVSD in male patients. These results suggest the possible involvement of the IL17A and IL17F polymorphisms in susceptibility to chronic CD and in the development and progression of CCC. Additional studies are needed to confirm these results and to understand the functional role of these polymorphisms in CD.
---
*Source: 1017621-2017-04-02.xml* | 1017621-2017-04-02_1017621-2017-04-02.md | 31,159 | Genetic Polymorphisms ofIL17 and Chagas Disease in the South and Southeast of Brazil | Pâmela Guimarães Reis; Christiane Maria Ayo; Luiz Carlos de Mattos; Cinara de Cássia Brandão de Mattos; Karina Mayumi Sakita; Amarilis Giaretta de Moraes; Larissa Pires Muller; Julimary Suematsu Aquino; Luciana Conci Macedo; Priscila Saamara Mazini; Ana Maria Sell; Divina Seila de Oliveira Marques; Reinaldo Bulgarelli Bestetti; Jeane Eliete Laguila Visentainer | Journal of Immunology Research
(2017) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2017/1017621 | 1017621-2017-04-02.xml | ---
## Abstract
The aim of this study was to investigate possible associations between genetic polymorphisms ofIL17A G197A (rs2275913) andIL17F T7488C (rs763780) with Chagas Disease (CD) and/or the severity of left ventricular systolic dysfunction (LVSD) in patients with chronic Chagas cardiomyopathy (CCC). The study with 260 patients and 150 controls was conducted in the South and Southeast regions of Brazil. The genotyping was performed by PCR-RFLP. The A allele and A/A genotype ofIL17A were significantly increased in patients and their subgroups (patients with CCC; patients with CCC and LVSD; and patients with CCC and severe LVSD) when compared to the control group. The analysis according to the gender showed that the A/A genotype ofIL17A was more frequent in female with LVSD and mild to moderate LVSD and also in male patients with LVSD. The frequency ofIL17F T/C genotype was higher in male patients with CCC and severe LVSD and in female with mild to moderate LVSD. The results suggest the possible involvement of the polymorphisms ofIL17A andIL17F in the susceptibility to chronic Chagas disease and in development and progression of cardiomyopathy.
---
## Body
## 1. Introduction
Chagas disease (CD) is a serious anthropozoonosis common in the Americas and found mainly in endemic areas of the 21 Latin American countries [1]. On account of multinational initiatives, infection prevalence is progressively decreasing, and it is estimated that 6 to 8 million individuals are currently infected in the world, with an incidence of 28.000 cases a year [2]. Chagas disease presents an acute phase and a chronic phase. After the acute phase, most of the infected patients enter in the chronic phase of the disease and about 60 to 70% of infected persons are considered to have the indeterminate form (asymptomatic) of the disease [3–6]. After several years (10 to 30) of starting the chronic phase, 30 to 40% of the patients develop clinical manifestations known as the clinical forms: cardiac, digestive (mainly megaesophagus and megacolon), and cardiodigestive [5, 6]. The chronic Chagas cardiomyopathy (CCC) is the most severe form of the disease that affects 20 to 30% of the infected individuals. In endemic areas the disease is the main death cause in patients aged between 30 and 50 years [5, 7].It is known that genetic variability and immunologic response influence the pathogenesis of the chronic phase of the disease. Associations were observed in several cytokine genes [8] with the susceptibility or protection against the development or progression of the CD and/or its clinical forms. The IL-17 is a proinflammatory cytokine secreted by T cells activated and expressed in different tissues. This cytokine takes part in inflammatory responses mediated by T cells and plays an important role in the tissue homeostasis and diseases progression [9]. The IL-17F presents a high degree of homology with the IL-17A (57% identical) [9] and seems to have a biological action similar to IL-17A, in vitro and in vivo, though significantly weaker [10]. The genes that codify them are mapped in the same chromosome, in the position 6p12 [9, 11].Polymorphism in genes encoding cytokines may influence the level of cytokines production and, consequently, cause different immunological responses to different diseases. Previous studies show that genetic polymorphisms ofIL17A G197A andIL17F T7488C affect the production of IL-17A and F, respectively [12, 13]. Such polymorphisms have already been associated with autoimmune and inflammatory diseases, as rheumatoid arthritis [14], periodontitis [15], and cancer, both gastric [16] and breast cancer [17]. To our knowledge, only one study involving the SNPs ofIL17A and the CD [18] was found so far, and if we consider the SNPs ofIL17F there are no related articles published yet. For this reason, our study aims to investigate whether the genetic polymorphisms ofIL17A G197A (rs2275913) andIL17F T7488C (rs763780) were related to CD and/or the severity of the left ventricular systolic dysfunction (LVSD) in patients with CCC from North and Northeast regions of Parana and the Northeast region of São Paulo (states located in the South and Southeast of Brazil, resp.).
## 2. Material and Methods
### 2.1. Patients and Controls
For this study, 260 patients with chronic CD were selected from different municipalities in the North and Northwest regions of Parana and in the Northwest region of São Paulo. The patients were cared for in the Chagas Disease Laboratory in the State University of Maringa, the Clinical Hospital in Londrina, and the Base Hospital of the Medical School in São José do Rio Preto. All patients were submitted to a resting electrocardiogram (ECG) exam and a two-dimensional echocardiography. Patients who presented a normal ECG were classified as patients without CCC and patients with electrocardiographic changes, common to CCC, were classified as patients with CCC. The severity of the LVSD was measured according to the left ventricular ejection fraction (LVEF) and the Teichhoolz method was applied following the II Brazilian Guideline for Severe Heart Diseases [19]. Patients with CCC were classified considering the (LVEF) in three different groups: patients without LVSD (LVEF > 60%); patients with mild to moderate LVSD (LVEF 40–60%); and patients with severe LVSD (LVEF < 40%). To all statistical analysis were considered the following groups: all Chagas disease patients (CD), chronic Chagas cardiomyopathy patients (CCC), without Chagas cardiomyopathy patients (without CCC), chronic Chagas cardiomyopathy patients with LVSD (with LVSD), chronic Chagas cardiomyopathy patients without LVSD (without LVSD), patients with mild to moderate LVSD (Mild/moderate LVSD), and patients with severe LVSD (severe LVSD).The control group was composed of 150 individuals, healthy and nonrelated, patient’s spouses, and contacts retirement communities’ residents with negative serology toT. cruzi antigens. The clinicopathological features of patients and controls are presented in Table 1. No significant differences were observed among groups in terms of gender, but differences in age were observed between CCC and without CCC patients (63.9 ± 10.2 versus 58.6 ± 7.8, respectively; P≤0.05). Due to the significant miscegenation of Brazilian population we consider patients and controls as a mixed ethnic group (Caucasians, Mulattos, and Blacks) according to Parra et al. (2003) [20]. Mean age, gender rates, and residence in the same geographical areas were carefully matching to select the groups.Table 1
Characteristics of the chronic Chagas disease patients and controls from South and Southeast of Brazil.
CD patients
CCC
Without CCC
Control
N
=
260
N
=
212
N
=
48
N
=
150
Genderan (%)
Male
121 (46.5)
97 (45.8)
24 (50.0)
74 (49.3)
Female
139 (53.5)
115 (54.2)
24 (50.0)
76 (50.7)
Ageb
Min-max
31–90
31–90
38–76
28–100
Mean ± SD (year)
62.9 ± 10.0
63.9 ± 10.2
58.6 ± 7.8
62.3 ± 17.4
CCC, patients with chronic Chagas cardiomyopathy; Min, minimum age; Max, maximum age; SD, standard deviation.
aNo statistically significant difference was observed between the groups for gender.
bStatistically significant difference was observed between the groups for age: CCC versus without CCC.The laboratory diagnosis of CD in patients and controls was made by ELISA (Enzyme-Linked ImmunoSorbent Assay) test, in serum or plasma, using the immunoassay “Chagas” from Abbott Laboratories (Santiago, Chile). In cases of weak reagent, the diagnosis was confirmed by the indirect immunofluorescence test (IIFT) with the IMUNOCRUZI® antigen (Biolab, Rio de Janeiro, Brazil) or ELISAcruzi (bioMerieus SA, Brazil), respecting the manufacturer’s instructions.The Ethics committees from each institution have approved this study, as seen in the protocols they have registered (012/2010-COPEP-UEM, CAAE 0296.0.093.000–09; FAMERP - # 009/2011), and written informed consent was obtained from all subjects prior to participation.
### 2.2. DNA Extraction and Genotyping
The extraction method used in this research was the salting-out adapted [21]. The genomic DNA was extracted from 250 μL of buffy-coat obtained from 5 mL of peripheral blood collected in tubes with EDTA (Ethylenediaminetetraacetic acid). The material’s concentration and purity were determined by NanoDrop 2000® equipment (Thermo Scientific, Wilmington, USA).The SNPs inIL-17A (rs2275913) andIL-17F (rs763780) were genotyped using PCR-RFLP (Polymerase Chain Reaction-Restriction Fragment Length Polymorphism) [15]. The primers sequences toIL17A G197A were sense 5′-AACAAGTAAGAATGAAAAGAGGACATGGT-3′ and antisense 5′-CCCCCAATGAGGTCATAGAAGAATC-3, while toIL17F T7488C they were sense 5′-ACCAAGGCTGCTCTGTTTCT-3′ and antisense 5′-GGTAAGGAGTGGCATTTCTA-3′. The reaction of DNA amplification was made in a total volume of 30 μL, containing 100 ng of genomic DNA, 1,0 μM from each primer, 200 μM from each dNTP, 2,0 mM of MgCl2, 3 μL of 10x PCR buffer, and 1,5 U of Taq DNA polymerase (Invitrogen Life Technologies, Grand Island, NY, USA). The PCR products were digested during one hour submitted to 37°C with the enzymeXagI (Fermentas, Canada) toIL17A G197A and the enzymeNlaIII (New England Biolabs) toIL17F T7488C and, subsequently, separated by agarose gel electrophoresis to 3,5% with SYBR Green (Invitrogen Life Technologies, Grand Island, NY, USA).
### 2.3. Statistical Analysis
The allele and genotype frequencies ofIL17A G197A andIL17F T7488C were estimated and the genotype distribution was evaluated to Hardy-Weinberg balance [22]. The association tests were realized to the codominant, dominant, recessive, overdominant, and log-additive genetic inheritance models. The P≤0.05 values were considered statistically significant to Chi-square test with Yates correction and logistic regression. The statistical comparisons between these groups were realized and the estimated risk to develop CD and/or CCC in individuals who hold genetic polymorphisms was calculated by determination of OD (Odds Ratio) with 95% of confidence interval, adjusted by gender and age. All statistical analysis was performed using the software SNPStats (http://bioinfo.iconcologia.net/index.php) [23] and the OpenEpi program, version 3.03a (http://www.openepi.com/Menu/OE_Menu.htm).
## 3. Results
The genotype frequency distributions for all analyzed genes were in Hardy-Weinberg equilibrium (P > 0.05). To evaluate the possible association of the IL17A G197A and IL17F T7488C SNPs with Chagas disease, the allele and genotype frequencies of patients (CD) and their subgroups (CCC, without CCC, with LVSD, without LVSD, mild/moderate LVSD, severe LVSD) were compared with those of controls (Table 2). Statistically significant differences were observed for the A allele and A/A genotype of IL17A, but no significant difference was found for IL17F.
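As an aside, the Hardy-Weinberg check can be illustrated with a minimal Python sketch (ours, not the procedure the authors cite [22]), using the IL17A genotype counts for CD patients reported in Table 2 below.

```python
# Minimal Hardy-Weinberg chi-square sketch (illustration only).
# IL17A genotype counts for CD patients from Table 2: GG = 130, GA = 109, AA = 20.
from scipy.stats import chi2

gg, ga, aa = 130, 109, 20
n = gg + ga + aa                       # genotyped individuals
p = (2 * gg + ga) / (2 * n)            # G allele frequency
q = 1 - p                              # A allele frequency

observed = [gg, ga, aa]
expected = [n * p * p, 2 * n * p * q, n * q * q]  # HWE-expected counts
x2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
pval = chi2.sf(x2, df=1)               # df = 3 classes - 1 - 1 estimated freq
print(f"chi2 = {x2:.2f}, P = {pval:.2f}")  # P > 0.05, consistent with HWE
```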
Table 2. Genotypes and allele frequencies distribution of IL17A rs2275913 and IL17F rs763780 in Chagas disease patients and controls in a population from South and Southeast of Brazil.

| Allele/genotype, n (%) | CD patients (N = 260) | CCC (N = 212) | Without LVSD (N = 109) | With LVSD (N = 103) | Mild/moderate LVSD (N = 52) | Severe LVSD (N = 51) | Without CCC (N = 48) | Control (N = 150) |
|---|---|---|---|---|---|---|---|---|
| **IL17A G197A** | | | | | | | | |
| G | 369 (71.2) | 297 (70.4) | 159 (72.9) | 138 (67.6) | 72 (70.6) | 66 (64.7) | 72 (75.0) | 235 (78.3) |
| A | 149 (28.8)^a | 125 (29.6)^b | 59 (27.1) | 66 (32.4)^c | 30 (29.4) | 36 (35.3)^d | 24 (25.0) | 65 (21.7) |
| GG | 130 (50.2) | 104 (49.3) | 58 (53.2) | 46 (45.1) | 24 (47.0) | 22 (43.1) | 26 (54.2) | 88 (58.7) |
| GA | 109 (42.1) | 89 (42.2) | 43 (39.5) | 46 (45.1) | 24 (47.0) | 22 (43.1) | 20 (41.7) | 59 (39.3) |
| AA | 20 (7.7)^e | 18 (8.5)^f | 8 (7.3) | 10 (9.8)^g | 3 (6.0) | 7 (13.8)^h | 2 (4.1) | 3 (2.0) |
| **IL17F T7488C** | | | | | | | | |
| T | 484 (93.1) | 394 (92.9) | 207 (94.9) | 187 (90.8) | 97 (93.3) | 90 (88.2) | 90 (93.7) | 282 (94.0) |
| C | 36 (6.9) | 30 (7.1) | 11 (5.1) | 19 (9.2) | 7 (6.7) | 12 (11.8) | 6 (6.3) | 18 (6.0) |
| TT | 224 (86.2) | 182 (85.8) | 98 (89.9) | 84 (81.6) | 45 (86.5) | 39 (76.5) | 42 (87.5) | 132 (88.0) |
| TC | 36 (13.8) | 30 (14.2) | 11 (10.1) | 19 (18.4) | 7 (13.5) | 12 (23.5) | 6 (12.5) | 18 (12.0) |

CCC: patients with chronic Chagas cardiomyopathy; LVSD: left ventricular systolic dysfunction; Recessive model: AA versus GA + GG; OR: odds ratio; CI: confidence interval. Adjustment of the genotypic differences for the effect of age and gender was applied.
^a P = 0.032; OR = 1.46, 95% CI = 1.05–2.05; CD patients versus controls.
^b P = 0.021; OR = 1.52, 95% CI = 1.08–2.15; CCC versus controls.
^c P = 0.009; OR = 1.73, 95% CI = 1.15–2.59; with LVSD versus controls.
^d P = 0.009; OR = 1.97, 95% CI = 1.20–3.21; severe LVSD versus controls.
^e Recessive model: P = 0.009; OR = 4.12, 95% CI = 1.20–14.13; CD patients versus controls.
^f Recessive model: P = 0.005; OR = 4.67, 95% CI = 1.35–16.18; CCC versus controls.
^g Recessive model: P = 0.005; OR = 5.73, 95% CI = 1.52–21.64; with LVSD versus controls.
^h Recessive model: P = 0.002; OR = 8.18, 95% CI = 2.00–33.51; severe LVSD versus controls.

The A allele frequency of IL17A was significantly higher in the CD patients than in controls (P = 0.032, OR = 1.46, 95% CI = 1.05–2.05). The same was found when CCC patients (P = 0.021, OR = 1.52, 95% CI = 1.08–2.15), patients with CCC and LVSD (P = 0.009, OR = 1.73, 95% CI = 1.15–2.59), and patients with CCC and severe LVSD (P = 0.009, OR = 1.97, 95% CI = 1.20–3.21) were compared to controls.

The A/A genotype was more frequent in the CD patients than in the control group, and statistically significant differences were observed in more than one model of genetic inheritance (codominant: P = 0.019, OR = 4.53, 95% CI = 1.31–15.73; recessive: P = 0.0089, OR = 4.12, 95% CI = 1.20–14.13; log-additive: P = 0.02, OR = 1.50, 95% CI = 1.06–2.13). The same results can be seen when the subsets are compared: CCC versus controls (codominant: P = 0.01, OR = 5.16, 95% CI = 1.47–18.14; recessive: P = 0.0048, OR = 4.67, 95% CI = 1.35–16.18; log-additive: P = 0.013, OR = 1.57, 95% CI = 1.09–2.24); patients with CCC and LVSD versus controls (codominant: P = 0.006, OR = 6.81, 95% CI = 1.77–26.29; dominant: P = 0.034, OR = 1.73, 95% CI = 1.04–2.87; recessive: P = 0.0045, OR = 5.73, 95% CI = 1.52–21.64; log-additive: P = 0.0046, OR = 1.85, 95% CI = 1.20–2.85); and patients with CCC and severe LVSD versus controls (codominant: P = 0.0047, OR = 9.64, 95% CI = 2.28–40.85; recessive: P = 0.002, OR = 8.18, 95% CI = 2.00–33.51; log-additive: P = 0.057, OR = 2.11, 95% CI = 1.24–3.60). For all comparisons, the recessive inheritance model was the best according to the Akaike information criterion (AIC), meaning that two copies of A are necessary to change the risk, so G/A and G/G have the same effect. No difference was observed when allele and genotype frequencies of IL17A were compared between patients with CCC and patients without CCC. Likewise, no association was observed when the progression of the cardiac form was considered: the different forms (without LVSD, with LVSD, mild/moderate LVSD, and severe LVSD) were compared with each other and no statistically significant difference was noticed.

After stratifying according to gender, significant differences were observed for the IL17A and IL17F genotype frequencies when the progression of the cardiac form was evaluated. The IL17A A/A genotype was more frequent in females with LVSD (OR = 6.63, 95% CI = 1.21–36.40) and with mild/moderate LVSD (OR = 7.57, 95% CI = 1.07–53.40) than in the control group; although not significant, males with LVSD also had a higher frequency of the AA genotype compared to controls (13.5 versus 7.84%, resp.) (Table 3). In relation to IL17F, the T/C genotype was more frequent in male patients with severe LVSD than in the other groups: without LVSD (OR = 4.82, 95% CI = 1.55–14.98), with mild/moderate LVSD (OR = 6.00, 95% CI = 1.18–30.63), without CCC patients (OR = 6.70, 95% CI = 1.19–37.53), and controls (OR = 3.40, 95% CI = 1.24–9.31). In females no statistical difference was observed, although T/C was higher in mild/moderate LVSD (17%) than in the other patients and controls (Table 4). Considering the variable age, no significant difference was observed between the IL17 SNPs and CD and/or the severity of the left ventricular systolic dysfunction (LVSD).
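As a quick arithmetic check of the recessive-model estimate above, here is a minimal sketch (ours). The published OR of 4.12 was adjusted for gender and age, so the crude value is expected to be close but not identical.

```python
# Crude recessive-model odds ratio (AA versus GA+GG), CD patients vs controls,
# computed directly from the Table 2 genotype counts (no covariate adjustment).
aa_cd, other_cd = 20, 130 + 109   # CD patients: AA vs GA+GG
aa_ct, other_ct = 3, 88 + 59      # controls:    AA vs GA+GG

crude_or = (aa_cd * other_ct) / (aa_ct * other_cd)
print(f"crude recessive OR = {crude_or:.2f}")  # ~4.10, versus 4.12 reported
```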
Table 3. Genotype frequencies of IL17A rs2275913 in Brazilian patients with LVSD in chronic Chagas cardiomyopathy, stratified according to gender.

| Gender | IL17A G197A | With LVSD, n (%) | Mild/moderate LVSD, n (%) | Control, n (%) |
|---|---|---|---|---|
| Male | GG | 23 (45.1) | 13 (59.1) | 43 (58.11) |
| | GA | 24 (47.06) | 9 (40.9) | 30 (40.54) |
| | AA | 4 (7.84) | 0 | 1 (1.35) |
| Female | GG | 23 (45.1) | 11 (37.93) | 45 (59.21) |
| | GA | 22 (43.14) | 15 (51.72) | 29 (38.16) |
| | AA | 6 (11.76)^a | 3 (10.35)^b | 2 (2.63) |

LVSD, chronic Chagas cardiomyopathy patients with left ventricular systolic dysfunction; OR, odds ratio; CI, confidence interval. Data adjusted by age. Only significant results are shown.
^a OR = 6.63, 95% CI = 1.21–36.40; with LVSD versus control.
^b OR = 7.57, 95% CI = 1.07–53.40; mild/moderate LVSD versus control.
Table 4. Genotype frequencies of IL17F rs763780 in Brazilian Chagas disease patients with chronic cardiomyopathy, stratified according to gender.

| Gender | IL17F T7488C | Without LVSD, n (%) | Mild/moderate LVSD, n (%) | Severe LVSD, n (%) | Without CCC, n (%) | Controls, n (%) |
|---|---|---|---|---|---|---|
| Male | TT | 40 (86.96) | 21 (91.3) | 19 (65.5) | 22 (45.8) | 64 (86.49) |
| | TC | 6 (13.04)^a | 2 (8.7)^b | 10 (34.5) | 2 (4.2)^c | 10 (13.51)^d |
| Female | TT | 58 (92.06) | 24 (82.8) | 20 (90.9) | 20 (41.7) | 68 (89.47) |
| | TC | 5 (7.94) | 5 (17.2) | 2 (9.1) | 4 (8.3) | 8 (10.53) |

CCC, chronic Chagas cardiomyopathy; LVSD, left ventricular systolic dysfunction; OR, odds ratio; CI, confidence interval. Data adjusted by age. Only significant results are shown.
^a OR = 4.82, 95% CI = 1.55–14.98; severe LVSD versus without LVSD.
^b OR = 6.02, 95% CI = 1.18–30.78; severe LVSD versus mild/moderate LVSD.
^c OR = 6.70, 95% CI = 1.19–37.53; severe LVSD versus without CCC patients.
^d OR = 3.40, 95% CI = 1.24–9.31; severe LVSD versus controls.
## 4. Discussion
The identification of genes that are candidates for susceptibility or protection against CD has major implications, not only for better understanding the pathogenesis of the disease but also for controlling it and developing therapeutic strategies. In this study, a possible association of the genetic polymorphisms IL17A G197A and IL17F T7488C with CD and the severity of CCC was investigated in a population from the South and Southeast regions of Brazil.

In this study, the IL17A A allele and the A/A genotype were more frequent in CD and CCC patients, in females with LVSD or mild/moderate LVSD, and in males with LVSD when compared to controls. A risk of severe LVSD was observed in males carrying the IL17F T/C genotype. The IL17 polymorphisms could be correlated to the risk of disease, indicating susceptibility to chronic Chagas disease and an increased risk of severe cardiomyopathy when gender was considered in multivariate analyses. The mutant allele A of IL17A has been associated with a higher production of IL-17 [12], and IL-17F activity is similar to that of IL-17A, although significantly weaker [10, 12, 13]. Based on these findings, it is possible to infer that a higher production of IL-17, a proinflammatory cytokine, could contribute to tissue damage and might be related to the development and progression of CCC in this population.

Considering the biological function of IL-17 in Chagas disease, Guedes et al. [24] showed that the neutralization of IL-17 in BALB/c mice infected with T. cruzi resulted in a higher recruitment of inflammatory cells to the cardiac tissue in the acute phase of the infection, leading to an increase in myocarditis and, consequently, premature death, despite the reduction of local parasitism. Miyazaki et al. [25] reported the importance of IL-17 in T. cruzi infection and in the control of cardiac inflammation in CD. They observed that, in experimental acute infection with T. cruzi, IL-17-deficient mice presented a higher mortality rate and parasitemia when compared to the control group (C57BL/6, wild type), as well as a lower expression of cytokines such as IFN-γ, IL-6, and TNF-α, suggesting a protective role of IL-17 in the acute phase of the disease. The neutralization of IL-17 also resulted in a higher production of IL-12, IFN-γ, TNF-α, chemokines, and their receptors, indicating that IL-17 may play a role in the control of cardiac inflammation through the modulation of the Th1 response. On the other hand, Magalhães et al. [26] showed that in Chagas patients with the cardiac form, total lymphocytes and Th17 cells presented a low expression of IL-17A in comparison to patients with the indeterminate form and the control group; the analysis of the correlation between IL-17A and cardiac function showed that high expression of this cytokine was associated with a better clinical outcome in human CD, according to the values of ejection fraction and left ventricular diastolic diameter, indicating a protective role against the severity of CCC.

Five SNPs of IL17A were analyzed in patients with CD in a population of an endemic region of Colombia. The SNP rs8193036 was associated with protection against T. cruzi infection and the development of CCC. Meanwhile, for the SNP rs2275913, the same SNP evaluated in this study, the frequency of allele A was higher in patients than in controls and a significant difference was observed, although significance was lost after correction [18].
We observed that the IL17A A allele and AA genotype were more frequent in Chagas disease as well as in CCC with or without LVSD, but no difference was observed between the CCC and without CCC patient groups. However, after stratifying according to gender, females with the IL17A AA genotype had an approximately sevenfold risk of developing mild/moderate LVSD, as did males for developing LVSD (although not significant); and males with the IL17F T/C genotype had a higher risk of developing severe LVSD compared to the other cardiac forms and controls.

A study conducted by Peng et al. [27] in Chinese patients with dilated cardiomyopathy did not find an association with the IL17A G197A and IL17F T7488C polymorphisms. However, after stratification by gender, IL17F was associated with dilated cardiomyopathy in male patients presenting the T/C-C/C genotypes, suggesting that the presence of the rare allele (C) might be associated with the disease in these patients. In this study we found that the IL17F T/C genotype was associated with developing severe LVSD in male patients when the sample was stratified by gender.

The risk of developing the severe cardiac form in males with CD was shown in two Brazilian studies. Rassi et al. [28] showed that gender (male) and left ventricular systolic dysfunction on echocardiography are potential risk factors for death in subjects with CD. They evaluated a cohort of 424 Brazilian outpatients followed for about eight years and confirmed the results in 153 patients of another Brazilian community hospital. Faé et al. [29] observed a higher risk of developing severe forms of cardiomyopathy in men (OR = 8.75), corroborating the results of this study.

The present study has potential limitations. The major limitation was the number of patients, which limits the significance of the results; consequently, no strong association could be found, principally when independent multiple comparisons were carried out. However, the risk of population stratification bias, due to differences in ethnic background, was minimized by matching patients with control individuals of the same ethnic background. Mean age, gender rates, and residence in the same geographical areas were carefully matched when selecting the groups. Another limitation was that IL17 gene expression and serum levels were not evaluated.
## 5. Conclusions
In these South and Southeast Brazilian patients, the IL17A polymorphisms, the AA genotype and the A allele, were associated with susceptibility to chronic CD and the severity of the left ventricular systolic dysfunction (LVSD). In addition, the IL17A A/A genotype was associated with mild/moderate LVSD in female patients, whereas the IL17F T/C genotype was associated with severe LVSD in male patients. These results suggest the possible involvement of the IL17A and IL17F polymorphisms in susceptibility to chronic CD and in the development and progression of CCC. Additional studies are needed to confirm these results and to understand the functional role of these polymorphisms in CD.
---
*Source: 1017621-2017-04-02.xml* | 2017 |
# Orangutan Night-Time Long Call Behavior: Sleep Quality Costs Associated with Vocalizations in Captive Pongo
**Authors:** David R. Samson; Del Hurst; Robert W. Shumaker
**Journal:** Advances in Zoology
(2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/101763
---
## Abstract
Researchers have suggested that the ability of male primates to emit long-distance vocalizations is energetically costly, potentially incurring important adaptive consequences for the calling individuals. Here, we present the first preliminary data on captive orangutan (Pongo spp.) nocturnal long calls, generated at the Indianapolis Zoo. We used videography to characterize long calls and their observed behavioral contexts for 48 nights (816 observed hours totaling 83 long calls). We generated somnographic data for a subset of the long calls. Overall measures of sleep quality generated by infrared videography were then compared to the somnographic, nocturnal long call data. We tested hypotheses related to the proximate mechanisms involved in the initialization of vocalization and the potential costs of emitting long calls to overall sleep quality. We found that (1) the long calls performed were conscious and premeditated in nature and (2) a greater number of night-time long calls shared a positive relationship with arousability and sleep fragmentation and a negative relationship with total sleep time and sleep quality. These findings strongly suggest that only several minutes of total time invested in long calls throughout the night disproportionately cost the caller by negatively impacting overall sleep quality.
---
## Body
## 1. Introduction
Several nonhuman primate species are known to emit “loud calls.” These alarm call vocalizations are like most animal acoustic signals in that they are primarily produced during the active period [1–6] and are thus characterized by a species-specific circadian distribution [7, 8]. Given that primate loud calls are stereotypically characterized by traits such as acoustic intensity (dB) and form type (i.e., length and frequency modulation), and are often vocalized by high-ranking males [9], it has been suggested that Old World monkey and ape loud calls are phylogenetically homologous characteristics [10]. In general, several hypotheses have been forwarded in attempts to explain the function of primate loud calls: mate attraction, inter-group spacing and intra-group cohesion, and territorial advertisement [11].

Within the repertoire of orangutan (Pongo) vocalizations, the long call is characterized by several idiosyncratic traits: it is often produced by large individuals [5, 12–14], it travels greater than 300 meters [15], it has only been observed to be emitted by flanged males [13, 14, 16], it is the loudest call in the repertoire (reaching 100 dB at 1 meter and audible at distances of more than 1 km), and calls may exceed three minutes in length [13, 17].

It has been hypothesized that in the wild, long calls attract adult females to coordinate mating [5, 12, 14, 18], act as a mechanism with which individuals associate within a community and coordinate seasonal movements [19, 20], and/or mediate dominance relationships among adult males [5, 21, 22]. Specific functions have been proposed, including signaling nulliparous females to initiate mating with dominant males [12, 23, 24], serving as a location beacon for sexually receptive females over long distances [12, 25, 26], keeping potentially antagonistic males apart [5, 12, 14, 27], and allowing males to communicate their next-day travel direction [28]. It has also been suggested that these calls may function as a social mechanism to restrict hormone production in small subordinate/unflanged males and thus keep them from developing into large dominant flanged males, although this hypothesis has yet to be tested [29].

The proximate mechanisms and/or stimuli involved in the initiation of long calls remain unknown. MacKinnon (1974: 54) noted: “Sometimes the calls seemed spontaneous but often they followed a sudden sound cue. Calls were…triggered by sudden pig noises…the bark of a deer…a human sneeze…a distant gunshot…a sudden gust of wind and a clap of thunder…the crash of nearby tree-fall…or the sound of breaking branches.” Furthermore, RS has noted anecdotal evidence that Azy (the flanged male observed in this captive group) has emitted long calls in response to spontaneous environmental cues (e.g., automobiles driving by, wind gusts, etc.). The REM (rapid eye movement) sleep state is associated with vivid dreaming; dreaming occurs during sleep because the brain attends to endogenously generated activity which can be internally perceived as actually occurring events—much like hallucination [30]. Therefore it is possible that sleep-to-wake transitions leading to long calls could be catalyzed by an endogenous stimulus. Quantitative data circumscribing the context in which captive orangutans perform long calls have yet to be recorded.

In mammals (humans included), sleep fragmentation has been found to diminish attention, sensory-motor processing, motivation, and memory [31–34].
These diminishing effects on behavioral performance have been associated with direct costs to the individual [35]. The costs of sleep deprivation on waking function are well documented, yet contextual descriptions in which sleep loss is adaptive are emerging. Costly morphological, physiological, and behavioral traits have been evolutionarily selected, especially via sexual selection [36]. In an extreme example, the polygynous pectoral sandpiper has been found to remain active for greater than 95% of a 19-day period when females are at peak fertility [37]. To date, it has not been assessed whether orangutans experience any physiological costs to sleep quality relative to the investment directed at night-time long call performance.

The goal of this study was to document and describe captive orangutan night-time vocalizations. In addition, hypotheses related to (1) the proximate mechanisms involved in the initialization of vocalization and (2) the potential costs of emitting long calls to overall sleep quality were tested. First, we hypothesize that night-time long calls will begin from an abrupt sleep-to-wake transition, as an unconscious reactionary response to abiotic forces (e.g., loud, disruptive noises) or internal sleep states (i.e., dreams). Second, we hypothesize that individuals that vocalize a greater number of long calls will experience reduced overall sleep quality. Specifically, we tested the following predictions.

(1) Night-time long calls will initialize from a REM stage of sleep and/or in association with a high-decibel environmental stimulus.

(2) A greater number of night-time long call vocalizations will reduce sleep quality and sleep duration and increase sleep arousability (number of motor activity bouts per hour) and fragmentation (the number of brief awakenings greater than 2 min per hour).
## 2. Methods
### 2.1. Study Subjects
Subjects housed at the Indianapolis Zoo (total N = 5) were three females, Katy (Studbook ID number: 2248), Knobi (1733), and Lucy (1972), and two males, Azy (1616) and Rocky (3331). All subjects were classified as adults with the exception of Rocky, the only adolescent. None of the subjects were geriatric, as the life span of wild orangutans is approximately 60 years [38]. All subjects were hybrids of the Bornean (Pongo pygmaeus) and Sumatran (Pongo abelii) species. Rocky, Katy, and Lucy were privately owned and were part of the entertainment industry prior to moving into the Association of Zoos & Aquariums (AZA) community; specific information about their personal histories is therefore limited (RS personal data). The individuals from the entertainment industry were hand-reared by humans, none having any exposure to their mothers during early growth and development. Azy and Knobi have always lived within the AZA community and have well-documented biographies and rich social experience. Subjects were housed in interconnected indoor and outdoor enclosures and had regular access to all areas throughout the duration of the study. The indoor enclosure contained laminate sleeping platforms located approximately 1 m off the floor. The indoor space included five possible sleeping rooms. Subjects had access to natural and artificially enriched environments. The indoor enclosure was set at a constant temperature of 23.3°C. Natural lighting was the primary source of light for the group and was accessible by way of windows and access to the outdoor enclosure; in addition, lights were manually turned on by the keepers at 07:30 h and turned off at 17:30 h. For further detail regarding night-time sleep-related behaviors in captive orangutans see [39–41].
### 2.2. Data Collection
This study was conducted over four months, August 2012–November 2012. The occurrences of long calls were continuously video-recorded nightly from 16:00 to 09:00 for a total of 48 nights (816 hours total). All-occurrence sampling captured each instance of vocalization throughout the nightly period (total N = 83). The temporal distributions of long calls were tabulated to describe the occurrence of calls within hourly intervals. The context of long calls was recorded; nominal data were generated for vocalization instances, such as the presence or absence of associated copulation, presence or absence of discrete abiotic (i.e., automobiles, inclement weather, etc.) or biotic (i.e., vocalizations by conspecifics) noises, direction (i.e., vocalizing into the wall or directed towards conspecifics), stationary versus mobile (i.e., states were defined as mobile if the vocalizer moved out of the sleep area during the long call), and state prior to vocalization (i.e., upright awake, resting awake, or asleep).

Sleep behavior was recorded continuously throughout the night using all-occurrence sampling on subjects [42]. Two instruments (AXIS P3344 and AXIS Q6032-E network cameras) were used to generate nightly sleep quota data on subjects within line of sight. One stationary camera (P3344) was manually placed in front of the subject at the time of sleeping platform construction; another rotatable camera (Q6032-E) was remotely controlled throughout the night to ensure focal subjects were continuously within line of sight from start to finish of the recording session (Axis Communications, Lund, Sweden). Videography generated values for sleep quotas (for detailed methods on sleep behavior analysis see [40]): total time spent awake, total NREM (nonrapid eye movement) sleep, total REM sleep, total sleep time (sum of NREM and REM), and total time in bed (absolute difference between rising and retiring times from their constructed sleeping platforms). Measures of overall sleep quality include sleep fragmentation (the number of brief awakenings greater than 2 min per hour), arousability (number of motor activity bouts per hour), and sleep quality (sleep duration/time in bed).

Long calls were recorded using an infrared camera (AXIS P3344 with a two-way built-in mic) and then converted from .asf into .wav files for audio analysis. Detailed audio-spectrographic analysis was performed on 29 long calls. All sound analyses were conducted using the Audacity acoustic analysis program (Audacity 1.3.12-beta). Audio data generated included minimum frequency (the number of times that a periodic vibration occurs within a 1-second period, measured in Hz), maximum frequency, duration (total time of vocalization), peak frequency (the greatest instantaneous value of a standard frequency), and peak decibel (a ratio between the measured level and a reference threshold level indicative of acoustic power, measured in dB). All minimum frequency measures were taken at the end of the long call vocalizations, as this is where the lowest frequencies were thought to occur. Peak frequency, maximum frequency, and peak dB, however, were obtained from analysis of the entire call.
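For readers who prefer code, the spectral measurements described above can be approximated with a short Python sketch. This is our illustration, not the authors' Audacity workflow; the file name and the assumption of a mono 16-bit recording are hypothetical.

```python
# Minimal sketch (ours): duration, peak frequency, and peak level of a call,
# assuming a mono 16-bit .wav segment; "long_call.wav" is a hypothetical file.
import numpy as np
from scipy.io import wavfile

rate, signal = wavfile.read("long_call.wav")
signal = signal.astype(float)

spectrum = np.abs(np.fft.rfft(signal))              # magnitude spectrum
freqs = np.fft.rfftfreq(signal.size, d=1.0 / rate)  # Hz per FFT bin

duration = signal.size / rate                       # total time (s)
peak_freq = freqs[np.argmax(spectrum)]              # peak frequency (Hz)
# Peak level in dB relative to 16-bit full scale (one common reference choice)
peak_db = 20 * np.log10(np.max(np.abs(signal)) / 32768.0)
print(f"{duration:.1f} s, peak {peak_freq:.0f} Hz, {peak_db:.1f} dBFS")
```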
### 2.3. Data Analysis
We generated descriptive statistics characterizing the nightly distribution of call frequency and duration (statistical tests were conducted using IBM SPSS 21); average calls per night and average number of calls per observation hour were calculated. We generated long call values from audio-spectrographic analysis, which were checked for normality with Kolmogorov-Smirnov tests. Frequencies for nominal categories were generated, and χ² tests were adopted to compare expected versus observed frequencies of sleep/wake states prior to long calls (2-tailed) and the temporal distribution of long calls. Independent-samples t-tests were used to compare the intensity of long calls between states and the difference between high-occurrence vocalization nights (i.e., nights with more than 2 long calls) and low-occurrence vocalization nights (i.e., nights with fewer than 3 long calls), determined by using the midway point of the call range distribution. We applied Spearman's rho correlation coefficients (r; used due to nonnormality in the analyzed variables) to examine relationships between long calls emitted and overall sleep quality (1-tailed); we include correlation slopes. All reported errors are standard deviations, and all tests were set at the significance level of P ≤ 0.05.
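A minimal sketch (ours; the authors used IBM SPSS 21) of the two central tests, with placeholder arrays rather than the study's data:

```python
# Spearman's rho between nightly call counts and a sleep measure (1-tailed),
# plus a chi-square test of observed hourly call counts against a uniform
# expectation. All numbers below are illustrative placeholders (they total
# 83 calls across 17 hourly bins, as in the sampling frame, but are NOT the
# actual hourly counts from the study).
import numpy as np
from scipy.stats import spearmanr, chisquare

calls_per_night = np.array([1, 3, 0, 2, 4, 1, 2])            # hypothetical
sleep_quality = np.array([0.80, 0.69, 0.83, 0.75, 0.66, 0.78, 0.74])

rho, p_two = spearmanr(calls_per_night, sleep_quality)
p_one = p_two / 2 if rho < 0 else 1 - p_two / 2              # H1: rho < 0
print(f"rho = {rho:.2f}, one-tailed P = {p_one:.3f}")

hourly_calls = np.array([2, 5, 9, 14, 6, 4, 3, 2, 1, 3, 7, 12, 6, 4, 2, 2, 1])
chi2_stat, p = chisquare(hourly_calls)   # default expectation: uniform
print(f"chi2 = {chi2_stat:.1f}, df = {hourly_calls.size - 1}, P = {p:.4f}")
```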
## 3. Results
Results for this study show that only one of the five subjects vocalized long calls (the fully flanged male named Azy). The long calls (see Figure 1 for an example spectrogram) were similar in structure to previously described wild long calls [5] and generally consisted of a traditional three-part structure (introduction, climax, and tail-off), although, as noted by other researchers [17], deviations from the three-part structure are known to occur.

Figure 1. An example audio-spectrogram of a night-time long call produced by Azy. Notice that the tail-off is long and the frequency is lower than the human range of hearing.

Azy long-called 1.73 ± 1.00 times per night and averaged 0.10 ± 0.09 long calls per survey hour. The long call temporal distribution (Figure 2) shows significant differences between expected (with the assumption that occurrence of long calls would be equal per hour of observation) and observed distributions (χ² = 62.0, df = 16, P < 0.01); there was a bimodal distribution, with two peaks at approximately 01:00 (total vocalizations, N = 14) and 05:00 (total vocalizations, N = 12) in the morning. Spectrographic analysis of the long calls (N = 29) shows the average long call duration to be 85.4 ± 19.5 seconds (see Table 1 and Figures 3(a) and 3(b)). Furthermore, long calls significantly decreased in duration with each additional long call (r = −0.64, P = 0.001; Figure 3(b)).
Table 1. Descriptive statistics characterizing night-time flanged male orangutan long calls (n = 29) generated from somnographic analysis.

| | Range | Mean ± SD | Test of normality |
|---|---|---|---|
| Long call duration (s) | 35.0–112.0 | 85.4 ± 19.5 | P = 0.20 |
| Long call min. frequency (Hz) | 12.0–59.0 | 19.5 ± 10.1 | P < 0.001 |
| Long call peak frequency (Hz) | 59.0–400.0 | 278.4 ± 93.5 | P = 0.009 |
| Long call peak dB | −35.0–0.0 | −10.0 ± 12.4 | P = 0.012 |
Figure 2. The occurrence of adult male long calls was recorded at all hours from 17:00–09:00. The circadian distribution of calls revealed a bimodal pattern. Bimodal long call frequency is characteristic of orangutan male vocalizations in the wild but is in this instance of captivity expressed in a temporally unique manner.

Figure 3. (a) A histogram of male orangutan long call duration; duration was normally distributed. (b) Long call duration decreased with each additional call throughout the night.
Prelong call context and state was shown to be predominantly an awake but restful state (Figure 4). The observed prelong call state tested against the expected occurrence (a previous study found 72% of time in a sleeping area to be spent asleep [42]) shows that Azy was in a significantly different state than expected (sleep state 23%; χ² = 119.01, df = 1, P < 0.001). Of the 11 times Azy was in a sleep state prior to long calls, he was in REM state only twice (4.2% of the overall sample). Azy directed long calls at conspecifics 67.4% of the time, whereas he called directly into the wall or a walled corner 30.4% of the time (2.2% of the time he was out of line of sight and direction could not be determined). Long calls directed at conspecifics did not differ in peak dB (N = 19, −19.7 ± 11.2 versus N = 12, −13.3 ± 12.0; independent-samples t-test, t = −1.5, P = 0.14) or peak frequency (N = 15, 277.9 ± 90.0 versus N = 7, 322.1 ± 58.1; independent-samples t-test, t = −1.19, P = 0.25) when compared to long calls directed into walls. Azy was stationary 67.4% and mobile 30.4% of the time during long calls. The moments before initiation of long calls were analyzed for discrete abiotic or biotic noises; no such instances were observed. A copulation was associated with a long call only once (1.7%); no associated copulation was observed the majority of the time (55.2%); however, he was outside the line of sight for 13 instances, so associated copulation cannot be ruled out for these.
Figure 4. Azy most often called after several minutes of restful alertness, suggesting calls were premeditated and conscious.

Several measures of sleep quality were significantly reduced relative to the number of nightly long calls (Table 2 and Figures 5(a) and 5(b)). As the number of nightly long calls performed by Azy increased, his arousability increased, his sleep fragmentation increased, his sleep quality decreased, and his total time spent asleep decreased (see Figure 6, illustrating long call posture and orientation). When compared to nights with a low number of total calls (fewer than 3 vocalizations), high total long call nights (more than 2 vocalizations) were associated with significantly decreased sleep quality (N = 27, 0.77 ± 0.07 versus N = 8, 0.70 ± 0.06; independent-samples t-test, t = 2.26, P = 0.012).
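To make the absolute time investment concrete (a point developed in the discussion), a back-of-envelope sketch: the 85.4 s mean call duration comes from Table 1, while the total sleep time used here is purely an illustrative assumption.

```python
# Back-of-envelope cost of calling (ours). MEAN_CALL_S is from Table 1;
# TOTAL_SLEEP_MIN is an assumed, illustrative nightly sleep duration.
MEAN_CALL_S = 85.4
TOTAL_SLEEP_MIN = 560.0

for n_calls in (2, 4):
    cost_min = n_calls * MEAN_CALL_S / 60.0
    share = cost_min / TOTAL_SLEEP_MIN
    print(f"{n_calls} calls: {cost_min:.1f} min ({share:.1%} of sleep time)")
# -> ~2.8 min (~0.5%) and ~5.7 min (~1.0%): the direct time cost is marginal.
```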
Table 2. Spearman's rho correlation showing the significant relationship between the number of nightly long calls (n = 35) and measures of sleep quality: arousability (number of motor activity bouts per hour), sleep fragmentation (the number of brief awakenings greater than 2 min per hour), sleep quality (sleep duration/time in bed), and total sleep time (mins).

| | Arousability | Sleep fragmentation | Sleep quality | Total sleep time |
|---|---|---|---|---|
| Number of nightly long calls | r = 0.31 (P = 0.35) | r = 0.30 (P = 0.04) | r = −0.47 (P = 0.002) | r = −0.44 (P = 0.004) |

Figure 5. (a) Azy experienced less nightly total sleep time and (b) lower sleep quality the more he invested in performing long calls. The correlation slope for total sleep time and number of night-time long calls was Y = 0.81 ± 0.03; the interpolation line used a quadratic fit method. The correlation slope for sleep quality and number of night-time long calls was Y = 6.22E2 ± 25.68; the interpolation line was linear.

Figure 6. Azy performing a stationary long call from a state of restful alertness; the vocalization was directed at conspecifics and not associated with a copulation.
## 4. Discussion
To our knowledge, this study is the first to describe night-time long calls in captive orangutans; captive and wild environments differ in several ways that could affect long call behavior—notably, captive settings control for proximity to conspecifics and provide a consistent rest/wake period. Azy, the lone fully flanged male, was the only individual to perform long call vocalizations, which is consistent with observations in the wild [5]. Azy's long call pattern exhibits the structure previously described by researchers (see supplemental video in Supplementary Material for a long call example, available online at http://dx.doi.org/10.1155/2014/101763) [5, 19, 43]. Azy produced exhalation as well as inhalation sounds, commonly expressed as exhalation bubbling, then hiatus roars and intermediaries, and then a long trailing of sighs [17]. The nightly temporal distribution of long calls fit a bimodal pattern, which is characteristic of the circadian distribution of male orangutan calls in the wild [7]; interestingly, although the distribution was bimodal, it did not correspond temporally to any known pattern exhibited in the wild. Azy most frequently performed nocturnal long calls between 01:00-02:00 and 05:00-06:00. Azy's peak calling time fit with the predawn long call rates seen in wild populations, but his greatest number of long calls occurred between 01:00-02:00—in stark contrast to a study performed at Batang Ai National Park in Northern Borneo, which recorded no instance of a long call during this period [7]. Finally, the average calls per survey hour heard at Batang Ai were 0.45, whereas Azy's calls per survey hour were 0.10; this is most likely due to a lack of long call response behavior to potentially antagonistic males, given that no other adult males were present.

The duration of long calls decreased throughout the night (Figure 3(b)). We hypothesize this could be related either to increasing fatigue associated with multiple long call performances or to differing levels of alertness related to the passage of SWS-dominated early sleep and increasingly lengthening REM stages towards the end of the night [44], assuming a human-like pattern in orangutans. Quantitative assessment of the energy costs and proposed fatigue buildup of sequential long calls awaits experimental testing.

The first hypothesis, that night-time long calls will begin from an abrupt sleep-to-wake transition as an unconscious reactionary response to abiotic forces or internal sleep states, was rejected. Azy's prelong call state can be characterized as a several-minute period of alert restfulness. Two instances were observed in which long calls were initiated from a sleep state (~4% of the overall sample) and were associated with REM sleep stages; his observed behavior was not characterized by abrupt, reactionary movements. Furthermore, Azy directed long calls ~67% of the time towards conspecifics, which is suggestive of premeditated intention in this behavior, although intentionality in animal communication is difficult to assess; an alternative interpretation is that postural orientation was by chance rather than intentional. We hypothesized that wall-directed calls function as acoustic amplification; testing vocalization peak dB and frequency between the two direction states (towards conspecifics versus towards a wall) revealed no difference between the outputs.
It could be, though, that wall-directed long calls appear more acoustically powerful to the individual caller, despite the fact that there was no overall difference in dB levels to others within the enclosure. Azy long-called from a stationary, upright position the majority of the time (~67%). Only a single copulation was observed in association with a long call; this does not suggest that the function of long calls in a captive context is to initiate sex. It should be noted that ~28% of the post long call sample was out of line of sight due to Azy leaving his sleeping area during mobile long call displays. Additionally, female estrous could have exerted a possible influence on copulation behavior, but these states were not recorded. Therefore we are cautious about interpretations rejecting or supporting this conclusion. Overall, we interpret these data as supporting evidence for the supposition that long calls performed by Azy may be conscious and premeditated in nature.

The second hypothesis, that individuals that vocalize a greater number of long calls will experience reduced overall sleep quality, was supported; greater numbers of night-time long calls elicited by Azy were associated with poorer sleep quality. Greater occurrence of long calls shared a positive relationship with arousability and sleep fragmentation and a negative relationship with total sleep time and sleep quality. The average long call was 85 seconds; therefore, the total time needed to be in a wake state is marginal relative to the total time spent asleep (i.e., a night with two total long calls costs 2.8 minutes of sleep (roughly 0.5% of total sleep time), whereas a night with four total long calls costs 5.6 minutes (roughly 1%)). Despite this marginal total duration of time invested in long calls, the effect on sleep quality is significant (a decrease of 9% in the total proportion of time asleep relative to the time spent in the sleeping area). We interpret these data as evidence of a significant investment of energy on the part of the caller; this could manifest in the cognitive preamble before a long call or the cool-down cost associated with post long call excitement. Yet we acknowledge that the causal direction remains unclear; for example, it could be that, during restive nights, the individual is conscious longer and therefore more prone to exhibit long call behavior. One way forward may be the monitoring of night-time long call investment relative to the hormonal profiles not only of individuals that call but also of individuals within audio range of the caller. Given that the proximate mechanisms for this phenomenon are poorly understood, we suggest future research should investigate the endocrinological correlates of this behavior.

The comparison of a trait in both wild and captive contexts is beneficial in that the trait in question can be observed in a controlled environment, removed from intervening variation. Some insights can be made from observing the context of captive long call behavior in orangutans. We tested the proximate stimulus of external or endogenous “surprise” cues to explain spontaneous long calls—which our data rejected. There is no evidence to suggest that long calls in a zoo environment serve functionally as postsleep travel planning. An evolutionary (intrasexual competition) cause is less relevant because there are no other adult males in the enclosure.
Reproductive fitness could be directly associated with long calls, but the evidence from this study is equivocal given that only 1.7% of observed calls were associated with mating. Furthermore, this behavior in Azy could simply be a manifestation of ontogenetic factors if he was exposed to his father's vocalizations during development, but unfortunately there is no evidence that his father performed night-time long calls. Yet there is a significant cost in sleep quality associated with greater investment in night-time long calls, which may have adverse effects on next-day cognition [39]; the persistence of such costly behavior in a controlled context, where the benefits of the behavior are less obvious, indicates that it is likely to be functionally adaptive, although we are cautious that interpreting behavior based on current ecology relative to an evolved function presents obvious difficulties—as it would take several generations to lose a behavior not under selective pressure. Finally, it should be noted that the sample is one individual, which may not be statistically representative of Pongo; notwithstanding, these data reveal the capability of the species [45] and await further confirmation with captive flanged male Pongo at other institutions.

The cross-cultural study of human sleep expression, sleep architecture (distribution of NREM and REM throughout a nightly sleep bout), diurnal bouts of inactivity (i.e., napping and/or energy conservation), and sleep quality is in its infancy [46–49]. The sociophysical ecology of human sleep is more readily accessible in historical and ethnographic records, yet the only data relevant to forager sleep expression are anecdotal. It has been noted that the forager pattern may be polyphasic [50, 51]; even preindustrial, preelectric Western populations divided the night into “first sleep” and “second sleep,” indicating a polyphasic pattern [52]. Chimpanzee night-time vocalizations at Mahale have been recorded to be especially active during 23:00–02:00, with a predominance of pant-hoots associated with night-time defecation and urination [53]. With respect to orangutans, it may also be the case that the cost of night-time vocalizations can be made up during the day with “siesta napping,” which has been observed in all ape populations [54] and is characteristic of equatorial foragers' daily inactivity patterns [55, 56].

In conclusion, captive male orangutans exhibit long call behavior with essentially the same form and structure as their wild counterparts. Given the evidence for an alert preamble to long calls, these findings suggest that the behavior may be conscious and premeditated in nature. Furthermore, only several minutes invested in long calls throughout the night disproportionately cost the caller by negatively impacting overall sleep quality. The fact that this behavior persists in a captive environment, where the benefits of the behavior are less obvious, may indicate that the ability is adaptive in many wild social and ecological conditions. In polygynous species, in which paternal investment in offspring is minimal to absent, access to fertile females is essential to male reproductive fitness; although researchers have yet to unravel the function of nocturnal long calls in wild populations, it may be that sexual selection favors an ability in males to forgo sleep or tolerate lower sleep quality when the overall reproductive benefits outweigh the cost of the behavior.
We therefore do not expect such sleep-quality-to-cost tradeoffs to be limited to orangutans but expect them to exist in other primates as well.
---
*Source: 101763-2014-09-07.xml* | 2014 |
Night-time long calls will initialize from a REM stage of sleep and/or in association with high-decibel environmental stimulus.
(2)
Greater number of night-time long call vocalizations will reduce sleep quality and sleep duration and increase sleep arousability (number of motor activity bouts per hour) and fragmentation (the number of brief awakenings greater than 2 min per hour).
## 2. Methods
### 2.1. Study Subjects
Subjects housed at the Indianapolis Zoo (totalN
=
5) were three females, Katy (Studbook ID number: 2248), Knobi (1733), and Lucy (1972), and two males Azy (1616) and Rocky (3331). All subjects were classified as adults with the exception of Rocky, the only adolescent. None of the subjects were geriatric, as life span in the wild for orangutans is approximately 60 years old [38]. All subjects were hybrids of Bornean (Pongo pygmaeus) and Sumatran (Pongo abelii) species. Rocky, Katy, and Lucy were privately owned and were part of the entertainment industry prior to moving into the Association of Zoos & Aquariums (AZA) community; specific information about their personal histories is therefore limited (RS personal data). The individuals from the entertainment industry were hand-reared by humans, none having any exposure to their mothers during early growth and development. Azy and Knobi have always lived within the AZA community and have well documented biographies and rich social experience. Subjects were housed in interconnected indoor and outdoor enclosures and had regular access to all areas throughout the duration of the study. The indoor enclosure contained laminate sleeping platforms located approximately 1 m off the floor. The indoor space included five possible sleeping rooms. Subjects had access to natural and artificially enriched environments. The indoor enclosure was set at a constant temperature of 23.3°C. Natural lighting was the primary source of light for the group and was accessible by way of windows and access to the outdoor enclosure; in addition, lights were manually turned on by the keepers at 07:30 h and turned off at 17:30 h. For further detail regarding night-time sleep related behaviors in captive orangutans see [39–41].
### 2.2. Data Collection
This study was conducted over four months during August 2012–November 2012. The occurrences of long calls were continuously video-recorded nightly, from 16:00–09:00 for a total of 48 nights (816 hours total). All-occurrence sampling captured each instance of vocalization throughout the nightly period (totalN
=
83). The temporal distributions of long calls were tabulated to describe occurrence of calls associated within hourly intervals. Context of long calls was recorded; nominal data were generated for vocalization instances, such as the presence or absence of associated copulation, presence or absence of discrete abiotic (i.e., automobiles and inclement weather, etc.) or biotic (i.e., vocalizations by conspecifics) noises, direction (i.e., vocalizing into the wall or directed towards conspecifics), stationary versus mobile (i.e., states were defined as mobile if the vocalizer moved out of the sleep area during the long call), and state prior to vocalization (i.e., upright awake, resting awake, or sleep).Sleep behavior was recorded continuously throughout the night using all-occurrence sampling on subjects [42]. Two instruments (AXIS P3344 and AXIS Q6032-E Network Cameras) were used to generate nightly sleep quota data on subjects within line of sight. One stationary camera (P3344) was manually placed in front of the subject at the time of sleeping platform construction; another rotatable camera (Q6032-E) was remotely controlled throughout the night to ensure focal subjects were continuously within line of sight from start to finish of the recording session (Axis Communications, Lund, Sweden). Videography generated values for sleep quotas (for detailed methods on sleep behavior analysis see [40]): total time spent awake, total NREM (nonrapid eye movement), total REM, total sleep time (sum of NREM and REM), and total time in bed (absolute difference between rising and retiring times from their constructed sleeping platforms). Measures of overall sleep quality include sleep fragmentation (the number of brief awakenings greater than 2 min per hour), arousability (number of motor activity bouts per hour), and sleep quality (sleep duration/time in bed).Long calls were recorded using an infrared camera (P3344 Axis with a two-way built in mic) and then converted from .asf into .wav files for audio analysis. Detailed audio-spectrographic analysis was performed on 29 long calls. All sound analyses were conducted using Audacity acoustic analysis computer program (Audacity 1.3.12-beta). Audio data generated included minimum frequency (the number of times that a periodic vibration occurs within a 1 second period measured in Hz), maximum frequency, duration (total time of vocalization), peak frequency (the greatest instantaneous value of a standard frequency), and peak decibel (a ratio between the measured level and a reference threshold level indicative of acoustic power, measured in dB). All minimum frequency measures were taken at the end of the long call vocalizations, as this is where the lowest frequencies were thought to occur. Peak frequency, maximum frequency, and peak dB, however, were obtained from analysis of the entire call.
### 2.3. Data Analysis
We generated descriptive statistics characterizing the nightly distribution of call frequency and duration (statistical tests were conducted using IBM SPSS 21); average calls per night and average number of calls per observation hour were calculated. We generated long call values from audio-spectrographic analysis, which were checked for normality with Kolmogorov-Smirnov tests. Frequencies for nominal categories were generated andχ
2 was adopted to test expected versus observed frequencies of sleep/wake states prior to long calls (2-tailed) and temporal distribution of long calls. Independent-samples t-test was used to compare the intensity of long calls between states and the difference between high occurrence vocalization nights (i.e., nights where long calls were greater than 2) versus low occurrence vocalization nights (i.e., nights where long calls were less than 3) determined by using the midway point of the call range distribution. We applied Spearman’s rho correlation (due to nonnormality in analyzed variables) coefficients (
r
) to examine relationships among long calls emitted and overall sleep quality (1-tailed); we include correlation slopes. All reported errors are standard deviations and all tests set at the significance level of P
≤
0.05.
## 2.1. Study Subjects
Subjects housed at the Indianapolis Zoo (totalN
=
5) were three females, Katy (Studbook ID number: 2248), Knobi (1733), and Lucy (1972), and two males Azy (1616) and Rocky (3331). All subjects were classified as adults with the exception of Rocky, the only adolescent. None of the subjects were geriatric, as life span in the wild for orangutans is approximately 60 years old [38]. All subjects were hybrids of Bornean (Pongo pygmaeus) and Sumatran (Pongo abelii) species. Rocky, Katy, and Lucy were privately owned and were part of the entertainment industry prior to moving into the Association of Zoos & Aquariums (AZA) community; specific information about their personal histories is therefore limited (RS personal data). The individuals from the entertainment industry were hand-reared by humans, none having any exposure to their mothers during early growth and development. Azy and Knobi have always lived within the AZA community and have well documented biographies and rich social experience. Subjects were housed in interconnected indoor and outdoor enclosures and had regular access to all areas throughout the duration of the study. The indoor enclosure contained laminate sleeping platforms located approximately 1 m off the floor. The indoor space included five possible sleeping rooms. Subjects had access to natural and artificially enriched environments. The indoor enclosure was set at a constant temperature of 23.3°C. Natural lighting was the primary source of light for the group and was accessible by way of windows and access to the outdoor enclosure; in addition, lights were manually turned on by the keepers at 07:30 h and turned off at 17:30 h. For further detail regarding night-time sleep related behaviors in captive orangutans see [39–41].
## 2.2. Data Collection
This study was conducted over four months during August 2012–November 2012. The occurrences of long calls were continuously video-recorded nightly, from 16:00–09:00 for a total of 48 nights (816 hours total). All-occurrence sampling captured each instance of vocalization throughout the nightly period (totalN
=
83). The temporal distributions of long calls were tabulated to describe occurrence of calls associated within hourly intervals. Context of long calls was recorded; nominal data were generated for vocalization instances, such as the presence or absence of associated copulation, presence or absence of discrete abiotic (i.e., automobiles and inclement weather, etc.) or biotic (i.e., vocalizations by conspecifics) noises, direction (i.e., vocalizing into the wall or directed towards conspecifics), stationary versus mobile (i.e., states were defined as mobile if the vocalizer moved out of the sleep area during the long call), and state prior to vocalization (i.e., upright awake, resting awake, or sleep).Sleep behavior was recorded continuously throughout the night using all-occurrence sampling on subjects [42]. Two instruments (AXIS P3344 and AXIS Q6032-E Network Cameras) were used to generate nightly sleep quota data on subjects within line of sight. One stationary camera (P3344) was manually placed in front of the subject at the time of sleeping platform construction; another rotatable camera (Q6032-E) was remotely controlled throughout the night to ensure focal subjects were continuously within line of sight from start to finish of the recording session (Axis Communications, Lund, Sweden). Videography generated values for sleep quotas (for detailed methods on sleep behavior analysis see [40]): total time spent awake, total NREM (nonrapid eye movement), total REM, total sleep time (sum of NREM and REM), and total time in bed (absolute difference between rising and retiring times from their constructed sleeping platforms). Measures of overall sleep quality include sleep fragmentation (the number of brief awakenings greater than 2 min per hour), arousability (number of motor activity bouts per hour), and sleep quality (sleep duration/time in bed).Long calls were recorded using an infrared camera (P3344 Axis with a two-way built in mic) and then converted from .asf into .wav files for audio analysis. Detailed audio-spectrographic analysis was performed on 29 long calls. All sound analyses were conducted using Audacity acoustic analysis computer program (Audacity 1.3.12-beta). Audio data generated included minimum frequency (the number of times that a periodic vibration occurs within a 1 second period measured in Hz), maximum frequency, duration (total time of vocalization), peak frequency (the greatest instantaneous value of a standard frequency), and peak decibel (a ratio between the measured level and a reference threshold level indicative of acoustic power, measured in dB). All minimum frequency measures were taken at the end of the long call vocalizations, as this is where the lowest frequencies were thought to occur. Peak frequency, maximum frequency, and peak dB, however, were obtained from analysis of the entire call.
## 2.3. Data Analysis
We generated descriptive statistics characterizing the nightly distribution of call frequency and duration (statistical tests were conducted using IBM SPSS 21); average calls per night and average number of calls per observation hour were calculated. We generated long call values from audio-spectrographic analysis, which were checked for normality with Kolmogorov-Smirnov tests. Frequencies for nominal categories were generated andχ
2 was adopted to test expected versus observed frequencies of sleep/wake states prior to long calls (2-tailed) and temporal distribution of long calls. Independent-samples t-test was used to compare the intensity of long calls between states and the difference between high occurrence vocalization nights (i.e., nights where long calls were greater than 2) versus low occurrence vocalization nights (i.e., nights where long calls were less than 3) determined by using the midway point of the call range distribution. We applied Spearman’s rho correlation (due to nonnormality in analyzed variables) coefficients (
r
) to examine relationships among long calls emitted and overall sleep quality (1-tailed); we include correlation slopes. All reported errors are standard deviations and all tests set at the significance level of P
≤
0.05.
## 3. Results
Results for this study show that only one of the five subjects vocalized long calls (the fully flanged male named Azy). The long calls (see Figure1, e.g., spectrograph) were similar in structure to previously described wild long calls [5] and generally consisted of a traditional three part structure (introduction, climax, and tail-off); although as noted by other researchers [17] deviations from the three-part structure are known to occur.Figure 1
An example audio-spectrogram of a night-time long call produced by Azy. Notice the tail-off is long and the frequency is lower than the human range of ability to hear.Azy long-called1.73
±
1.00 times per night and averaged 0.10
±
0.09 long calls per survey hour. The long call temporal distribution (Figure 2) shows significant differences in expected (with the assumption that occurrence of long calls would be equal per hour of observation) and observed distributions (χ
2
=
62.0, d
f
=
16, P
<
0.01); there was a bimodal distribution, with two peaks approximately at 01:00 (total vocalizations, N
=
14) and 05:00 (total vocalizations, N
=
12) in the morning. Spectrographic analysis of the long calls (N
=
29) shows the average long call duration to be 85.4
±
19.5 seconds (see Table 1 and Figures 3(a) and 3(b)). Furthermore, long calls significantly decreased in duration with each additional long call (r
2
=
-
0.64, P
=
0.001; Figure 3(b)).Table 1
Descriptive statistics characterizing night-time flanged male orangutan long calls (n
=
29) generated from somnographic analysis.
Range
Mean ± SD
Test of normality
Long call duration
35.0–112.0
85.4 ± 19.5
P
=
0.20
Long call min. frequency
12.0–59.0
19.5 ± 10.1
P
<
0.001
Long call peak frequency
59.0–400.0
278.4 ± 93.5
P
=
0.009
Long call peak dB
−35.0–0.0
−10.0 ± 12.4
P
=
0.012Figure 2
The occurrence of adult male long calls was recorded at all hours from 17:00–09:00. The circadian distribution of calls revealed a bimodal pattern. Bimodal long call frequency is characteristic of orangutan male vocalizations in the wild but is in this instance of captivity expressed in a temporally unique manner.(a) A histogram of male orangutan long call duration; duration was normally distributed. (b) Long call duration decreased with each additional call throughout the night.
(a)
(b)Prelong call context and state was shown to be predominantly from an awake but restful state (Figure4). Observed prelong call state tested against expected occurrence (a previous study resulted in 72% of the time in a sleeping area to be spent asleep [42]) shows that Azy was in a significantly different state than expected (sleep state 23%; χ
2
=
119.01, d
f
=
1, P
<
0.001). Of the 11 times Azy was in a sleep state prior to long calls, he was in REM state only twice (4.2% of the overall sample). Azy long-called directed at conspecifics 67.4% of the time, whereas he called directly into the wall or a walled corner 30.4% of the time (2.2% he was out of line of site and direction could not be determined). Long calls directed at conspecifics did not differ in peak dB (N
=
19, −19.7
±
11.2 versus N
=
12, −13.3
±
12.0; independent-samples t-test, t
=
-
1.5, P
=
0.14) or peak frequency (N
=
15, 277.9
±
90.0 versus N
=
7, 322.1
±
58.1; independent-samples t-test, t
=
-
1.19, P
=
0.25), when compared to long calls directed into walls. Azy was stationary 67.4% and was mobile 30.4% of the time during long calls. The moments before initiation of long calls were analyzed for discrete abiotic or biotic noises; no such instances were observed. A copulation was associated with a long call only once (1.7%); no associated copulation was observed a majority of the time (55.2%); although he was outside the line of sight for 13 instances, therefore, associated copulation cannot be ruled out for these instances.Figure 4
Azy most often called prior to several minutes of restful alertness, suggesting calls were premeditated and conscious.Several measures of sleep quality significantly reduced relative to the number of nightly long calls (Table2 and Figures 5(a) and 5(b)). As the number of nightly long calls performed by Azy increased, his arousability increased, sleep fragmentation increased, sleep quality decreased, and total time spent asleep decreased (see Figure 6 illustrating performed long call posture and intent). When compared to nights that had a low number of total calls (less than 3 vocalizations) high total long call nights (greater than 2 vocalizations) were associated with significantly decreased sleep quality (N
=
27, 0.77
±
0.07 versus N
=
8, 0.70
±
0.06; independent-samples t-test, t
=
2.26, P
=
0.012).Table 2
Spearman’s rho correlation showing the significant relationship between the number of nightly long calls (n
=
35) and measures of sleep quality: arousability (number of motor activity bouts per hour), sleep fragmentation (the number of brief awakenings greater than 2 min per hour), sleep quality (sleep duration/time in bed), and total sleep time (mins).
Arousability
Sleep fragmentation
Sleep quality
Total sleep time
Number of nightly long calls
r
2
=
0.31
P
=
0.35
r
2
=
0.30
P
=
0.04
r
2
=
-
0.47
P
=
0.002
r
2
=
-
0.44
P
=
0.004(a) Azy experienced less nightly total sleep time and (b) sleep quality the more he invested in performing long calls. The correlation slope for total sleep time and number of night-time long calls wasY
=
0.81
±
0.03; the interpolation line used a quadratic fit method. The correlation slope for sleep quality and number of night-time long calls was Y
=
6.22
E
2
±
25.68; the interpolation line was linear.
(a)
(b)Figure 6
Azy performing a stationary long call from a state of restful alertness; the vocalization was directed at conspecifics and not associated with a copulation.
## 4. Discussion
To our knowledge, this study is the first to describe night-time long calls in captive orangutans; captive and wild environments differ in several ways which could affect long call behavior—notably, captive settings control for proximity to conspecifics and a consistent rest/wake period. Azy, the lone fully flanged male, was the only individual to perform long call vocalizations which is consistent with observations in the wild [5]. Azy’s long call pattern exhibits structure previously described by researchers (see supplemental video in Supplementary Material for long call example available online at http://dx.doi.org/10.1155/2014/101763) [5, 19, 43]. Azy produced exhalation as well as inhalation sounds, commonly expressed by an exhalation bubbling and then hiatus roars and intermediaries and then a long trailing of sighs [17]. Long call nightly temporal distribution fit a bimodal pattern, which is characteristic of male orangutans circadian distribution in the wild [7]; interestingly, although the distribution was bimodal, it temporally did not correspond to any known pattern exhibited in the wild. Azy most frequently performed nocturnal long calls between the hours of 01:00-02:00 and 05:00-06:00. Azy’s peak calling time fit with the predawn long call rates seen in wild populations, but his greatest number of long calls was between 01:00-02:00—in stark contrast to a study performed at Batang Ai National Park in Northern Borneo, which found no recorded instance of a long call during this period [7]. Finally, the average calls per survey hour heard at Batang Ai were 0.45, whereas Azy’s call per survey hour were 0.10; this is most likely due to a lack of long call response behavior to potentially antagonistic males, given no other adult males were present.The duration of long calls reduced throughout the night (Figure3(b)). We hypothesize this could either be related to increasing fatigue associated with multiple long call performances or an association with differing levels of alertness related to the passage of SWS dominated early sleep and increasingly lengthening REM stages towards the end of the night [44], assuming a human-like pattern in orangutans. Quantitative assessment of the energy costs and proposed fatigue buildup of sequential long calls waits experimental testing.The first hypothesis, that night-time long calls will begin from an abrupt sleep-to-wake transition, as an unconscious reactionary response to abiotic forces or internal sleep states, was rejected. Azy’s prelong call state can be characterized as a several minute period of alert-restfulness. Two instances were observed when long calls were initiated from a sleep state (~4% of the overall sample) and were associated with REM sleep stages; his observed behavior was not characterized by abrupt, reactionary movements. Furthermore, Azy directed long calls ~67% of the time towards conspecifics, which is suggestive of the premeditated intention in this behavior, although intentionality in animal communication is difficult to assess; an alternative interpretation is that postural orientation was by chance, rather than intentional. We hypothesized that wall directed calls function as acoustic amplification; testing vocalization peak dB and frequency between both direction states (towards conspecifics versus towards a wall) revealed no difference between the outputs. 
Although, it could be that wall directed long calls appear to be more acoustically powerful to the individual caller, despite the fact that there was no overall difference in dB levels to others within the enclosure. Azy long-called from a stationary, upright position a majority of the time (~67%). Only a single copulation was observed as associated with a long call; this is not suggestive of the function of long calls in a captive context initiating sex; it should be noted that ~28% of post long call sample was out of line of sight due to Azy leaving his sleeping area during mobile long call displays. Additionally, female estrous could have exerted a possible influence on copulation behavior, but these states were not recorded. Therefore we are cautious against interpretation rejecting or supporting this conclusion. Overall, we interpret this data as supporting evidence for the supposition that long calls performed by Azy may be conscious and premeditated in nature.The second hypothesis, that individuals that vocalize a greater number of long calls will experience reduced overall sleep quality, was supported; greater number of night-time long calls elicited by Azy were associated with poorer sleep quality. Greater occurrence of long calls share a positive relationship with arousability and sleep fragmentation and a negative relationship with total sleep time and sleep quality. The average long call was 85 seconds: therefore, the total time needed to be in a wake state is marginal relative to the total time spent asleep (i.e., a night with two total long calls costs 2.8 minutes of sleep (0.005% total time asleep), whereas a night with four total long calls costs 5.6 minutes (0.01% total time asleep)). Despite this marginal total duration of time invested in long calls, the effect on sleep quality is significant (a decrease in 9% of the total proportion of time asleep relative to the time spent in the sleeping area). We interpret this data as being evidence of a significant investment of energy on the part of the caller; this could manifest in the cognitive preamble before a long call or the cool down cost associated with postlong call excitement. Yet, we acknowledge that causal direction remains unclear; for example, it could be that, during restive nights, the individual is conscious longer and therefore is more prone to exhibit long call behavior. One way forward may be the monitoring of night-time long call investment relative to the hormonal profiles of not only individuals that call, but individuals that are within audio range of the caller. Given the proximate mechanisms for this phenomenon are poorly understood, we suggest future research should investigate the endocrinological correlates of this behavior.The comparison of a trait in both wild and captive contexts is beneficial in that the trait in question can be observed in a controlled environment, removed from intervening variation. Some insights can be made, from observing the context of captive long call behavior in orangutans. We tested theproximate stimulus of external or endogenous “surprise” cues to explain spontaneous long calls—which our data rejected. There is no evidence to suggest that long calls in a zoo environment servefunctionally as postsleep travel planning. Anevolutionary (intrasexual competition) cause is less relevant because there are no other adult males in the enclosure. 
Reproductive fitness could be directly associated with long calls, but the evidence from this study is equivocal given that only 1.7% of observed calls were associated with mating. Furthermore, this behavior in Azy could simply be a manifestation of ontogenetic factors if he was exposed to his father’s vocalizations during development, but unfortunately there is no evidence that his father performed night-time long calls. Yet, there is significant cost in sleep quality associated with greater investment in night-time long calls, which may have adverse effects on next day cognition [39]; the persistence of such costly behavior, in a controlled context where the benefits of the behavior are less obvious, indicates that it is likely to be functionally adaptive, although we are cautious that interpreting behavior based on current ecology relative to an evolved function presents obvious difficulties—as it would take several generations to lose a behavior not under selective pressure. Finally, it should be noted that the sample is one individual, which may not be statistically representative ofPongo; notwithstanding; this data reveals the capability of the species [45] and awaits further confirmation with captive flanged, malePongo at other institutions.The cross-cultural study of human sleep expression, sleep architecture (distribution of NREM and REM throughout a nightly sleep bout), diurnal bouts of inactivity (i.e., napping and/or energy conservation), and sleep quality is in its infancy [46–49]. The sociophysical ecology of human sleep is more readily accessible in historical and ethnographic records, yet the only data relative to forager sleep expression is anecdotal. It has been noted that the forager pattern may be polyphasic [50, 51]; even preindustrial, preelectric western populations divided the night into “first sleep” and “second sleep,” indicating a polyphasic pattern [52]. Chimpanzees night-time vocalizations at Mahale have been recorded to be especially active during the periods of 23:00–02:00, with a predominance of pant-hoots associated with night-time defecation and urination [53]. With respect to orangutans, it may also be the case that the cost of night-time vocalizations can be made up during the day with “siesta napping,” which has been observed in all ape populations [54] and is characteristic of equatorial forager’s daily inactivity patterns [55, 56].In conclusion, captive male orangutans exhibit long call behavior which can be characterized with relatively the same form and structure as their wild counterparts. Given the evidence for an alert preamble to long calls, these findings suggest that the behavior may have been conscious and premeditated in nature. Furthermore, only several minutes invested in long calls throughout the night disproportionately cost the caller by negatively impacting overall sleep quality. The fact that this behavior persists in a captive environment, where the benefits for the behavior are less obvious, may indicate that the ability is adaptive in many wild social and ecological conditions. In polygynous species, in which paternal investment in offspring is minimal to absent, access to fertile females is essential to male reproductive fitness; although researchers have yet to unravel the function of nocturnal long calls in wild populations, it may be that sexual selection favors an ability in males to forgo sleep or experience lower levels of sleep quality, when overall reproductive benefits outweigh the cost of the behavior. 
We therefore do not expect such sleep quality-to-costs tradeoffs to be limited to orangutans but rather to exist also in other primates as well.
---
*Source: 101763-2014-09-07.xml* | 2014 |
# Global Attractivity of a Diffusive Nicholson's Blowflies Equation with Multiple Delays
**Authors:** Xiongwei Liu; Xiao Wang
**Journal:** Abstract and Applied Analysis
(2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101764
---
## Abstract
The present paper considers a diffusive Nicholson's blowflies model with multiple delays under a Neumann boundary condition. Delay-independent conditions are derived for the global attractivity of the trivial equilibrium and the positive equilibrium, respectively. Two open problems concerning the stability of the positive equilibrium and the occurrence of Hopf bifurcation are proposed.
---
## Body
## 1. Introduction
Blowflies are important parasites of the sheep industry in some countries such as Australia. Based on the experimental data of Nicholson [1, 2], Gurney et al. [3] first proposed Nicholson's blowflies equation
$$\dot N(t) = -\delta N(t) + p N(t-\tau)\, e^{-aN(t-\tau)}, \quad t > 0, \tag{1}$$
where $N(t)$ is the size of the adult blowfly population at time $t$; $p$ is the maximum per capita daily egg production rate; $1/a$ is the size at which the blowfly population reproduces at its maximum rate; $\delta$ is the per capita daily adult death rate; and $\tau$ is the generation time. For this equation, the global attractivity and oscillation of solutions have been investigated by several authors (see [4–9]).

In reality, the size of the adult blowfly population cannot be independent of the spatial variable; therefore, Yang and So [10] investigated both temporal and spatial variations via the diffusive Nicholson's blowflies equation
$$\frac{\partial N(t,x)}{\partial t} = \Delta N(t,x) - \delta N(t,x) + p N(t-\tau, x)\, e^{-aN(t-\tau,x)}, \quad \text{in } D \triangleq (0,\infty)\times\Omega, \tag{2}$$
under a Neumann boundary condition, and gave similar sufficient conditions for the oscillation of all positive solutions about the positive steady state. Thereafter, many authors studied the various dynamical behaviors of this equation; we refer to Lin and Mei [11], Saker [12], Wang and Li [13], and Yi and Zou [14].

Meanwhile, one can consider a nonlinear equation with several delays because of the variability of the generation time; for this purpose, Györi and Ladas [15] and Kulenović and Ladas [6] proposed the following generalized Nicholson's blowflies model:
$$N'(t) = -\delta N(t) + \sum_{i=1}^{n} p_i N(t-\tau_i)\, e^{-a_i N(t-\tau_i)}, \quad t > 0. \tag{3}$$
Luo and Liu [16] studied the global attractivity of the nonnegative equilibria of (3).

It is of interest to investigate both the temporal and spatial variations of the blowfly population using mathematical models. Hence, in this paper, we consider the following system:
$$\frac{\partial N(t,x)}{\partial t} = \Delta N(t,x) - \delta N(t,x) + \sum_{i=1}^{n} p_i N(t-\tau_i, x)\, e^{-a_i N(t-\tau_i, x)}, \quad \text{in } D, \tag{4}$$
with the Neumann boundary condition
$$\frac{\partial N(t,x)}{\partial \nu} = 0, \quad \text{on } \Gamma \triangleq (0,\infty)\times\partial\Omega, \tag{5}$$
and the initial condition
$$N(\theta, x) = \psi(\theta, x) \ge 0, \quad \text{in } D_\tau \triangleq [-\tau, 0]\times\overline{\Omega}, \tag{6}$$
where $\tau_i \ge 0$, $\tau = \max_{1\le i\le n}\{\tau_i\}$, and $p_i$ and $a_i = a$, $i = 1, 2, \ldots, n$, are all positive constants; $\Omega \subset \mathbb{R}^m$ is a bounded domain with smooth boundary $\partial\Omega$; $\Delta N(t,x) = \sum_{i=1}^{m} \partial^2 N(t,x)/\partial x_i^2$; $\partial/\partial\nu$ denotes the exterior normal derivative on $\partial\Omega$; and $\psi(\theta,x)$ is Hölder continuous in $D_\tau$ with $\psi(0,x) \in C^1(\overline{\Omega})$.

Though the global attractivity of the nonnegative equilibria of (2) has been studied by Yang and So [10] and by Wang and Li [13, 17], those works give only sufficient conditions. Furthermore, as far as we know, few papers have investigated the stability of partial functional differential equations with several delays. Motivated by the works above, in this paper we study the global attractivity of the nonnegative equilibria of the system (4)–(6) and present conditions which depend on the coefficients of (4)–(6). When $n = 1$, our results complement those in Yang and So [10] and Wang and Li [13].

It is not difficult to see that if $\sum_{i=1}^{n} p_i \le \delta$, then (4) has the unique nonnegative equilibrium $N_0 \equiv 0$, and if $\sum_{i=1}^{n} p_i > \delta$, then (4) has a unique positive equilibrium $N^* = (1/a)\ln\big(\sum_{i=1}^{n} p_i/\delta\big)$.

The rest of the paper is organized as follows. We give some lemmas and definitions in Section 2 and state and prove our main results in Section 3. In Section 4, several simulations are presented to verify our results, and some unsolved problems are discussed.
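The equilibrium threshold is easy to check numerically. The following minimal Python sketch is ours, for illustration only; the parameter values are the ones reused for Figure 2 in Section 4.

```python
import math

def positive_equilibrium(p, delta, a):
    """Return N* = (1/a) * ln(sum(p)/delta) when sum(p) > delta, else 0."""
    total = sum(p)
    if total <= delta:
        return 0.0  # only the trivial equilibrium N0 = 0 exists
    return math.log(total / delta) / a

# Example: the parameter set used for Figure 2 below.
print(positive_equilibrium(p=[0.1, 0.15], delta=0.1, a=0.2))  # ~4.58145
```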
## 2. Preliminaries
In this section, we give some lemmas which can be proved using methods similar to those in Yang and So [10].

Lemma 1.
(i) The solution $N(t,x)$ of (4)–(6) satisfies $N(t,x) \ge 0$ for $(t,x) \in (0,\infty)\times\overline{\Omega}$.
(ii) If $\psi(\theta,x) \not\equiv 0$ on $D_\tau$, then the solution $N(t,x)$ of (4)–(6) satisfies $N(t,x) > 0$ for $(t,x) \in (\tau,\infty)\times\overline{\Omega}$.

Next, we introduce the concept of a lower-upper solution pair due to Redlinger [18], as adapted to (4)–(6).

Definition 2.
A lower-upper solution pair for (4)–(6) is a pair of suitably smooth functions $v$ and $w$ such that
(i) $v \le w$ in $\overline{D}$;
(ii) $v$ and $w$ satisfy
$$\begin{aligned} &\frac{\partial w}{\partial t} \ge \Delta w(t,x) - \delta w + \sum_{i=1}^{n} p_i \varphi(t-\tau_i,x)\, e^{-a\varphi(t-\tau_i,x)}, \quad (t,x)\in D, \qquad \frac{\partial w}{\partial \nu} \ge 0, \quad (t,x)\in\Gamma,\\ &\frac{\partial v}{\partial t} \le \Delta v(t,x) - \delta v + \sum_{i=1}^{n} p_i \varphi(t-\tau_i,x)\, e^{-a\varphi(t-\tau_i,x)}, \quad (t,x)\in D, \qquad \frac{\partial v}{\partial \nu} \le 0, \quad (t,x)\in\Gamma, \end{aligned} \tag{7}$$
for all $\varphi \in C(D_\tau \cup \overline{D})$ with $v \le \varphi \le w$, $(t,x) \in D_\tau \cup \overline{D}$; and
(iii) $v(\theta,x) \le \varphi(\theta,x) \le w(\theta,x)$, $(\theta,x) \in D_\tau$.

The following lemma is a special case of Redlinger [19].

Lemma 3.
Let $(v,w)$ be a lower-upper solution pair for the initial boundary value problem (4)–(6). Then there exists a unique regular solution $N(t,x)$ of (4)–(6) such that $v \le N \le w$ on $D_\tau \cup \overline{D}$.

The following lemma gives the boundedness of the solution $N(t,x)$.

Lemma 4.
(i) The solution $N(t,x)$ of (4)–(6) satisfies
$$\limsup_{t\to\infty} N(t,x) \le \frac{\sum_{i=1}^{n} p_i}{ae\delta}, \quad \text{uniformly in } x. \tag{8}$$
(ii) There exists a constant $K = K(\psi) \ge 0$ such that $N(t,x) \le K$ on $D_\tau \cup \overline{D}$.

Proof.
Let $w(t)$ be the solution of the following Cauchy problem:
$$\frac{dw}{dt} = -\delta w + \sum_{i=1}^{n} \frac{p_i}{ae}, \quad t > 0, \qquad w(0) = \max_{(\theta,x)\in D_\tau} \psi(\theta,x). \tag{9}$$
Solving the equation, we have
$$w(t) = \frac{\sum_{i=1}^{n} p_i}{ae\delta} + e^{-\delta t}\left(w(0) - \frac{\sum_{i=1}^{n} p_i}{ae\delta}\right), \quad t \ge 0. \tag{10}$$
Taking
$$\bar{w}(t) = \begin{cases} w(0), & t \in [-\tau, 0], \\ w(t), & t > 0, \end{cases} \tag{11}$$
the pair $(0, \bar{w}(t))$ is a lower-upper solution pair for (4)–(6). In fact, for any $\varphi \in C(D_\tau \cup \overline{D})$ with $0 \le \varphi \le \bar{w}(t)$, $(t,x) \in D_\tau \cup \overline{D}$, one can get, using $\max_{s\ge 0} s e^{-as} = 1/(ae)$,
$$\frac{\partial \bar{w}(t)}{\partial t} - \Delta \bar{w}(t) + \delta \bar{w}(t) - \sum_{i=1}^{n} p_i \varphi(t-\tau_i,x)\, e^{-a\varphi(t-\tau_i,x)} \ge \frac{\partial \bar{w}(t)}{\partial t} + \delta \bar{w}(t) - \sum_{i=1}^{n} \frac{p_i}{ae} = \frac{dw}{dt} + \delta w - \sum_{i=1}^{n} \frac{p_i}{ae} = 0. \tag{12}$$
By Lemma 3, there is a unique regular solution $N(t,x)$ such that
$$0 \le N(t,x) \le \bar{w}(t), \quad (t,x) \in D_\tau \cup \overline{D}. \tag{13}$$
Note that
$$\lim_{t\to+\infty} \bar{w}(t) = \lim_{t\to+\infty} w(t) = \frac{\sum_{i=1}^{n} p_i}{ae\delta}. \tag{14}$$
Therefore, the estimate (8) holds, and there exists a constant $K(\psi) > 0$ such that $\bar{w}(t) \le K(\psi)$ for any $t \in (-\tau, \infty)$ and
$$0 \le N(t,x) \le K(\psi), \quad (t,x) \in D_\tau \cup \overline{D}. \tag{15}$$
This completes the proof of Lemma 4.
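As a quick numerical illustration of Lemma 4 (ours, not part of the original paper), one can evaluate the explicit solution (10) and watch it decay to the limit in (14); the initial value $w(0) = 6$ below is an arbitrary stand-in for $\max\psi$, and the other parameters are those of Figure 2.

```python
import numpy as np

p = np.array([0.1, 0.15])
a, delta = 0.2, 0.1
w0 = 6.0                                 # assumed max of the initial data psi
bound = p.sum() / (a * np.e * delta)     # limit value in (14), ~4.5985

for t in [0.0, 20.0, 40.0, 80.0]:
    w = bound + np.exp(-delta * t) * (w0 - bound)   # formula (10)
    print(t, w)                          # w(t) decays monotonically to the bound
```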
## 3. Main Results and Proofs
Theorem 5.
Assume that $\sum_{i=1}^{n} p_i \le \delta$. Then every solution $N(t,x)$ of (4)–(6) tends to $N_0 = 0$ (uniformly in $x$) as $t \to +\infty$.

Proof.
By Lemma 4, without loss of generality, let $0 < N(t,x) \le \sum_{i=1}^{n} p_i/(ae\delta)$ for $(t,x) \in D_\tau \cup \overline{D}$. Under the condition $\sum_{i=1}^{n} p_i \le \delta$, we get
$$0 < N(t,x) \le \frac{1}{ae} < \frac{1}{a} \quad \text{for } (t,x) \in D_\tau \cup \overline{D}. \tag{16}$$
Define $m(t)$ and $y(t)$ to be the solutions of the following two delay equations, respectively:
$$\begin{aligned} m'(t) &= -\delta m(t) + \sum_{i=1}^{n} p_i m(t-\tau_i)\, e^{-am(t-\tau_i)}, \quad t > 0, \qquad & m(\theta) &= \min_{x\in\overline{\Omega}} \psi(\theta,x), \quad \theta \in [-\tau, 0],\\ y'(t) &= -\delta y(t) + \sum_{i=1}^{n} p_i y(t-\tau_i)\, e^{-ay(t-\tau_i)}, \quad t > 0, \qquad & y(\theta) &= \max_{x\in\overline{\Omega}} \psi(\theta,x), \quad \theta \in [-\tau, 0]. \end{aligned} \tag{17}$$
Using arguments similar to the proof of Lemma 4, we get
$$\limsup_{t\to\infty} m(t) \le \frac{\sum_{i=1}^{n} p_i}{ae\delta} < \frac{1}{a}, \qquad \limsup_{t\to\infty} y(t) \le \frac{\sum_{i=1}^{n} p_i}{ae\delta} < \frac{1}{a} \tag{18}$$
under the condition $\sum_{i=1}^{n} p_i \le \delta$, where $m(t)$ and $y(t)$ are the solutions of (17).
Because $N(t,x) < 1/a$, for any $\varphi \in C(D_\tau \cup \overline{D})$ with $m(t) \le \varphi \le y(t) < 1/a$, one can get
$$\begin{aligned} \frac{\partial m(t)}{\partial t} - \Delta m(t) + \delta m(t) - \sum_{i=1}^{n} p_i \varphi(t-\tau_i,x)\, e^{-a\varphi(t-\tau_i,x)} &\le \frac{\partial m(t)}{\partial t} + \delta m(t) - \sum_{i=1}^{n} p_i m(t-\tau_i)\, e^{-am(t-\tau_i)} = 0,\\ \frac{\partial y(t)}{\partial t} - \Delta y(t) + \delta y(t) - \sum_{i=1}^{n} p_i \varphi(t-\tau_i,x)\, e^{-a\varphi(t-\tau_i,x)} &\ge \frac{\partial y(t)}{\partial t} + \delta y(t) - \sum_{i=1}^{n} p_i y(t-\tau_i)\, e^{-ay(t-\tau_i)} = 0 \end{aligned} \tag{19}$$
(the inequalities hold because $s \mapsto se^{-as}$ is increasing on $(0, 1/a)$). Therefore, from Definition 2, $(m(t), y(t))$ is a lower-upper solution pair of (4)-(5) with initial condition $m(\theta) \le \psi(\theta,x) \le y(\theta)$ on $D_\tau$. Consequently, by Lemma 3, we have
$$m(t) \le N(t,x) \le y(t) \quad \text{on } [-\tau, +\infty) \times \overline{\Omega}. \tag{20}$$
By Theorem 1 of Luo and Liu [16], it follows from $\sum_{i=1}^{n} p_i \le \delta$ that the solutions $m(t)$ and $y(t)$ of (17) both satisfy
$$\lim_{t\to\infty} m(t) = 0, \qquad \lim_{t\to\infty} y(t) = 0. \tag{21}$$
Hence, we complete the proof of Theorem 5.

Theorem 6.
If $1 < \sum_{i=1}^{n} (p_i/\delta) \le e$, then every nontrivial solution $N(t,x)$ of (4)–(6) satisfies
$$\lim_{t\to\infty} N(t,x) = N^*, \quad \text{uniformly in } x. \tag{22}$$

Proof.
Let $f(s) = se^{-as}$. Then $f$ is increasing on $(0, 1/a)$ and decreasing on $(1/a, +\infty)$, $f(1/a) = \max_{s\in[0,\infty)} f(s)$, and $N^* = (1/a)\ln\big(\sum_{i=1}^{n} p_i/\delta\big) \le 1/a$ for $1 < \sum_{i=1}^{n} (p_i/\delta) \le e$. Let $g(y) = \sum_{i=1}^{n} p_i f(y)$; then it is not difficult to verify that $g$ satisfies the following conditions:

(g1) $g(y)$ is increasing on $(0, 1/a)$ and decreasing on $(1/a, +\infty)$, with $\max_{y\in[0,\infty)} g(y) = g(1/a) = \sum_{i=1}^{n} p_i/(ae)$;

(g2) $g(y) > \delta y$ for $y \in (0, N^*)$ and $g(y) < \delta y$ for $y \in (N^*, +\infty)$.

There are now two possible cases to consider.

Case 1 ($N^* < 1/a$). In view of Lemma 4, we may assume without loss of generality that every solution $N(t,x)$ of (4)–(6) satisfies
$$0 \le N(t,x) \le \frac{g(1/a)}{\delta} = \frac{\sum_{i=1}^{n} p_i}{ae\delta} < \frac{1}{a} \quad \text{on } D_\tau \cup \overline{D}. \tag{23}$$
Let $\underline{N}(t) = \min_{x\in\overline{\Omega}} N(t,x)$, $\overline{N}(t) = \max_{x\in\overline{\Omega}} N(t,x)$, $\underline{N} = \liminf_{t\to\infty} \underline{N}(t)$, and $\overline{N} = \limsup_{t\to\infty} \overline{N}(t)$. By (23), we have
$$0 \le \underline{N} \le \overline{N} \le \frac{g(1/a)}{\delta} = \frac{\sum_{i=1}^{n} p_i}{ae\delta} < \frac{1}{a}. \tag{24}$$
From Lemma 1(ii), let
$$z_0 = \min\Big\{ \min_{(t,x)\in[2\tau,\infty)\times\overline{\Omega}} N(t,x),\; N^* \Big\} > 0, \qquad y_0 = \frac{1}{a}. \tag{25}$$
Let $I_\infty = \{1, 2, \ldots\}$. Now, we define two sequences $\{z_k\}$ and $\{y_k\}$ by
$$z_k = \frac{g(z_{k-1})}{\delta}, \qquad y_k = \frac{g(y_{k-1})}{\delta}, \qquad k \in I_\infty. \tag{26}$$
We prove that $\{z_k\}$ and $\{y_k\}$ are monotonic and bounded. First, we prove that $\{z_k\}$ is monotonically increasing with least upper bound $N^*$. By (g1) and (g2), we have
$$z_1 = \frac{g(z_0)}{\delta} > z_0, \qquad z_1 = \frac{g(z_0)}{\delta} < \frac{g(N^*)}{\delta} = N^*. \tag{27}$$
By induction and direct computation, we have
$$0 < z_0 < z_1 < \cdots < \lim_{k\to\infty} z_k = N^*. \tag{28}$$
Similarly, we have
$$y_0 > y_1 > \cdots > \lim_{k\to\infty} y_k = N^*. \tag{29}$$
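The monotone iteration (26) is easy to reproduce. Below is a small Python sketch of ours, assuming the Theorem 6 regime parameters later used for Figure 2, an arbitrary starting point $z_0 \in (0, N^*)$, and $y_0 = 1/a$.

```python
import math

# Illustrative parameters with 1 < sum(p)/delta <= e (Theorem 6's regime).
p, delta, a = [0.1, 0.15], 0.1, 0.2
g = lambda y: sum(p) * y * math.exp(-a * y)   # g(y) = sum_i p_i * f(y)
n_star = math.log(sum(p) / delta) / a         # N* = 4.58145...

z, y = 0.5, 1.0 / a                           # z0 in (0, N*), y0 = 1/a
for k in range(60):                           # iterate (26)
    z, y = g(z) / delta, g(y) / delta

print(z, y, n_star)   # both iterates approach N*, from below and from above
```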
Define $v_1(t)$ and $w_1(t)$ to be the solutions of the following differential equations, respectively:
$$\begin{aligned} v_1'(t) &= -\delta[v_1(t) - z_1], \quad t \ge 3\tau, \qquad & v_1(\theta) &= z_0 < N^*, \quad \theta \in [2\tau, 3\tau],\\ w_1'(t) &= -\delta[w_1(t) - y_1], \quad t \ge 3\tau, \qquad & w_1(\theta) &= y_0 > N^*, \quad \theta \in [2\tau, 3\tau]. \end{aligned} \tag{30}$$
It follows from (24) and (25) that $z_0 \le N(t,x) \le y_0$ for any $(t,x) \in [2\tau, \infty) \times \overline{\Omega}$. Considering (30), for any $(t,x) \in [2\tau, \infty) \times \overline{\Omega}$, we have
$$\begin{aligned} \frac{\partial v_1(t)}{\partial t} &= \Delta v_1(t) - \delta v_1(t) + g(z_0) \le \Delta v_1(t) - \delta v_1(t) + g(N(t-\tau, x)),\\ \frac{\partial w_1(t)}{\partial t} &= \Delta w_1(t) - \delta w_1(t) + g(y_0) \ge \Delta w_1(t) - \delta w_1(t) + g(N(t-\tau, x)). \end{aligned} \tag{31}$$
Therefore, from Definition 2, $(v_1(t), w_1(t))$ is a lower-upper solution pair of (4)-(5) with initial condition $z_0 \le N(t,x) \le y_0$ on $[2\tau, 3\tau] \times \overline{\Omega}$. Consequently, by Lemma 3, we have
$$v_1(t) \le N(t,x) \le w_1(t) \quad \text{on } [2\tau, \infty) \times \overline{\Omega}. \tag{32}$$
Note that $w_1(t)$ is monotonically decreasing for $t \ge 3\tau$ with $\lim_{t\to\infty} w_1(t) = y_1$, while $v_1(t)$ is monotonically increasing for $t \ge 3\tau$ with $\lim_{t\to\infty} v_1(t) = z_1$. Hence,
$$z_1 = \lim_{t\to\infty} v_1(t) \le \underline{N} \le \overline{N} \le \lim_{t\to\infty} w_1(t) = y_1. \tag{33}$$
Define $v_n(t)$ and $w_n(t)$ to be the solutions of the following differential equations, respectively:
$$\begin{aligned} v_n'(t) &= -\delta[v_n(t) - z_n], \quad t \ge 3\tau, \qquad & v_n(\theta) &= z_{n-1} < N^*, \quad \theta \in [2\tau, 3\tau],\\ w_n'(t) &= -\delta[w_n(t) - y_n], \quad t \ge 3\tau, \qquad & w_n(\theta) &= y_{n-1} > N^*, \quad \theta \in [2\tau, 3\tau]. \end{aligned} \tag{34}$$
Repeating the above procedure, we obtain the relation
$$z_1 < z_2 < \cdots < z_n \le \underline{N} \le \overline{N} \le y_n < \cdots < y_2 < y_1. \tag{35}$$
By (28) and (29), taking limits on both sides of (35), we have
$$N^* = \lim_{n\to\infty} z_n \le \underline{N} \le \overline{N} \le \lim_{n\to\infty} y_n = N^*, \tag{36}$$
which implies
$$\lim_{t\to\infty} N(t,x) = N^*, \quad \text{uniformly in } x. \tag{37}$$

Case 2 ($N^* = y_0 = 1/a$). Similarly, let $y_k \equiv N^*$ and let $z_k$ be the same as in the proof of Case 1; we can again obtain (35). Hence, the proof of Theorem 6 is complete.

Remark 7.
Our main results remain valid when $N$ does not depend on the spatial variable $x \in \Omega$ in (4).
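Remark 7 suggests a simple numerical check in the spatially homogeneous case: integrate the delay equation (3) directly. The sketch below is ours, not the authors' method; it uses a crude forward-Euler scheme with a history buffer and constant initial data, and the two parameter sets anticipate Figures 1 and 2 of Section 4.

```python
import numpy as np

def simulate(p, tau, delta, a, N0=1.0, T=300.0, dt=0.01):
    """Forward-Euler integration of
    N'(t) = -delta*N(t) + sum_i p_i N(t - tau_i) exp(-a N(t - tau_i))
    with constant history N(theta) = N0 for theta <= 0."""
    steps = int(T / dt)
    lags = [int(t_i / dt) for t_i in tau]
    hist = max(lags)
    N = np.full(steps + hist + 1, N0)     # history buffer plus solution
    for k in range(hist, hist + steps):
        fb = sum(pi * N[k - L] * np.exp(-a * N[k - L]) for pi, L in zip(p, lags))
        N[k + 1] = N[k] + dt * (-delta * N[k] + fb)
    return N[-1]

# Theorem 5 regime: sum(p) <= delta, so the solution goes extinct.
print(simulate(p=[0.1, 0.15], tau=[12, 15], delta=0.4, a=0.1))   # ~0
# Theorem 6 regime: 1 < sum(p)/delta <= e, so N(t) -> N* = 4.58145...
print(simulate(p=[0.1, 0.15], tau=[12, 15], delta=0.1, a=0.2))   # ~4.58
```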
## 4. Numerical Simulations and Discussion
In this section, we give some numerical simulations to verify our main results in Section 3 and present several interesting phenomena, observed in simulation, for which we cannot yet give a theoretical proof. We consider only the case $n = 2$ in (4).
### 4.1. Numerical Simulations
Different parameters will be used for the simulations; some data come from [20]. Figure 1 corresponds to the case $\delta = 0.4$, $p_1 = 0.1$, $p_2 = 0.15$, $a = 0.1$, $\tau_1 = 12$, and $\tau_2 = 15$; under these conditions, we have $0 < (p_1 + p_2)/\delta = 0.625 < 1$. We choose the initial condition $\psi(\theta,x) = 1$, $(\theta,x) \in [-15,0]\times[0,1]$, and the solution $N(t,x)$ is decreasing and almost zero by time 160.

Figure 1
Parameters: $\delta = 0.4$, $p_1 = 0.1$, $p_2 = 0.15$, $a = 0.1$, $\tau_1 = 12$, and $\tau_2 = 15$. Initial condition: $\psi(\theta,x) = 1$, $(\theta,x) \in [-15,0]\times[0,1]$.

Figure 2 corresponds to the case $\delta = 0.1$, $p_1 = 0.1$, $p_2 = 0.15$, $a = 0.2$, $\tau_1 = 12$, and $\tau_2 = 15$; under these conditions, we have $1 < (p_1 + p_2)/\delta = 2.5 < e$ and $N^* = 4.58145$. We choose the initial condition $\psi(\theta,x) = 4 + \sin\theta$, $(\theta,x) \in [-15,0]\times[0,1]$. From Figure 2, we observe that the solution $N(t,x)$ oscillates around days 13 and 14; however, $N(t,x)$ tends to $N^*$ as $t$ approaches 100 days. Therefore, Figures 1 and 2 support our main results (Theorems 5 and 6).

Figure 2
Parameters: $\delta = 0.1$, $p_1 = 0.1$, $p_2 = 0.15$, $a = 0.2$, $\tau_1 = 12$, $\tau_2 = 15$, and $N^* = 4.58145$. Initial condition: $\psi(\theta,x) = 4 + \sin\theta$, $(\theta,x) \in [-15,0]\times[0,1]$.
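For readers who want to reproduce figures of this kind, a rough method-of-lines discretization of (4)–(6) on $\Omega = [0,1]$ is sketched below. This is our own illustration, not the authors' code: it assumes forward Euler in time, a three-point Laplacian, and ghost-node Neumann endpoints, and with $\psi \equiv 1$ it mirrors the Figure 1 setup.

```python
import numpy as np

# Method-of-lines sketch of (4)-(6) on Omega = [0, 1] with n = 2 delays.
delta, a = 0.4, 0.1
p, tau = [0.1, 0.15], [12.0, 15.0]
M, dx = 21, 0.05                 # spatial grid on [0, 1]
dt, T = 0.001, 160.0             # dt < dx**2 / 2 for diffusion stability

steps = int(T / dt)
lags = [int(ti / dt) for ti in tau]
hist = max(lags)
N = np.ones((steps + hist + 1, M))   # rows 0..hist hold the history psi = 1

for k in range(hist, hist + steps):
    u = N[k]
    lap = np.empty(M)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2.0 * (u[1] - u[0]) / dx**2       # ghost-node Neumann condition
    lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    fb = sum(pi * N[k - L] * np.exp(-a * N[k - L]) for pi, L in zip(p, lags))
    N[k + 1] = u + dt * (lap - delta * u + fb)

print(N[-1].max())   # close to zero by t = 160, as in Figure 1
```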
### 4.2. Discussion
In Section 3, we obtained two main results under the conditions $\sum_{i=1}^{n}(p_i/\delta) \le 1$ and $1 < \sum_{i=1}^{n}(p_i/\delta) \le e$, which are independent of the delays $\tau_i$, $i = 1, 2, \ldots, n$. A natural question is what happens when $\sum_{i=1}^{n}(p_i/\delta) > e$ and the delays $\tau_i$ are varied. By analogy with Theorem 3 in Luo and Liu [16], we present the following open problems.

Open Problem 1.
If $\sum_{i=1}^{n}(p_i/\delta) > e$ and $aN^*(e^{\delta\tau} - 1) \le 1$, then every nontrivial solution $N(t,x)$ of (4)–(6) satisfies
$$\lim_{t\to\infty} N(t,x) = N^*, \quad \text{uniformly in } x. \tag{38}$$

Figure 3 corresponds to the case $\delta = 0.01$, $p_1 = 0.5$, $p_2 = 0.5$, $a = 0.2$, $\tau_1 = 12$, $\tau_2 = 15$, and $N^* = 23.0259$, with initial condition $\psi(\theta,x) = 10 + \sin\theta$, $(\theta,x) \in [-15,0]\times[0,1]$. Under these conditions, we have $(p_1 + p_2)/\delta = 100 > e$ and $aN^*(e^{\delta\tau} - 1) = 0.745274 < 1$. These sufficient conditions for the global attractivity of the equilibrium $N^*$ depend on the coefficients and the delay, and Figure 3 suggests that Open Problem 1 holds, but we cannot prove it.

Figure 3
Parameters: $\delta = 0.01$, $p_1 = 0.5$, $p_2 = 0.5$, $a = 0.2$, $\tau_1 = 12$, $\tau_2 = 15$, and $N^* = 23.0259$. Initial condition: $\psi(\theta,x) = 10 + \sin\theta$, $(\theta,x) \in [-15,0]\times[0,1]$.

From Figure 4, we have $(p_1 + p_2)/\delta = 5 > e$ and $aN^*(e^{\delta\tau} - 1) = 30.717 > 1$. The condition is not satisfied, but $N^*$ is still globally attractive.

Figure 4
Parameters: $\delta = 0.2$, $p_1 = 0.5$, $p_2 = 0.5$, $a = 0.2$, $\tau_1 = 12$, $\tau_2 = 15$, and $N^* = 8.04719$. Initial condition: $\psi(\theta,x) = 9 + \sin\theta$, $(\theta,x) \in [-15,0]\times[0,1]$.

From Figure 5, we have $(p_1 + p_2)/\delta = 50 > e$ and $aN^*(e^{\delta\tau} - 1) = 13.6204 > 1$. The condition is not satisfied, and the global attractivity of $N^*$ fails. Moreover, Figure 5 shows that there is a periodic solution, which is very interesting. We conjecture that the system undergoes a Hopf bifurcation as the parameters change. Therefore, we state the following open problem.

Figure 5
Parameters: $\delta = 0.1$, $p_1 = 3$, $p_2 = 2$, $a = 0.2$, $\tau_1 = 12$, $\tau_2 = 15$, and $N^* = 19.5601$. Initial condition: $\psi(\theta,x) = 10 + \sin\theta$, $(\theta,x) \in [-15,0]\times[0,1]$.

Open Problem 2.
Under suitable conditions, the system (4)–(6) undergoes a Hopf bifurcation.

Remark 8.
We have not yet studied these two problems in depth: the nonmonotonicity of the nonlinear term in (4) makes Open Problem 1 very difficult, and the presence of multiple delays prevents us from proving Open Problem 2.
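For reference, the delay-dependent quantity $aN^*(e^{\delta\tau} - 1)$ in Open Problem 1 (with $\tau = \max_i \tau_i$) can be evaluated directly; the small Python sketch below, ours rather than the authors', reproduces the three values quoted above for Figures 3–5.

```python
import math

def op1_condition(p, delta, a, tau):
    """Evaluate a * N* * (e^{delta*tau} - 1) with tau the largest delay."""
    n_star = math.log(sum(p) / delta) / a
    return a * n_star * (math.exp(delta * tau) - 1.0)

for p, d in [([0.5, 0.5], 0.01), ([0.5, 0.5], 0.2), ([3, 2], 0.1)]:
    print(op1_condition(p, d, a=0.2, tau=15))   # ~0.745, ~30.72, ~13.62
```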
---
*Source: 101764-2013-04-23.xml* | 101764-2013-04-23_101764-2013-04-23.md | 18,603 | Global Attractivity of a Diffusive Nicholson's Blowflies Equation with Multiple Delays | Xiongwei Liu; Xiao Wang | Abstract and Applied Analysis
(2013) | Mathematical Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2013/101764 | 101764-2013-04-23.xml | ---
## Abstract
The present paper considers a diffusive Nicholson's blowflies model with multiple delays under a Neumann boundary condition. Delay independent conditions are derived for the global attractivity of the trivial equilibrium and the positive equilibrium, respectively. Two open problems concerning the stability of positive equilibrium and the occurrence of Hopf bifurcation are proposed.
---
## Body
## 1. Introduction
Since blowflies are important parasites of the sheep industry in some countries such as Australia, based on the experimental data of Nicholson [1, 2], Gurney et al. [3] first proposed Nicholson’s blowflies equation
(1)N˙(t)=-δN(t)+pN(t-τ)e-aN(t-τ),t>0,
where N(t) is the size of the adult blowflies population at time t; p is the maximum per capita daily egg production rate; 1/a is the size at which the blowflies population reproduces at its maximum rate; δ is the per capita daily adult death rate; τ is the generation time. For this equation, global attractivity and oscillation of solutions have been investigated by several authors (see [4–9]).It is impossible that the size of the adult blowflies population is independent of a spatial variable; therefore, Yang and So [10] investigated both temporal and spatial variations of the diffusive Nicholson’s blowflies equation
(2)∂N(t,x)∂t=ΔN(t,x)-δN(t,x)+pN(t-τ,x)e-aN(t-τ,x),inD≜(0,∞)×Ω
under Neumann boundary condition and gave the similar sufficient conditions for oscillation of all positive solutions about the positive steady state. Whereafter, many authors studied the various dynamical behaviors for this equation; we refer to Lin and Mei [11], Saker [12], Wang and Li [13], and Yi and Zou [14].Meanwhile, one can consider a nonlinear equation with several delays because of variability of the generation time; for this purpose, Györi and Ladas [15] and Kulenović and Ladas [6] proposed the following generalized Nicholson’s blowflies model:
(3)N′(t)=-δN(t)+∑i=1npiN(t-τi)e-aiN(t-τi),t>0.Luo and Liu [16] studied the global attractivity of the nonnegative equilibria of (3).It is of interest to investigate both several temporal and spatial variations of the blowflies population using mathematical models. Hereby, in this paper, we consider the following system:(4)∂N(t,x)∂t=ΔN(t,x)-δN(t,x)+∑i=1npiN(t-τi,x)e-aiN(t-τi,x),inD
with Neumann boundary condition
(5)∂N(t,x)∂ν=0,onΓ≜(0,∞)×∂Ω,
and initial condition
(6)N(θ,x)=ψ(θ,x)≥0,inDτ≜[-τ,0]×Ω-,
where τi≥0,τ=max1≤i≤n{τi}, pi and ai=a, i=1,2,…,n, are all positive constants, Ω⊂ℝm is a bounded domain with a smooth boundary ∂Ω, ΔN(t,x)=∑i=1m((∂i2N(t,x))/(∂xi2)), (∂/∂ν) denotes the exterior normal derivative on ∂Ω, and ψ(θ,x) is Hölder continuous in Dτ with ψ(0,x)∈C1(Ω-).Though the global attractivity of the nonnegative equilibria of (2) has been studied by Yang and So [10] and Wang and Li [13, 17], they just gave some sufficient conditions. Furthermore, as far as we know, the stability for partial functional differential equations with several delays was investigated by few papers. Motivated by the above excellent works, in this paper, we consider the global attractivity of the nonnegative equilibria of the systems (4)–(6) and present some conditions which depend on coefficients of the systems (4)–(6). When n=1, our results complement those in Yang and So [10] and Wang and Li [13].It is not difficult to see that if∑i=1npi≤δ, then (4) has a unique nonnegative equilibrium N0≡0 and if ∑i=1npi>δ, then (4) has a unique positive equilibrium N*=(1/a)ln((∑i=1npi)/δ).The rest of the paper is organized as follows. We give some lemmas and definitions in Section2 and state and prove our main results in Section 3. In Section 4, several simulations are obtained to testify our results, and some unsolved problems are discussed.
## 2. Preliminaries
In this section, we will give some lemmas which can be proved by using the similar methods as those in Yang and So [10].Lemma 1.
(i) The solutionN(t,x) of (4)–(6) satisfies N(t,x)≥0 for (t,x)∈(0,∞)×Ω-.
(ii) Ifψ(θ,x)≢0 on Dτ, then the solution N(t,x) of (4)–(6) satisfies N(t,x)>0 for (t,x)∈(τ,∞)×Ω-.Next, we will introduce the concept of lower-upper solution due to Redlinger [18] as adapted to (4)–(6).Definition 2.
A lower-upper solution pair for (4)–(6) is a pair of suitably smooth function v and w such that(i)
v≤w in D-,(ii)
v and w satisfy
(7)∂w∂t≥Δw(t,x)-δw+∑i=1npiφ(t-τi,x)e-aφ(t-τi,x),(t,x)∈D,∂w∂ν≥0,(t,x)∈Γ,∂v∂t≤Δv(t,x)-δv+∑i=1npiφ(t-τi,x)e-aφ(t-τi,x),(t,x)∈D,∂v∂ν≤0,(t,x)∈Γ
for all φ∈C(Dτ∪D-) with v≤φ≤w,(t,x)∈Dτ∪D-, and(iii)
v(θ,x)≤φ(θ,x)≤w(θ,x),(θ,x)∈Dτ.The following lemma is a special case of Redlinger [19].Lemma 3.
Let(v,w) be a lower-upper solution pair for the initial boundary value problem (4)–(6). Then, there exists a unique regular solution N(t,x) of (4)–(6) such that v≤N≤w on Dτ∪D-.The following lemma gives us boundedness of the solutionN(t,x).Lemma 4.
(i) The solutionN(t,x) of (4)-(6) satisfies
(8)limsupt→∞N(t,x)≤∑i=1npiaeδ,uniformlyinx.
(ii) There exists a constant K=K(ψ)≥0 such that N(t,x)≤K on Dτ∪D-. Proof.
Let w(t) be the solution of the following Cauchy problem:
(9) dw/dt=-δw+∑i=1npi/(ae), t>0; w(0)=max(θ,x)∈Dτ ψ(θ,x).
Solving this linear equation, we have (10) w(t)=(∑i=1npi)/(aeδ)+e-δt(w(0)-(∑i=1npi)/(aeδ)), t≥0.
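For completeness, (10) follows from (9) by the standard integrating-factor computation:

```latex
% Multiply (9) by e^{\delta t}:
\frac{d}{dt}\big(e^{\delta t} w(t)\big)
   = e^{\delta t}\big(w'(t) + \delta w(t)\big)
   = e^{\delta t}\,\frac{\sum_{i=1}^{n} p_{i}}{ae},
% then integrate over [0, t]:
e^{\delta t} w(t) - w(0)
   = \frac{\sum_{i=1}^{n} p_{i}}{ae\,\delta}\big(e^{\delta t} - 1\big)
\;\Longrightarrow\;
w(t) = \frac{\sum_{i=1}^{n} p_{i}}{ae\,\delta}
     + e^{-\delta t}\Big(w(0) - \frac{\sum_{i=1}^{n} p_{i}}{ae\,\delta}\Big).
```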
Taking (11) w¯(t)={w(0), t∈[-τ,0]; w(t), t>0},
then (0,w¯(t)) is a lower-upper solution pair for (4)–(6). In fact, for any φ∈C(Dτ∪D-) with 0≤φ≤w¯(t), (t,x)∈Dτ∪D-, one can get
(12) ∂w¯(t)/∂t-Δw¯(t)+δw¯(t)-∑i=1npiφ(t-τi,x)e-aφ(t-τi,x) ≥ ∂w¯(t)/∂t+δw¯(t)-∑i=1npi/(ae) = dw/dt+δw-∑i=1npi/(ae) = 0.
By Lemma 3, there is a unique regular solution N(t,x) such that
(13) 0≤N(t,x)≤w¯(t), (t,x)∈Dτ∪D-.
Note that (14) limt→+∞w¯(t)=limt→+∞w(t)=(∑i=1npi)/(aeδ).
Therefore, (8) holds, and there exists K(ψ)>0 such that w¯(t)≤K(ψ) for any t∈(-τ,∞) and
(15) 0≤N(t,x)≤K(ψ), (t,x)∈Dτ∪D-.
This completes the proof of Lemma 4.
## 3. Main Results and Proofs
Theorem 5.
Assume that ∑i=1npi≤δ; then every solution N(t,x) of (4)–(6) tends to N0=0 (uniformly in x) as t→+∞. Proof.
By Lemma 4, without loss of generality, let 0<N(t,x)≤(∑i=1npi)/(aeδ) for (t,x)∈Dτ∪D-. Under the condition ∑i=1npi≤δ, we can get
(16) 0<N(t,x)≤1/(ae)<1/a for (t,x)∈Dτ∪D-.
Define m(t) and y(t) to be the solutions of the following two delay equations, respectively:
(17) m′(t)=-δm(t)+∑i=1npim(t-τi)e-am(t-τi), t>0; m(θ)=minx∈Ω-ψ(θ,x), θ∈[-τ,0]; y′(t)=-δy(t)+∑i=1npiy(t-τi)e-ay(t-τi), t>0; y(θ)=maxx∈Ω-ψ(θ,x), θ∈[-τ,0].
Using methods similar to those in the proof of Lemma 4, we can get that
(18) limsupt→∞m(t)≤(∑i=1npi)/(aeδ)<1/a, limsupt→∞y(t)≤(∑i=1npi)/(aeδ)<1/a
under the condition ∑i=1npi≤δ, where m(t) and y(t) are the solutions of (17).
Since N(t,x)<1/a, for any φ∈C(Dτ∪D-) with m(t)≤φ≤y(t)<1/a, one can get
(19) ∂m(t)/∂t-Δm(t)+δm(t)-∑i=1npiφ(t-τi,x)e-aφ(t-τi,x) ≤ ∂m(t)/∂t+δm(t)-∑i=1npim(t-τi)e-am(t-τi) = 0; ∂y(t)/∂t-Δy(t)+δy(t)-∑i=1npiφ(t-τi,x)e-aφ(t-τi,x) ≥ ∂y(t)/∂t+δy(t)-∑i=1npiy(t-τi)e-ay(t-τi) = 0.
Therefore, from Definition 2, (m(t),y(t)) is a lower-upper solution pair for (4)–(5) with initial condition m(θ)≤ψ(θ,x)≤y(θ) on Dτ. Consequently, by Lemma 3, we have
(20) m(t)≤N(t,x)≤y(t) on [-τ,+∞)×Ω-.
By Theorem 1 of Luo and Liu [16], it follows from ∑i=1npi≤δ that the solutions m(t) and y(t) of (17) both satisfy
(21) limt→∞m(t)=0, limt→∞y(t)=0.
Hence, we complete the proof of Theorem 5. Theorem 6.
If 1<∑i=1n(pi/δ)≤e, then every nontrivial solution N(t,x) of (4)–(6) satisfies
(22) limt→∞N(t,x)=N*, uniformly in x. Proof.
Let f(x)=xe-ax; then f(x) is increasing on (0,1/a) and decreasing on (1/a,+∞), with f(1/a)=maxx∈[0,∞)f(x), and N*=(1/a)ln(∑i=1n(pi/δ))≤1/a for 1<∑i=1n(pi/δ)≤e. Let g(y)=∑i=1npif(y); then it is not difficult to verify that the function g(y) satisfies the following conditions: (g1)
the function g(y) is increasing on (0,1/a) and decreasing on (1/a,+∞), with maxx∈[0,∞)g(x)=g(1/a)=∑i=1n(pi/ae); (g2)
g(y)>δy for y∈(0,N*) and g(y)<δy for y∈(N*,+∞).
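Condition (g2) can be checked in one line (for y>0):

```latex
% With g(y) = \sum_{i=1}^{n} p_{i}\, y e^{-ay} and y > 0:
g(y) > \delta y
\iff \sum_{i=1}^{n} p_{i}\, e^{-ay} > \delta
\iff y < \frac{1}{a}\ln\!\Big(\frac{\sum_{i=1}^{n} p_{i}}{\delta}\Big) = N^{*},
% and the reversed inequalities give g(y) < \delta y for y > N^{*}.
```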
There are now two possible cases to consider.
Case 1 (N*<1/a). In view of Lemma 4, we may also assume without loss of generality that every solution N(t,x) of (4)–(6) satisfies
(23) 0≤N(t,x)≤g(1/a)/δ=(∑i=1npi)/(aeδ)<1/a, on Dτ∪D-.
Let N_(t)=minx∈Ω-N(t,x), N¯(t)=maxx∈Ω-N(t,x), N_=liminft→∞N_(t), and N¯=limsupt→∞N¯(t). By (23), we have
(24) 0≤N_≤N¯≤g(1/a)/δ=(∑i=1npi)/(aeδ)<1/a.
From Lemma 1(ii), let
(25) z0=min{min(t,x)∈[2τ,∞)×Ω- N(t,x), N*}>0, y0=1/a.
Let I∞={1,2,…}. Now, we define two sequences {zk} and {yk} satisfying, respectively,
(26) zk=g(zk-1)/δ, k∈I∞; yk=g(yk-1)/δ, k∈I∞.
We prove that {zk} and {yk} are monotonic and bounded. First, we prove that {zk} is monotonically increasing with least upper bound N*. Noting (g1) and (g2), we have
(27) z1=g(z0)/δ>z0, z1=g(z0)/δ<g(N*)/δ=N*.
By induction and direct computation, we have (28) 0<z0<z1<⋯<limk→∞zk=N*.
Similarly, we have (29) y0>y1>⋯>limk→∞yk=N*>0.
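The convergence of both sequences in (26) to N* is easy to observe numerically. Below is a minimal sketch using the parameter values of Figure 2 in Section 4 (the choice of z0 is ours, for illustration):

```python
import math

# Parameters of Figure 2 (Section 4): n = 2, 1 < (p1+p2)/delta <= e
delta, p, a = 0.1, [0.1, 0.15], 0.2
P = sum(p)                                # sum of the birth rates p_i
N_star = math.log(P / delta) / a          # positive equilibrium N*

def g(y):
    # g(y) = sum_i p_i * y * exp(-a*y), the total delayed birth term
    return P * y * math.exp(-a * y)

# Monotone sequences (26): z_k increases from below, y_k decreases from above
z, y = 0.5 * N_star, 1.0 / a              # z0 < N*, y0 = 1/a >= N*
for k in range(60):
    z, y = g(z) / delta, g(y) / delta     # z_k = g(z_{k-1})/delta, etc.

print(N_star, z, y)   # both sequences converge to N* ~ 4.5815
```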
Define v1(t) and w1(t) to be the solutions of the following differential equations, respectively:
(30) v1′(t)=-δ[v1(t)-z1], t≥3τ; v1(θ)=z0<N*, θ∈[2τ,3τ]; w1′(t)=-δ[w1(t)-y1], t≥3τ; w1(θ)=y0>N*, θ∈[2τ,3τ].
It follows from (24) and (25) that z0≤N(t,x)≤y0 for any (t,x)∈[2τ,∞)×Ω-. Considering (30), for any (t,x)∈[2τ,∞)×Ω-, we have
(31) ∂v1(t)/∂t=Δv1(t)-δv1(t)+g(z0)≤Δv1(t)-δv1(t)+g(N(t-τ,x)); ∂w1(t)/∂t=Δw1(t)-δw1(t)+g(y0)≥Δw1(t)-δw1(t)+g(N(t-τ,x)).
Therefore, from Definition 2, (v1(t),w1(t)) is a lower-upper solution pair for (4)–(5) with initial condition z0≤N(t,x)≤y0 on [2τ,3τ]×Ω-. Consequently, by Lemma 3, we have
(32) v1(t)≤N(t,x)≤w1(t) on [2τ,∞)×Ω-.
Note that w1(t) is monotonically decreasing for t≥3τ with limt→∞w1(t)=y1, while v1(t) is monotonically increasing for t≥3τ with limt→∞v1(t)=z1. Hence,
(33) z1=limt→∞v1(t)≤N_≤N¯≤limt→∞w1(t)=y1.
Define vn(t) and wn(t) to be the solutions of the following differential equations, respectively:
(34) vn′(t)=-δ[vn(t)-zn], t≥3τ; vn(θ)=zn-1<N*, θ∈[2τ,3τ]; wn′(t)=-δ[wn(t)-yn], t≥3τ; wn(θ)=yn-1>N*, θ∈[2τ,3τ].
Repeating the above procedure, we have the following relation: (35) z1<z2<⋯<zn≤N_≤N¯≤yn<⋯<y2<y1.
By (28) and (29), and taking limits on both sides of (35), we have
(36) N*=limn→∞zn≤N_≤N¯≤limn→∞yn=N*,
which implies
(37) limt→∞N(t,x)=N*, uniformly in x.
Case 2 (N*=y0). Similarly, let yk=N* and let zk be the same as in the proof of Case 1; we can again get (35). Hence, the proof of Theorem 6 is complete. Remark 7.
Our main results are also valid when N does not depend on a spatial variable x∈Ω in (4).
## 4. Numerical Simulations and Discussion
In this section, we give some numerical simulations to verify our main results in Section 3 and present several interesting phenomena, observed in simulations, for which we cannot give a theoretical proof. We consider only the case n=2 in (4).
### 4.1. Numerical Simulations
Different parameters are used for the simulations, and some data come from [20]. Figure 1 corresponds to the case δ=0.4, p1=0.1, p2=0.15, a=0.1, τ1=12, and τ2=15; under these conditions, we have 0<(p1+p2)/δ=0.625<1. We choose the initial condition ψ(θ,x)=1, (θ,x)∈[-15,0]×[0,1]; the solution N(t,x) is decreasing and almost zero by time 160. Figure 1
Parameters: δ=0.4, p1=0.1, p2=0.15, a=0.1, τ1=12, and τ2=15. Initial condition is ψ(θ,x)=1, (θ,x)∈[-15,0]×[0,1]. Figure 2 corresponds to the case δ=0.1, p1=0.1, p2=0.15, a=0.2, τ1=12, and τ2=15; under these conditions, we have 1<(p1+p2)/δ=2.5<e and N*=4.58145. Choose the initial condition ψ(θ,x)=4+sinθ, (θ,x)∈[-15,0]×[0,1]. From Figure 2, we can observe that the solution N(t,x) oscillates around days 13 and 14; however, N(t,x) tends to N* as t approaches 100 days. Therefore, Figures 1 and 2 support our main results (Theorems 5 and 6). Figure 2
Parameters: δ=0.1, p1=0.1, p2=0.15, a=0.2, τ1=12, τ2=15, and N*=4.58145. Initial condition is ψ(θ,x)=4+sinθ, (θ,x)∈[-15,0]×[0,1].
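To make the setup concrete, the following is a minimal numerical sketch of (4)–(6) for the Figure 2 parameters. The discretization choices (method of lines on a 1D domain, explicit Euler in time, central differences in space) are our own assumptions; the paper does not state which solver was used.

```python
import numpy as np

# Method-of-lines sketch of system (4)-(6) with n = 2 delays on Omega = (0, 1)
# with Neumann boundary conditions, using the parameters of Figure 2.
delta, a = 0.1, 0.2
p, tau = [0.1, 0.15], [12.0, 15.0]
tau_max = max(tau)

M = 21                        # spatial grid points on [0, 1]
dt = 0.001                    # satisfies dt <= dx^2/2 (explicit diffusion stability)
T = 100.0
x = np.linspace(0.0, 1.0, M)
dx = x[1] - x[0]

lag = [int(round(ti / dt)) for ti in tau]   # delays measured in time steps
n_hist = max(lag)
steps = int(round(T / dt))

# History on [-tau, 0]: psi(theta, x) = 4 + sin(theta), as in Figure 2
N = np.empty((n_hist + steps + 1, M))
for k in range(n_hist + 1):
    N[k, :] = 4.0 + np.sin(-tau_max + k * dt)

def laplacian(u):
    # 1D Laplacian with homogeneous Neumann conditions via reflected ghost points
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2.0 * (u[1] - u[0]) / dx**2
    lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    return lap

for k in range(n_hist, n_hist + steps):
    u = N[k, :]
    birth = sum(pi * N[k - li, :] * np.exp(-a * N[k - li, :])
                for pi, li in zip(p, lag))
    N[k + 1, :] = u + dt * (laplacian(u) - delta * u + birth)

N_star = np.log(sum(p) / delta) / a
print(N_star, N[-1, :].mean())   # the profile settles near N* = 4.58145
```

With these values the computed solution approaches N* uniformly in x, consistent with Theorem 6.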
### 4.2. Discussion
In Section 3, we obtain two main results under the conditions ∑i=1n(pi/δ)≤1 and 1<∑i=1n(pi/δ)≤e, which are independent of the delays τi, i=1,2,…,n. A natural problem is what happens when ∑i=1n(pi/δ)>e and the delays τi, i=1,2,…,n, are varied. In analogy with Theorem 3 in Luo and Liu [16], we present the following open problems. Open Problem 1.
If ∑i=1n(pi/δ)>e and aN*(eδτ-1)≤1, then every nontrivial solution N(t,x) of (4)–(6) satisfies
(38) limt→∞N(t,x)=N*, uniformly in x. Figure 3 corresponds to the case δ=0.01, p1=0.5, p2=0.5, a=0.2, τ1=12, τ2=15, and N*=23.0259, with initial condition ψ(θ,x)=10+sinθ, (θ,x)∈[-15,0]×[0,1]. Under these conditions, we have (p1+p2)/δ=100>e and aN*(eδτ-1)=0.745274<1. These sufficient conditions for the global attractivity of the equilibrium N* depend on both the coefficients and the delays, and Figure 3 suggests that Open Problem 1 is true, but we cannot prove it. Figure 3
Parameters: δ=0.01, p1=0.5, p2=0.5, a=0.2, τ1=12, τ2=15, and N*=23.0259. Initial condition is ψ(θ,x)=10+sinθ, (θ,x)∈[-15,0]×[0,1]. From Figure 4, we have (p1+p2)/δ=5>e and aN*(eδτ-1)=30.717>1. The condition is not satisfied, but N* is still globally attractive. Figure 4
Parameters: δ=0.2, p1=0.5, p2=0.5, a=0.2, τ1=12, τ2=15, and N*=8.04719. Initial condition is ψ(θ,x)=9+sinθ, (θ,x)∈[-15,0]×[0,1]. From Figure 5, we have (p1+p2)/δ=50>e and aN*(eδτ-1)=13.6204>1. The condition is not satisfied, and the global attractivity of N* fails. Moreover, Figure 5 shows that there is a periodic solution, which is very interesting. We conjecture that the reason is that the system undergoes a Hopf bifurcation as the parameters change. Therefore, we state the following open problem. Figure 5
Parameters: δ=0.1, p1=3, p2=2, a=0.2, τ1=12, τ2=15, and N*=19.5601. Initial condition is ψ(θ,x)=10+sinθ, (θ,x)∈[-15,0]×[0,1].
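The quantities quoted for Figures 3–5 can be verified with a few lines of arithmetic (taking τ=max{τ1,τ2}=15, as τ is defined in Section 1):

```python
import math

# Check (p1+p2)/delta, N* = (1/a) ln((p1+p2)/delta), and the Open Problem 1
# quantity a*N*(e^(delta*tau) - 1) for the parameters of Figures 3-5.
cases = {
    "Figure 3": dict(delta=0.01, p1=0.5, p2=0.5, a=0.2, tau=15.0),
    "Figure 4": dict(delta=0.2,  p1=0.5, p2=0.5, a=0.2, tau=15.0),
    "Figure 5": dict(delta=0.1,  p1=3.0, p2=2.0, a=0.2, tau=15.0),
}
for name, c in cases.items():
    ratio = (c["p1"] + c["p2"]) / c["delta"]
    N_star = math.log(ratio) / c["a"]
    crit = c["a"] * N_star * (math.exp(c["delta"] * c["tau"]) - 1.0)
    print(f"{name}: ratio={ratio:g}, N*={N_star:.4f}, criterion={crit:.4f}")
# Figure 3: N*=23.0259, criterion 0.7453 <= 1 (Open Problem 1 applies)
# Figure 4: N*=8.0472,  criterion 30.7170 > 1
# Figure 5: N*=19.5601, criterion 13.6204 > 1
```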
Open Problem 2. Under suitable conditions, the system (4)–(6) undergoes a Hopf bifurcation. Remark 8.
We have not yet studied these two problems intensively: the nonmonotonicity of the nonlinear term in (4) makes Open Problem 1 very difficult to solve, and the multiple delays prevent us from proving Open Problem 2.
---
*Source: 101764-2013-04-23.xml* | 2013 |
# Thermal Annealing of Exfoliated Graphene
**Authors:** Wang Xueshen; Li Jinjin; Zhong Qing; Zhong Yuan; Zhao Mengke
**Journal:** Journal of Nanomaterials
(2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101765
---
## Abstract
Monolayer graphene is obtained by mechanical exfoliation using scotch tape. The effects of thermal annealing on the tape residues and edges of graphene are investigated. Atomic force microscope images show that almost all the residues can be removed in N2/H2 at 400°C but only agglomerate in vacuum. Raman spectra of the annealed graphene show that both the 2D peak and the G peak blueshift. The full width at half maximum (FWHM) of the 2D peak becomes larger, and the intensity ratio of the 2D peak to the G peak decreases. The edges of graphene are completely attached to the surface of the substrate after annealing.
---
## Body
## 1. Introduction
Isolated graphene was first prepared by mechanical exfoliation of highly oriented pyrolytic graphite (HOPG) [1]. Such exfoliated graphene exhibits high crystal quality and draws significant attention because of its unique electrical properties [2–5]. The mobility of exfoliated graphene can be as high as 200000 cm2·V−1·s−1 [6, 7], which allows for the observation of the quantum Hall effect even at room temperature [8]. It is therefore a promising candidate for a high-temperature quantum resistance standard [9–13] for metrology applications. Graphene films, whether obtained by micromechanical exfoliation [1, 14, 15], synthesized by chemical vapor deposition [16–18], or grown epitaxially [19, 20], usually suffer from surface contamination, which affects their intrinsic electrical properties. Currently, many methods are being investigated for removing the surface resist residues introduced on graphene films by the transfer and E-beam lithography processes. Among these methods, thermal annealing has been shown to be very reproducible [21–23]. However, insufficient attention has been paid to the surface cleaning of exfoliated graphene films before device fabrication [24]. The scotch tape residues on exfoliated graphene films need to be removed in order to fabricate high-quality devices. In this paper, we demonstrate the effect of thermal annealing in removing the tape residues from exfoliated graphene films and flattening their edges. The thermal annealing process is optimized by adjusting the annealing gases and the temperature. AFM and Raman spectroscopy are used to characterize the annealed films.
## 2. Materials and Methods
Graphene films are obtained by mechanical exfoliation of Kish graphite with 3M magic scotch tapes and then transferred to the SiO2 (300 nm)/Si substrate. An optical microscope is first used to locate the graphene. Raman spectra are then measured on a LabRAM HR800 at a wavelength of 632.8 nm to identify the number of graphene layers. An atomic force microscope (AFM) is used to characterize the surface of the graphene in tapping mode with an insulating probe. Two pieces of graphene, referred to as M1 and M2, are obtained on the SiO2 substrate in one exfoliation process for contrast experiments. Acetone is first used to clean the films. Optical images of the studied graphene are shown in Figure 1(a) for M1 and Figure 1(b) for M2. We can observe that there are still tape residues on and beside the graphene M1 and M2, indicated by white circles, even after cleaning with acetone. These residues will increase the contact resistance and affect the attachment of electrode metals and thus should be removed before device fabrication. Optical images (top) and the corresponding Raman spectra (bottom) of monolayer graphene M1 (a) and M2 (b) on SiO2/Si before annealing. The scale bars are 10 μm in length (colors online).
(a)
(b) The FWHM of the 2D peak is the key factor in determining the number of graphene layers: for monolayer graphene, it is lower than 40 cm−1 with a single Lorentzian peak. The FWHMs of the 2D peaks of M1 and M2 are both 24 cm−1; the peaks are highly symmetrical and can each be fitted with a single Lorentzian [25]. Therefore, the studied samples M1 and M2 are both monolayer graphene. Thermal annealing is performed in an annealing oven, either in vacuum at a low pressure of 5 mbar or in N2/H2. The heating and cooling rates should be low to prevent the edge of the graphene from folding or rolling up [26]. In our studies, both the heating and cooling rates are 6°C/min. The annealing conditions of samples M1 and M2 are shown in Table 1. Graphene M1 is first annealed at 300°C and then at 400°C in vacuum at a pressure of 5 mbar, and lastly annealed at 400°C in N2/H2 at flow rates of 200 cm3/min (sccm) and 50 sccm and a pressure of 50 mbar for 2 hours. For M2, annealing at 300°C and then at 400°C in N2/H2 at a pressure of 50 mbar for 2 hours is performed. Table 1
Annealing conditions of monolayer graphene M1 and M2.
| Annealing no. | M1 | M2 |
|---|---|---|
| 1 | 300°C, 5 mbar, 2 hours | 300°C, 50 mbar, 2 hours, N2/H2 (200 sccm/50 sccm) |
| 2 | 400°C, 5 mbar, 2 hours | 400°C, 50 mbar, 2 hours, N2/H2 (200 sccm/50 sccm) |
| 3 | 400°C, 50 mbar, 2 hours, N2/H2 (200 sccm/50 sccm) | — |
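The layer identification described above rests on fitting the Raman 2D band with a single Lorentzian and reading off its FWHM. A minimal sketch of such a fit follows; the data are synthetic and the peak parameters (center, amplitude, noise level) are hypothetical, chosen only to mimic the reported 24 cm−1 FWHM.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(w, A, w0, fwhm, y0):
    # Single Lorentzian line shape: A*(G/2)^2 / ((w - w0)^2 + (G/2)^2) + y0
    half = fwhm / 2.0
    return A * half**2 / ((w - w0)**2 + half**2) + y0

# Synthetic 2D-band data (hypothetical values; real spectra come from the
# spectrometer): center near 2640 cm^-1, FWHM 24 cm^-1 as reported for M1/M2
w = np.linspace(2560.0, 2720.0, 400)
rng = np.random.default_rng(0)
y = lorentzian(w, 1000.0, 2640.0, 24.0, 50.0) + rng.normal(0.0, 10.0, w.size)

popt, _ = curve_fit(lorentzian, w, y, p0=[800.0, 2635.0, 30.0, 0.0])
A, w0, fwhm, y0 = popt
print(f"fitted FWHM = {fwhm:.1f} cm^-1 -> monolayer if below 40 cm^-1")
```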
## 3. Results and Discussion
Figure 2 shows the AFM topographic images and the Raman spectra of M1 before and after each annealing step. Before annealing, the tape residues form films on and beside M1, as shown in the AFM image of Figure 2(a). The Raman peaks between 1100 cm−1 and the G peak are due to the tape residue films. After annealing at 300°C in vacuum for 2 h, the residue film on the surface of M1 agglomerates into large particles, and the film beside M1 becomes thinner, as shown in the AFM image of Figure 2(b). The peaks of the tape residues are still present in the corresponding Raman spectrum, and the intensity ratios of these peaks to the G peak become larger, which indicates that the residues are not removed. A D peak at 1340 cm−1 also emerges. When the annealing temperature is raised to 400°C in vacuum for 2 h, the surface residue particles diminish, and those on the substrate become much smaller and more evenly distributed, as illustrated in Figure 2(c). The Raman spectrum clearly shows that the residues are not completely removed, although the intensity ratios decrease. AFM topographic images (top) and the corresponding Raman spectra (bottom) of graphene M1. (a) Before annealing; (b) annealing at 300°C in vacuum of 5 mbar; (c) annealing at 400°C in vacuum of 5 mbar; (d) annealing at 400°C in N2/H2 of 50 mbar. Scale bars are 300 nm in length (colors online).
(a)
(b)
(c)
(d) It is obvious that annealing in vacuum does not efficiently remove the tape residues on the surface of the graphene. The residues merely congregate from films into particles and continuously change position. N2/H2 gas is then introduced into the oven at flow rates of 200 sccm and 50 sccm, respectively, at a pressure of 50 mbar, for further annealing of graphene M1. The annealing result is shown in Figure 2(d). The AFM image shows that the surface is much smoother than that in Figure 2(b). Most of the surface area of M1 is now close to the substrate, indicating that the surface shows roughness similar to that of the substrate. The absence of the intrinsic tape residue peaks in the corresponding spectrum confirms the nearly complete removal of the residues on the surface of the monolayer graphene M1. We can also observe in Figure 2(a) that the edge of M1 is not in full contact with the substrate surface. This kind of disengagement from the surface of SiO2 may cause the graphene to roll up from the edge. The height values between the two blue forks on the white line marked on graphene M1 in Figure 2(a), measured before annealing, after 300°C annealing in vacuum, after 400°C annealing in vacuum, and after 400°C annealing in N2/H2, are 3.1 nm, 1.9 nm, 1.6 nm, and 1.2 nm, respectively. The height of the edge of M1 thus decreases with annealing, and 1.2 nm is already at the level of the thickness of a graphene film on the surface of SiO2 [1, 27]. Therefore, the edge of M1 becomes fully attached to the substrate. Comparing the characterization results of annealing in vacuum and in N2/H2, it is concluded that N2/H2 gas is essential for removing the tape residues on the surface of graphene. The mechanism of this gas effect is still under investigation. We speculate that the removal of tape residues may be due to the reaggregation of the residues and their chemical decomposition; the introduction of N2/H2 may help the gasification of scission products. Further studies on the mechanism by which N2/H2 helps remove tape residues are in progress. The effect of the annealing temperature in N2/H2 gas is also studied on the monolayer graphene M2. The topographic AFM images of M2 after annealing are shown in Figure 3. There are still a few tape residue particles on the surface of M2 after annealing in N2/H2 at 300°C, as shown in Figure 3(a). However, these particles nearly vanish after further annealing at 400°C, as confirmed by the AFM image shown in Figure 3(b). This indicates that high temperature is advantageous for removing the tape residues. AFM topographic images of the graphene M2 after annealing in N2/H2 at 300°C (a) and 400°C (b); the left part of the graphene M2 is monolayer (ML) and the right part is bilayer (BL). Scale bars are 300 nm in length (colors online).
(a)
(b) The G and 2D peaks in the Raman spectrum are used to characterize the effect of annealing on the graphene M1 and M2. The blueshifts of the G peak of M1 and M2 are 10 cm−1 and 11 cm−1, and those of the 2D peak are 12 cm−1 and 11 cm−1, respectively, as shown in Figures 4(a) and 4(b). It is known that the blueshift can be caused by either strain [28, 29] or hole doping [23, 24, 30–33]. In our case, the blueshifts of the G peak and the 2D peak are close, which implies that hole doping is the main cause, since the 2D peak is more sensitive to strain than the G peak. For the graphene M1, the FWHM of the 2D peak increases with annealing temperature both in vacuum and in N2/H2, as shown in Figure 4(c). This is due to the increase of disorder, which affects the formation of the 2D peak, and the strengthening of the interaction between M1 and the SiO2 substrate with progressive annealing [34]. However, the FWHM of the 2D peak for M2 changes significantly after annealing at 300°C but increases only marginally after the subsequent annealing at 400°C in N2/H2, as shown in Figure 4(d). It can be concluded that the disorder emerges during the removal and movement of the tape residues. Most of the tape residues are removed by annealing at 300°C in N2/H2, leaving few residues to be removed in the annealing process at 400°C in N2/H2; thus the broadening of the 2D peak of M2 differs between the two steps. The intensity ratio I2D/IG, which is sensitive to doping [34, 35], decreases with increasing annealing temperature for both graphene M1 and M2, as shown by the red axes in Figures 4(c) and 4(d). The 2D peaks of graphene M1 (a) and M2 (b) after each annealing step; the FWHM (black) of the 2D peak and the intensity ratio (red) of the 2D peak and G peak for graphene M1 (c) and M2 (d) (colors online).
(a)
(b)
(c)
(d)
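Relatedly, the edge-height values quoted in the Results are read from AFM line profiles crossing the graphene edge. A toy sketch of such a step-height estimate follows; the profile is synthetic (an assumed 1.2 nm step with 0.05 nm noise), and real data would come from the AFM line scan between the two markers.

```python
import numpy as np

# Synthetic AFM line profile: substrate on the left, graphene flake on the right
rng = np.random.default_rng(1)
profile = np.concatenate([np.zeros(50), np.full(50, 1.2)])   # heights in nm
profile += rng.normal(0.0, 0.05, profile.size)               # instrument noise

substrate = profile[:40].mean()    # average over the substrate side
flake = profile[-40:].mean()       # average over the graphene side
print(f"step height = {flake - substrate:.2f} nm")           # ~1.2 nm
```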
## 4. Conclusions
In conclusion, monolayer graphene is prepared by using scotch tape to mechanically exfoliate Kish graphite, and thermal annealing in vacuum or N2/H2 is performed to remove the tape residues left on the surface of and beside the graphene. The annealing effect is characterized by AFM and Raman spectra, from which we confirm that annealing in vacuum is not effective in removing the tape residues and that N2/H2 is critical. Increasing the temperature also helps residue removal. Tape residues can be removed by annealing at 400°C in N2/H2 at a pressure of 50 mbar for 2 h. Thermal annealing brings hole doping into graphene, which accounts for the blueshifts of the G peak and the 2D peak and the decrease of the I2D/IG intensity ratio in the Raman spectra. The 2D peaks broaden due to the disorder emerging during the annealing process and the enhanced interaction between graphene and the SiO2 substrate. The edges of graphene attach completely to the surface of the substrate after thermal annealing.
---
*Source: 101765-2013-04-16.xml* | 2013 |
# Accidental Fire in the Cerrado: Its Impact on Communities of Caterpillars on Two Species of Erythroxylum
**Authors:** Cintia Lepesqueur; Helena C. Morais; Ivone Rezende Diniz
**Journal:** Psyche
(2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/101767
---
## Abstract
Among the mechanisms that influence herbivorous insects, fires, a very frequent historical phenomenon in the cerrado, appear to be an important modifying influence on lepidopteran communities. The purpose of this study was to compare the richness, abundance, frequency, and composition of species of caterpillars in two adjacent areas of cerrado sensu stricto, one recently burned and one unburned since 1994, on the experimental farm “Fazenda Água Limpa” (FAL) (15°55′S and 47°55′W), DF, Brazil. Caterpillars were surveyed on two plant species of the genus Erythroxylum: E. deciduum A. St.-Hil. and E. tortuosum Mart. (Erythroxylaceae). We inspected a total of 4,196 plants in both areas, and 972 caterpillars were found on 13.3% of these plants. The number of plants with caterpillars (frequency) differed significantly between the areas. The results indicate that recent and accidental fires have a positive effect on the abundance of caterpillars up to one year postfire, increase the frequency of caterpillars associated with Erythroxylum species in the cerrado, and do not affect the richness of caterpillars on these plants. Moreover, the fires change the species composition of caterpillars by promoting an increase in rare or opportunistic species.
---
## Body
## 1. Introduction
Systems represented by the associations of plants and insects include more than one-half of the world’s multicellular species. The impacts of disturbances, anthropogenic or otherwise, affect the characteristics of communities of herbivorous insects in any biome worldwide [1]. There is strong evidence that these disturbances result in complex changes in the interactions between plants and herbivores [2]. Fires affect communities of herbivorous insects and provide opportunities for changes in species richness, abundance, and species composition in space and time [3]. Among herbivores, Lepidoptera can serve as good indicators of environmental changes caused by these disturbances in certain habitats [4]. Fires in the cerrado are a natural phenomenon of recognized ecological importance [5] and occur during the dry season, from May to September [6, 7]. The effects of fire on the structure, composition, and diversity of plants in the cerrado are far more extensively documented [8–12] than the effects on the fauna [13–15]. Knowledge of the effects of fire on insect herbivores and their natural enemies is even more limited [3, 16, 17]. The general literature on the responses of insects to fire, in comparison with the responses to other forms of management in open habitats, indicates that a significant decrease of insects occurs soon after a fire. The magnitude of the decrease is related to the degree of exposure to flames and to the mobility of the insect [18]. In the cerrado, a very rapid and vigorous regrowth of vegetation occurs [19], and this regrowth may favor an increase in the abundance of herbivores. The caterpillar community in the cerrado is species rich, and the abundance of most species is low but highly variable throughout the year [20, 21], due primarily to the climate variability that characterizes the two seasons (dry and wet) in the region. This pattern has also been observed for herbivorous insects in New Guinea. It is characteristic of herbivorous insect communities in general and is also typical of tropical regions [22]. Among the mechanisms that influence these herbivorous insect community patterns, fires, a very frequent historical phenomenon in the cerrado, appear to be an important modifying influence on lepidopteran communities. The objective of this study was to compare the richness, relative abundance, frequency, and species composition of caterpillars between two cerrado areas, one recently burned and one unburned since 1994. The study hypothesizes that the richness, relative abundance, frequency, and species composition of the caterpillars on the host plants vary between recently burned areas and areas without recent burning (used as a control). We predict that the abundance and species richness of caterpillars will increase significantly in a recently burned area as a result of the intense regrowth of vegetation in the postfire environment [19]. The postfire environment differs greatly from the prefire environment because of the higher phenological synchrony of plants and because of changes in microclimate resulting from increased exposure to the sun.
## 2. Methodology
External folivorous caterpillars were surveyed on two plant species, Erythroxylum deciduum A. St.-Hil. and E. tortuosum Mart. (Erythroxylaceae), in two adjacent areas of cerrado sensu stricto on the experimental farm “Fazenda Água Limpa” (FAL) (15°55′S and 47°55′W), DF, Brazil. Both plant species were abundant and of similar size in the burned and unburned areas. This system, including only two plant species in the genus and their caterpillars, was chosen for study due to the need for simplification in the analysis and reduction of variables. This choice also reflected the ease of collection and identification and prior knowledge of the system in the protected areas of the cerrado. The two plant species occur at high densities in the cerrado region, and their lepidopteran fauna is known from previous studies in unburned areas [20, 23]. An accidental fire affected the entire area in 1994, and the area suffered another accidental fire on August 31, 2005. The area burned in 1994 was viewed as a control, and the area burned in 2005 was considered recently burned. Data were collected from September 2005 through August 2006. In both study areas (recently burned and control), external folivorous caterpillars were collected weekly from the foliage of 50 individuals of each of the two species of plants. All caterpillars were collected, photographed, numbered as morphospecies, and individually reared in the laboratory in plastic pots (except for gregarious caterpillars), with leaves of the host plant as food. The adults obtained from laboratory rearing were, as far as possible, identified and deposited in the Entomological Collection, Departamento de Zoologia, Universidade de Brasilia. A binomial test of two proportions was applied with a significance level of 0.05 to evaluate whether there was a consistent difference in the proportion of plants with caterpillars (relative abundance and species richness) between the areas [24]. Species rarefaction curves were constructed to analyze the species richness of caterpillars in each area [25]. EcoSim 7.0 software was used to construct these curves based on 1000 replications [26]. The Shannon-Wiener index (H′), Simpson index (D), and Berger-Parker index (Dbp) were used to compare the diversity and dominance of the community of caterpillars on Erythroxylum in the two study areas. The indices were obtained with DivEs 2.0 software [27]. The Jaccard similarity index was also applied to evaluate the degree of similarity of the species composition of the two communities. If the Jaccard index is equal to one (B=0 and C=0, i.e., no species are unique to either area), all species are shared between the two communities. If the Jaccard index is near 0, few if any species are shared.
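As a concrete illustration of these analyses, a minimal sketch of the two-proportion test and the Jaccard index follows; the helper names are ours, and the example counts are those reported in Section 3.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    # z-statistic for H0: the two proportions are equal (pooled estimate)
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1.0 - pooled) * (1.0 / n1 + 1.0 / n2))
    return (p1 - p2) / se

def jaccard(shared, only_a, only_b):
    # Sj = A / (A + B + C): shared species over all species found in either area
    return shared / (shared + only_a + only_b)

# Plants with caterpillars: 226 of 2,065 (control) versus 333 of 2,131 (burned)
print(two_proportion_z(226, 2065, 333, 2131))   # ~ -4.46, as reported

# Species overlap: 18 shared species, richness 29 (control) and 36 (burned),
# so 11 and 18 exclusives -> Sj = 18/47 ~ 0.38, as reported
print(jaccard(18, 29 - 18, 36 - 18))
```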
## 3. Results
We inspected a total of 4,196 plants, with similar numbers in both areas (Table 1). A total of 972 caterpillars were found on 13.3% of the plants inspected. The number of plants with caterpillars (frequency) differed significantly between areas (p1=0.11; p2=0.16; Z=-4.46; P<0.001). The probability of finding a plant with a caterpillar was smaller in the control area (one out of nine plants inspected) than in the burned area (one out of six plants). The relative abundance of caterpillars also differed significantly (p1=0.17, p2=0.30, Z=-9.69, P<0.001) between areas. Almost twice as many caterpillars were found in the burned area as in the control area (Table 1).
Number of plants with caterpillars, abundance, and richness of caterpillars on two species of Erythroxylum in two areas of cerrado sensu stricto in the FAL (burned and control areas) from September 2005 to August 2006.

| Variables | Control (%) | Burned (%) | Total |
|---|---|---|---|
| Inspected plants | 2,065 (49.2) | 2,131 (50.8) | 4,196 |
| Plants with caterpillars | 226 (10.9) | 333 (15.6) | 559 |
| Abundance of caterpillars | 346 (35.6) | 626 (64.4) | 972 |
| Richness of caterpillars | 29 (59.0) | 36 (74.0) | 47* |
*Species richness is not the sum of the richness of the two areas because some species occur in both areas.

Forty-seven species or morphospecies (hereafter treated as species) of caterpillars were recorded, belonging to at least 15 families (two species belonged to unidentified families). The burned area had 36 species, compared with 29 species in the control area (Table 1). However, this difference in species richness between the areas was not significant (p1=0.08; p2=0.06; Z=1.57; P>0.05). Even after adjustment by the rarefaction method to a common basis of an equal number of caterpillars in both areas (n=346), the species richness did not differ, and the estimated number of species varied between 24 and 32 (Table 2; Figure 1). Table 2
Diversity of caterpillars on two species of Erythroxylum in two areas of cerrado sensu stricto in the FAL (recently burned and control) from September 2005 to August 2006: number of caterpillars, species richness, estimated species richness through rarefaction in the burned area (n=346, 95% confidence interval), dominant species and dominance observed in both areas, estimated dominance by rarefaction in the burned area (n=346, 95% confidence interval), diversity index (H′), and dominance (D and Dbp).
| | Control area | Burned area |
|---|---|---|
| Number of caterpillars | 346 | 626 |
| Observed species richness | 29 | 36 |
| Estimated richness (rarefaction, n=346) | — | 24–32 |
| Dominant species | Antaeotricha sp. | Antaeotricha sp. |
| Observed dominance | 29.8% | 34.5% |
| Expected dominance | — | 31.2–37.9% |
| Shannon-Wiener diversity (H′) | 1.01 | 0.89 |
| Simpson dominance (D) | 0.16 | 0.21 |
| Berger-Parker dominance (Dbp) | 0.30 | 0.35 |

Figure 1
Rarefaction curves of caterpillar species of the control area (line with circles) and the burned area (line with stars) relative to the number of individuals, estimated from 1000 randomizations of sample order, in cerrado sensu stricto in the FAL from September 2005 to August 2006. The dotted lines indicate 95% confidence intervals.

The value of dominance was higher in the burned area (34.5%) than in the control area (29.8%) (Table 2). Likewise, the dominance for the burned area, estimated by rarefaction, was between 31.2% and 37.9%, significantly higher than the value estimated for the control area on a common basis of 346 caterpillars in both areas (Table 2). These results are also consistent with the dominance index values D and Dbp, which were higher in the burned area. The diversity index H′ was higher in the control area (Table 2). An unidentified species of Antaeotricha (Elachistidae) was dominant, with 29.8% and 34.5% of the individuals found in the control and burned areas, respectively. Ten species recorded in the control area showed intermediate dominance, between 1.2 and 7.5%, whereas six species showed intermediate dominance in the burned area, with values between 1.1 and 8.0%. The proportion of rare species, those represented by less than 1% of all caterpillars, was significantly higher (p1=0.55, p2=0.75, Z=-1.68, P<0.05) in the burned area (n=27) than in the control area (n=16). The similarity between the study areas was low (Sj = 0.38), even on a monthly basis, with January (Sj = 0.70) and June (Sj = 0.62) being the sole exceptions (Table 3). Of the 47 species recorded, 38.3% (n=18 species) were common to the two areas (Table 4), and 25.5% of the species (n=12) were restricted to the control area. The species restricted to the control area included the gregarious moth Hylesia schuessleri Strand, 1934 (Saturniidae) and the solitary Dalcerina tijucana (Schaus, 1892) (Dalceridae), both dietary generalists (Table 4). Approximately 40% of the species (n=18) were found only in the burned area. These species included Fregela semiluna (Walker, 1854) (Arctiidae), a generalist species, and Eloria subapicalis (Walker, 1855) (Noctuidae), a dietary specialist. The effects of the fire appear to be more evident for Limacodidae, as five of the eight species of this family found in the survey occurred exclusively in the control area. Certain species, however, appear to benefit from the effects of fire, for example, three species of Noctuidae found exclusively in the burned area: Cydosia mimica (Walker, 1866), Cydosia punctistriga (Schauss, 1904), and Noctuidae sp. The five most abundant species (more than 15 individuals per area) were found in both areas and are apparently restricted to the Erythroxylaceae in the region (Table 4).
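For reference, the three indices compared in Table 2 can be sketched as follows. The logarithm base in H′ is a convention of the software used (DivEs 2.0), so the natural-log values here need not reproduce Table 2 exactly; the example input uses only the five commonest burned-area species from Table 4, whereas Table 2 was computed over all species.

```python
import math

def shannon(counts):
    # Shannon-Wiener H' = -sum p_i ln(p_i) over species proportions p_i
    n = sum(counts)
    return -sum(c / n * math.log(c / n) for c in counts if c > 0)

def simpson(counts):
    # Simpson dominance D = sum p_i^2 (larger = more dominated community)
    n = sum(counts)
    return sum((c / n) ** 2 for c in counts)

def berger_parker(counts):
    # Berger-Parker dominance: proportion of the single most abundant species
    return max(counts) / sum(counts)

# Burned-area counts of the five commonest species (Table 4), for illustration
burned_top5 = [216, 160, 84, 50, 24]
print(shannon(burned_top5), simpson(burned_top5), berger_parker(burned_top5))
```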
Table 3

Abundance of caterpillars and Jaccard similarity index between the two areas of cerrado sensu stricto in the FAL (recently burned and control) from September 2005 to August 2006, based on caterpillars found on two species of Erythroxylum.

| Months | Abundance (control area) | Abundance (burned area) | Jaccard index |
|---|---|---|---|
| Sep | 16 | 0 | 0.00 |
| Oct | 12 | 17 | 0.22 |
| Nov | 3 | 7 | 0.20 |
| Dec | 12 | 30 | 0.29 |
| Jan | 31 | 40 | 0.70 |
| Feb | 16 | 27 | 0.50 |
| Mar | 12 | 33 | 0.27 |
| Apr | 26 | 42 | 0.25 |
| May | 132 | 242 | 0.33 |
| Jun | 51 | 144 | 0.62 |
| Jul | 26 | 37 | 0.29 |
| Aug | 9 | 7 | 0.00 |
| Total | 346 | 626 | 0.38 |
Table 4

Families and species of caterpillars found on two species of Erythroxylum in burned and control areas of cerrado in the FAL from September 2005 to August 2006 (NI = no information about diet breadth; polyphagous = feeds on species from two or more families of plants; restricted = feeds only on species of Erythroxylaceae).

| Family | Species | Control area | Burned area | Diet breadth |
|---|---|---|---|---|
| Arctiidae | Fregela semiluna (Walker, 1854) | 0 | 4 | Polyphagous |
| | Paracles sp. | 6 | 2 | Polyphagous |
| Dalceridae | Acraga infusa (Schauss, 1905) | 4 | 2 | Polyphagous |
| | Acraga sp. 1 | 0 | 1 | NI |
| | Acraga sp. 2 | 0 | 2 | NI |
| | Dalceridae sp. | 0 | 1 | NI |
| | Dalcerina tijucana (Schauss, 1892) | 1 | 0 | Polyphagous |
| Elachistidae | Antaeotricha sp.* | 103 | 216 | Restricted |
| | Timocratica melanocosta (Becker, 1982) | 2 | 3 | Polyphagous |
| Gelechiidae | Dichomeris sp. 1 | 1 | 10 | Restricted |
| | Dichomeris sp. 2 | 22 | 6 | Polyphagous |
| | Dichomeris sp. 3* | 26 | 160 | Restricted |
| | Dichomeris sp. 4 | 3 | 8 | Polyphagous |
| | Dichomeris spp. (two species)* | 68 | 84 | Restricted |
| | Gelechiidae sp.* | 44 | 50 | Restricted |
| Geometridae | Cyclomia mopsaria (Guenée, 1857)* | 16 | 24 | Restricted |
| | Geometridae sp. 1 | 3 | 0 | Restricted |
| | Geometridae sp. 2 | 0 | 1 | Restricted |
| | Stenalcidia sp. 1 | 0 | 5 | NI |
| | Stenalcidia sp. 2 | 1 | 0 | Restricted |
| Limacodidae | Limacodidae sp. 1 | 0 | 1 | Polyphagous |
| | Limacodidae sp. 2 | 0 | 1 | NI |
| | Limacodidae sp. 3 | 1 | 0 | NI |
| | Limacodidae sp. 4 | 2 | 0 | NI |
| | Limacodidae sp. 5 | 2 | 0 | NI |
| | Miresa clarissa (Stoll, 1790) | 0 | 1 | Polyphagous |
| | Platyprosterna perpectinata (Dyar, 1905) | 5 | 0 | Polyphagous |
| | Semyra incisa (Walker, 1855) | 2 | 1 | Polyphagous |
| Megalopygidae | Megalopyge albicollis (Schauss, 1900) | 0 | 1 | Polyphagous |
| | Megalopyge braulio Schauss, 1924 | 0 | 1 | Polyphagous |
| | Norape sp. | 4 | 3 | Polyphagous |
| | Podalia annulipes (Boisduval, 1833) | 0 | 1 | Polyphagous |
| Noctuidae | Cydosia mimica (Walker, 1866) | 0 | 1 | Restricted |
| | Cydosia punctistriga (Schauss, 1904) | 0 | 1 | NI |
| | Eloria subapicalis (Walker, 1855) | 0 | 7 | Restricted |
| | Noctuidae sp. | 0 | 1 | Restricted |
| Notodontidae | Heterocampa sp. | 7 | 12 | Polyphagous |
| Oecophoridae | Inga haemataula (Meyrick, 1911) | 6 | 1 | Polyphagous |
| | Inga phaeocrossa (Meyrick, 1912) | 1 | 0 | Polyphagous |
| Pyralidae | Carthara abrupta (Zeller, 1881) | 12 | 3 | Polyphagous |
| Riodinidae | Emesis sp. | 1 | 0 | Polyphagous |
| | Hallonympha paucipuncta (Spitz, 1930) | 0 | 1 | Polyphagous |
| Saturniidae | Hylesia schuessleri Strand, 1934 | 1 | 0 | Polyphagous |
| Tortricidae | Platynota rostrana (Walker, 1863) | 0 | 3 | Polyphagous |
| Urodidae | Urodus sp. | 0 | 5 | Restricted |
| Unidentified | sp. 1 | 1 | 0 | NI |
| | sp. 2 | 1 | 1 | NI |
*Indicates the five commonest species.

No caterpillars were found on species of Erythroxylum until one month after the fire (Table 3). However, the relative abundance of caterpillars was higher in the burned area in all of the following months. Up to 12 months after the occurrence of the fire, the caterpillar relative abundance in the burned area remained higher than that found in the control area (Figure 2). The temporal occupation of the species of Erythroxylum by caterpillars resulted in a pattern whose abundance and richness gradually increased with sampling effort and showed a greater increase during the dry season, specifically during May and June (Figure 2). Figure 2
Cumulative number of caterpillars (bars) and species (lines) in two areas of cerrado sensu stricto in the FAL (recently burned and control) from September 2005 to August 2006.
## 4. Discussion
The sporadic and accidental fires in restricted areas of the cerrado may act to renew the vegetation [19], allowing sites to be reoccupied more rapidly by plant species. Several studies in tropical forests and in the cerrado have shown the importance of sprouting as a mechanism of postfire regeneration of shrub and tree species [28–32]. The new foliage that results from sprouting attracts a variety of herbivores. In the cerrado, a low frequency of caterpillars on host plants is a common feature [20, 33–35]. However, the recent fire in the cerrado study area produced a 4.7% increase in the frequency of caterpillars on plants of Erythroxylum. The reason for this increase may be that fire benefits herbivores by increasing the availability of resources. This high availability of resources results from the regrowth of plants, because many new leaves are synchronously produced. Although species richness did not differ between areas, the higher dominance observed in the burned area suggests a higher diversity in the control area. The most interesting feature of this system is the increase of rare species in the burned area. This increase may result from intense regrowth, which may produce new oviposition sites and new environments for these species. At the same time, nearby areas were available to act as a source for recolonization [17]. However, the rarefaction curves did not reach an asymptote. In fact, previous studies [23, 36] indicate that caterpillar species not found in our surveys occur on the two species of Erythroxylum that were examined. These additional species include Erynnis funeralis (Scudder & Burgess, 1870) (Hesperiidae), Phobetron hipparchia (Cramer, [1777]) (Limacodidae), and Automeris bilinea Walker, 1855 (Saturniidae). These species are all polyphagous and could be present on other species of host plants. The variation in the abundance of insects in the cerrado occurs regardless of the passage of fire and remains seasonal [37]. However, the mortality caused by fire produces an immediate reduction in population size. Even one month after the fire, no caterpillars were found on the plants surveyed. Moreover, the caterpillar abundance on both species of plants during all the subsequent months was higher in the area disturbed by the recent fire. Similar results have been found for adults of certain insect orders, such as Coleoptera, Hemiptera, Hymenoptera, and Lepidoptera, in the cerrado of Brasilia [37]. The return to previous levels of abundance depends on the order to which the insect belongs and ranges from two to more than thirteen months after the occurrence of the fire [3]. Up to 12 months after the occurrence of fire, the abundance of caterpillars associated with the Erythroxylum species studied here had not returned to a level comparable with that observed in the control area. Research conducted in the same region on the community of caterpillars associated with Byrsonima (Malpighiaceae) showed that if fire in the cerrado recurs every two years during the dry season, the results are quite different [38] from those previously discussed. In this case, the abundance and species richness of caterpillars in areas with frequent fires were markedly lower than in areas protected from fire for more than 30 years. These results are consistent with previous reports that fire reduces the populations of caterpillars [39] and may cause local extinction of some species [40].
However, these results from areas with frequent fires are in contrast to the results found if the fires are accidental and sporadic, as in the case of this study.Even with smaller losses than those caused by recurrent fires, the recent accidental fire dramatically increased the abundance of caterpillars and as result, the attacks on plants in the postfire period, just at the time at which most synchronous leaf production in the cerrado occurs. For this reason, this process may produce extensive damage to vegetation and may harm biodiversity conservation in the region. Furthermore, a scheme of recurrent burns during several years in the same area results in the biological and physicochemical degradation of the soil and thus in the reduction of aerial biomass [41].Although we did not replicate each treatment, our results reflect the effect of fire, as we have followed the changes in communities of caterpillars on various plant species for several years in protected areas from fire [21, 23, 38, 42], and in addition, we have surveyed caterpillars on other plant species in postfire conditions, with similar results (unpublished data). Furthermore, some studies suggest the impossibility of replication treatments when it comes from natural phenomena occurring on a large scale, as in the case of burning [43]. Thus, the results of this study indicate that the recent accidental fire had the following effects on the external folivorous caterpillars: (a) killed eggs and larvae at first but had a positive effect on the relative abundance of caterpillars up to one year postfire, (b) increased the frequency of caterpillars associated with two Erythroxylum species in the cerrado, (c) did not affect the richness of caterpillars on these plants and (d) changed the caterpillar species composition because the effects of the fire promoted increases of rare or opportunistic species.
---
# Accidental Fire in the Cerrado: Its Impact on Communities of Caterpillars on Two Species of Erythroxylum

**Authors:** Cintia Lepesqueur; Helena C. Morais; Ivone Rezende Diniz

**Journal:** Psyche

(2012)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2012/101767

---
## Abstract
Among the mechanisms that influence herbivorous insects, fires, a very frequent historical phenomenon in the cerrado, appear to be an important modifying influence on lepidopteran communities. The purpose of this study was to compare the richness, abundance, frequency, and composition of species of caterpillars in two adjacent areas of cerrado sensu stricto, one recently burned and one unburned since 1994, on the experimental farm “Fazenda Água Limpa” (FAL) (15°55′S and 47°55′W), DF, Brazil. Caterpillars were surveyed on two plant species of the genus Erythroxylum: E. deciduum A. St.-Hil. and E. tortuosum Mart. (Erythroxylaceae). We inspected a total of 4,196 plants in both areas, and 972 caterpillars were found on 13.3% of these plants. The number of plants with caterpillars (frequency) differed significantly between the areas. The results indicate that recent and accidental fires have a positive effect on the abundance of caterpillars up to one year postfire, increase the frequency of caterpillars associated with Erythroxylum species in the cerrado, and do not affect the richness of caterpillars on these plants. Moreover, the fires change the species composition of caterpillars by promoting an increase in rare or opportunistic species.
---
## Body
## 1. Introduction
Systems represented by the associations of plants and insects include more than one-half of the world’s multicellular species. The impacts of disturbances, anthropogenic or otherwise, affect the characteristics of communities of herbivorous insects in any biome worldwide [1]. There is strong evidence that these disturbances result in complex changes in the interactions between plants and herbivores [2]. Fires affect communities of herbivorous insects and provide opportunities for changes in species richness, abundance, and species composition in space and time [3]. Among herbivores, Lepidoptera can serve as good indicators of environmental changes caused by these disturbances in certain habitats [4].

Fires in the cerrado are a natural phenomenon of recognized ecological importance [5] and occur during the dry season, from May to September [6, 7]. The effects of fire on the structure, composition, and diversity of plants in the cerrado are far more extensively documented [8–12] than the effects on the fauna [13–15]. Knowledge of the effects of fire on insect herbivores and their natural enemies is even more limited [3, 16, 17].

The general literature on the responses of insects to fire, in comparison with the responses to other forms of management in open habitats, indicates that a significant decrease of insects occurs soon after a fire. The magnitude of the decrease is related to the degree of exposure to flames and to the mobility of the insect [18]. In the cerrado, a very rapid and vigorous regrowth of vegetation occurs [19], and this regrowth may favor an increase in the abundance of herbivores. The caterpillar community in the cerrado is species rich, and the abundance of most species is low but highly variable throughout the year [20, 21], due primarily to the climate variability that characterizes the two seasons (dry and wet) in the region. This pattern has also been observed for herbivorous insects in New Guinea; it is characteristic of herbivorous insect communities in general and is also typical of tropical regions [22]. Among the mechanisms that influence these herbivorous insect community patterns, fires, a very frequent historical phenomenon in the cerrado, appear to be an important modifying influence on lepidopteran communities.

The objective of this study was to compare the richness, relative abundance, frequency, and species composition of caterpillars between two cerrado areas, one recently burned and one unburned since 1994. The study hypothesis is that the richness, relative abundance, frequency, and species composition of the caterpillars on the host plants vary between recently burned areas and areas without recent burning (used as a control). We predict that the abundance and species richness of caterpillars will increase significantly in a recently burned area as a result of the intense regrowth of vegetation in the postfire environment [19]. The postfire environment differs greatly from the prefire environment because of the higher phenological synchrony of plants and because of changes in microclimate resulting from increased exposure to the sun.
## 2. Methodology
External folivorous caterpillars were surveyed on two plant species, Erythroxylum deciduum A. St.-Hil. and E. tortuosum Mart. (Erythroxylaceae), in two adjacent areas of cerrado sensu stricto on the experimental farm “Fazenda Água Limpa” (FAL) (15°55′S and 47°55′W), DF, Brazil. Both plant species were abundant and of similar size in the burned and unburned areas. This system, including only two plant species in the genus and their caterpillars, was chosen for study due to the need for simplification in the analysis and reduction of variables. This choice also reflected the ease of collection and identification and the prior knowledge of the system in the protected areas of the cerrado. The two plant species occur at high densities in the cerrado region, and their lepidopteran fauna is known from previous studies in unburned areas [20, 23]. An accidental fire affected the entire area in 1994, and the area suffered another accidental fire on August 31, 2005. The area burned in 1994 was viewed as a control, and the area burned in 2005 was considered recently burned. Data were collected from September 2005 through August 2006.

In both study areas (recently burned and control), external folivorous caterpillars were collected weekly from the foliage of 50 individuals of each of the two species of plants. All caterpillars were collected, photographed, numbered as morphospecies, and individually reared in the laboratory in plastic pots (except for gregarious caterpillars), with leaves of the host plant as food. The adults obtained from laboratory rearing were, as far as possible, identified and deposited in the Entomological Collection, Departamento de Zoologia, Universidade de Brasilia.

A binomial test of two proportions was applied with a significance level of 0.05 to evaluate whether the proportion of plants with caterpillars (relative abundance and species richness) differed consistently between the areas [24]; a minimal sketch of this test is given below. Species rarefaction curves were constructed to analyze the species richness of caterpillars in each area [25]. EcoSim 7.0 software was used to construct these curves based on 1000 replications [26].

The Shannon-Wiener index (H′), Simpson index (D), and Berger-Parker index (Dbp) were used to compare the diversity and dominance of the community of caterpillars on Erythroxylum in the two study areas. The indices were obtained with DivEs 2.0 software [27]. The Jaccard similarity index was also applied to evaluate the degree of similarity of the species composition of the two communities. If the Jaccard index is equal to one (B=0 and C=0), all species are shared between the two communities; if it is near 0, few if any species are shared.
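For concreteness, the two-proportion comparison can be illustrated with a minimal Python sketch of the pooled two-proportion z-test; this is our illustration of the standard test, not the authors' exact software workflow, and the counts plugged in are the plant-frequency figures reported in the Results (Table 1).

```python
from math import sqrt, erf

def two_proportion_z(successes1, n1, successes2, n2):
    """Pooled two-proportion z-test (a binomial test of two proportions)."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-tailed P value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p1, p2, z, p_value

# Plants with caterpillars: control 226/2065, burned 333/2131 (Table 1).
p1, p2, z, p = two_proportion_z(226, 2065, 333, 2131)
print(f"p1={p1:.2f}, p2={p2:.2f}, Z={z:.2f}, P={p:.2g}")
# Reproduces the reported comparison: p1=0.11, p2=0.16, Z=-4.46, P<0.001.
```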
## 3. Results
We inspected a total of 4,196 plants, with similar numbers in both areas (Table 1). A total of 972 caterpillars were found on 13.3% of the plants inspected. The number of plants with caterpillars (frequency) differed significantly between areas (p1=0.11; p2=0.16; Z=-4.46; P<0.001): the probability of finding a plant with a caterpillar was smaller in the control area (one in nine plants inspected) than in the burned area (one in six plants). The relative abundance of caterpillars also differed significantly between areas (p1=0.17, p2=0.30, Z=-9.69, P<0.001); almost twice as many caterpillars were found in the burned area as in the control area (Table 1).
Table 1

Number of plants with caterpillars, abundance, and richness of caterpillars on two species of Erythroxylum in two areas of cerrado sensu stricto in the FAL (burned and control areas) from September 2005 to August 2006.

| Variables | Control (%) | Burned (%) | Total |
| --- | --- | --- | --- |
| Inspected plants | 2,065 (49.2) | 2,131 (50.8) | 4,196 |
| Plants with caterpillars | 226 (10.9) | 333 (15.6) | 559 |
| Abundance of caterpillars | 346 (35.6) | 626 (64.4) | 972 |
| Richness of caterpillars | 29 (59.0) | 36 (74.0) | 47* |
*Species richness is not the sum of the richness of the two areas because some species occur in both areas.

Forty-seven species or morphospecies (hereafter treated as species) of caterpillars were recorded, belonging to at least 15 families (two species belonged to unidentified families). The burned area had 36 species, compared with 29 species in the control area (Table 1). However, this difference in species richness between the areas was not significant (p1=0.08; p2=0.06; Z=1.57; P>0.05). Even after adjustment by the rarefaction method to a common basis of an equal number of caterpillars in both areas (n=346), the species richness did not differ, and the estimated number of species varied between 24 and 32 (Table 2; Figure 1).
Table 2

Diversity of caterpillars on two species of Erythroxylum in two areas of cerrado sensu stricto in the FAL (recently burned and control) from September 2005 to August 2006: number of caterpillars, species richness, estimated species richness through rarefaction in the control area (n=346, 95% confidence interval), dominant species and dominance observed in both areas, estimated dominance by rarefaction in the control area (n=346, 95% confidence interval), diversity index (H′), and dominance indices (D and Dbp).

| | Control area | Burned area |
| --- | --- | --- |
| Number of caterpillars | 346 | 626 |
| Observed species richness | 29 | 36 |
| Estimated richness (rarefaction, n=346) | — | 24–32 |
| Dominant species | Antaeotricha sp. | Antaeotricha sp. |
| Observed dominance | 29.8% | 34.5% |
| Expected dominance | — | 31.2–37.9% |
| Shannon-Wiener diversity (H′) | 1.01 | 0.89 |
| Simpson dominance (D) | 0.16 | 0.21 |
| Berger-Parker dominance (Dbp) | 0.30 | 0.35 |

Figure 1
Rarefaction curves of caterpillar species of the control area (line with circles) and the burned area (line with stars) relative to the number of individuals, estimated from 1000 randomizations of the sample order, in cerrado sensu stricto in the FAL from September 2005 to August 2006. The dotted lines indicate 95% confidence intervals.

The value of dominance was higher in the burned area (34.5%) than in the control area (29.8%) (Table 2). Likewise, the dominance for the burned area, estimated by rarefaction, was between 31.2% and 37.9%, significantly higher than the value estimated for the control area on a common basis of 346 caterpillars in both areas (Table 2). These results are also consistent with the dominance index values D and Dbp, which were higher in the burned area. The diversity index H′ was higher in the control area (Table 2).

An unidentified species of Antaeotricha (Elachistidae) was dominant, with 29.8% and 34.5% of the individuals found in the control and burned areas, respectively. Ten species recorded in the control area showed intermediate dominance, between 1.2 and 7.5%, whereas six species showed intermediate dominance in the burned area, with values between 1.1 and 8.0%. The proportion of rare species, those represented by less than 1% of all caterpillars, was significantly higher (p1=0.55, p2=0.75, Z=-1.68, P<0.05) in the burned area (n=27) than in the control area (n=16).

The similarity between the study areas was low (Sj = 0.38), even on a monthly basis, with January (Sj = 0.70) and June (Sj = 0.62) being the sole exceptions (Table 3). Of the 47 species recorded, 38.3% (n=18 species) were common to the two areas (Table 4), and 25.5% of the species (n=12) were restricted to the control area. The species restricted to the control area included the gregarious moth Hylesia schuessleri Strand, 1934 (Saturniidae) and the solitary Dalcerina tijucana (Schaus, 1892) (Dalceridae), both dietary generalists (Table 4). Approximately 40% of the species (n=18) were found only in the burned area. These species included Fregela semiluna (Walker, 1854) (Arctiidae), a generalist species, and Eloria subapicalis (Walker, 1855) (Noctuidae), a dietary specialist. The effects of the fire appear to be more evident for Limacodidae, as five of the eight species of this family found in the survey occurred exclusively in the control area. Certain species, however, appear to benefit from the effects of fire, for example, three species of Noctuidae found exclusively in the burned area: Cydosia mimica (Walker, 1866), Cydosia punctistriga (Schauss, 1904), and Noctuidae sp. The five most abundant species (more than 15 individuals per area) were found in both areas and are apparently restricted to the Erythroxylaceae in the region (Table 4).
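As a quick companion to the index values in Table 2, the following Python sketch shows the textbook forms of the four indices used in this study; it assumes H′ in log10 (which matches the magnitude of the reported values) and is not the exact DivEs 2.0 implementation. The abundance list is a toy example, not the survey data.

```python
from math import log10

def shannon_wiener(abundances):
    """H' = -sum(p_i * log10(p_i)); log base assumed, DivEs may differ."""
    n = sum(abundances)
    return -sum(a / n * log10(a / n) for a in abundances if a > 0)

def simpson(abundances):
    """Simpson dominance D = sum(p_i^2); higher means more dominated."""
    n = sum(abundances)
    return sum((a / n) ** 2 for a in abundances)

def berger_parker(abundances):
    """Dbp = proportion of individuals belonging to the commonest species."""
    return max(abundances) / sum(abundances)

def jaccard(area1_species, area2_species):
    """Sj = shared species / species recorded in either area."""
    a, b = set(area1_species), set(area2_species)
    return len(a & b) / len(a | b)

toy = [30, 20, 10, 5, 5, 3, 2, 1, 1, 1]  # hypothetical abundances
print(round(shannon_wiener(toy), 2), round(simpson(toy), 2),
      round(berger_parker(toy), 2))
print(jaccard({"A", "B", "C"}, {"B", "C", "D", "E"}))  # 2 shared / 5 total = 0.4
```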
Table 3

Abundance of caterpillars and Jaccard similarity index between the two areas of cerrado sensu stricto in the FAL (recently burned and control) from September 2005 to August 2006, based on caterpillars found on two species of Erythroxylum.

| Months | Control area | Burned area | Jaccard index |
| --- | --- | --- | --- |
| Sep | 16 | 0 | 0.00 |
| Oct | 12 | 17 | 0.22 |
| Nov | 3 | 7 | 0.20 |
| Dec | 12 | 30 | 0.29 |
| Jan | 31 | 40 | 0.70 |
| Feb | 16 | 27 | 0.50 |
| Mar | 12 | 33 | 0.27 |
| Apr | 26 | 42 | 0.25 |
| May | 132 | 242 | 0.33 |
| Jun | 51 | 144 | 0.62 |
| Jul | 26 | 37 | 0.29 |
| Aug | 9 | 7 | 0.00 |
| Total | 346 | 626 | 0.38 |

Table 4
Families and species of caterpillars found on two species of Erythroxylum in burned and control areas of cerrado in the FAL from September 2005 to August 2006 (NI = no information about diet breadth; polyphagous = feeds on species from two or more families of plants; restricted = feeds only on species of Erythroxylaceae).

| Family | Species | Control area | Burned area | Diet breadth |
| --- | --- | --- | --- | --- |
| Arctiidae | Fregela semiluna (Walker, 1854) | 0 | 4 | Polyphagous |
| | Paracles sp. | 6 | 2 | Polyphagous |
| Dalceridae | Acraga infusa (Schauss, 1905) | 4 | 2 | Polyphagous |
| | Acraga sp. 1 | 0 | 1 | NI |
| | Acraga sp. 2 | 0 | 2 | NI |
| | Dalceridae sp. | 0 | 1 | NI |
| | Dalcerina tijucana (Schauss, 1892) | 1 | 0 | Polyphagous |
| Elachistidae | Antaeotricha sp.* | 103 | 216 | Restricted |
| | Timocratica melanocosta (Becker, 1982) | 2 | 3 | Polyphagous |
| Gelechiidae | Dichomeris sp. 1 | 1 | 10 | Restricted |
| | Dichomeris sp. 2 | 22 | 6 | Polyphagous |
| | Dichomeris sp. 3* | 26 | 160 | Restricted |
| | Dichomeris sp. 4 | 3 | 8 | Polyphagous |
| | Dichomeris spp. (two species)* | 68 | 84 | Restricted |
| | Gelechiidae sp.* | 44 | 50 | Restricted |
| Geometridae | Cyclomia mopsaria (Guenée, 1857)* | 16 | 24 | Restricted |
| | Geometridae sp. 1 | 3 | 0 | Restricted |
| | Geometridae sp. 2 | 0 | 1 | Restricted |
| | Stenalcidia sp. 1 | 0 | 5 | NI |
| | Stenalcidia sp. 2 | 1 | 0 | Restricted |
| Limacodidae | Limacodidae sp. 1 | 0 | 1 | Polyphagous |
| | Limacodidae sp. 2 | 0 | 1 | NI |
| | Limacodidae sp. 3 | 1 | 0 | NI |
| | Limacodidae sp. 4 | 2 | 0 | NI |
| | Limacodidae sp. 5 | 2 | 0 | NI |
| | Miresa clarissa (Stoll, 1790) | 0 | 1 | Polyphagous |
| | Platyprosterna perpectinata (Dyar, 1905) | 5 | 0 | Polyphagous |
| | Semyra incisa (Walker, 1855) | 2 | 1 | Polyphagous |
| Megalopygidae | Megalopyge albicollis (Schauss, 1900) | 0 | 1 | Polyphagous |
| | Megalopyge braulio Schauss, 1924 | 0 | 1 | Polyphagous |
| | Norape sp. | 4 | 3 | Polyphagous |
| | Podalia annulipes (Boisduval, 1833) | 0 | 1 | Polyphagous |
| Noctuidae | Cydosia mimica (Walker, 1866) | 0 | 1 | Restricted |
| | Cydosia punctistriga (Schauss, 1904) | 0 | 1 | NI |
| | Eloria subapicalis (Walker, 1855) | 0 | 7 | Restricted |
| | Noctuidae sp. | 0 | 1 | Restricted |
| Notodontidae | Heterocampa sp. | 7 | 12 | Polyphagous |
| Oecophoridae | Inga haemataula (Meyrick, 1911) | 6 | 1 | Polyphagous |
| | Inga phaeocrossa (Meyrick, 1912) | 1 | 0 | Polyphagous |
| Pyralidae | Carthara abrupta (Zeller, 1881) | 12 | 3 | Polyphagous |
| Riodinidae | Emesis sp. | 1 | 0 | Polyphagous |
| | Hallonympha paucipuncta (Spitz, 1930) | 0 | 1 | Polyphagous |
| Saturniidae | Hylesia schuessleri Strand, 1934 | 1 | 0 | Polyphagous |
| Tortricidae | Platynota rostrana (Walker, 1863) | 0 | 3 | Polyphagous |
| Urodidae | Urodus sp. | 0 | 5 | Restricted |
| Unidentified | sp. 1 | 1 | 0 | NI |
| | sp. 2 | 1 | 1 | NI |
*Indicates the five commonest species.

No caterpillars were found on species of Erythroxylum until one month after the fire (Table 3). However, the relative abundance of caterpillars was higher in the burned area in all of the following months. Up to 12 months after the occurrence of the fire, the caterpillar relative abundance in the burned area remained higher than that found in the control area (Figure 2). The temporal occupation of the species of Erythroxylum by caterpillars resulted in a pattern whose abundance and richness gradually increased with sampling effort and showed a greater increase during the dry season, specifically during May and June (Figure 2).

Figure 2
Cumulative number of caterpillars (bars) and species (lines) in two areas of cerrado sensu stricto in the FAL (recently burned and control) from September 2005 to August 2006.
## 4. Discussion
The sporadic and accidental fires in restricted areas of the cerrado may act to renew the vegetation [19], allowing plant species to reoccupy sites more rapidly. Several studies in tropical forests and in the cerrado have shown the importance of sprouting as a mechanism of postfire regeneration of shrub and tree species [28–32]. The new foliage that results from sprouting attracts a variety of herbivores.

In the cerrado, a low frequency of caterpillars on host plants is a common feature [20, 33–35]. However, the recent fire in the cerrado study area produced a 4.7% increase in the frequency of caterpillars on plants of Erythroxylum. The reason for this increase may be that fire benefits herbivores by increasing the availability of resources: the regrowth of plants makes resources abundant because many new leaves are synchronously produced.

Although species richness did not differ between areas, the higher dominance observed in the burned area suggests a higher diversity in the control area. The most interesting feature of this system is the increase of rare species in the burned area. This increase may result from intense regrowth, which may produce new oviposition sites and new environments for these species. At the same time, nearby areas were available to act as a source for recolonization [17]. However, the rarefaction curves did not reach an asymptote. In fact, previous studies [23, 36] indicate that caterpillar species not found in our surveys occur on the two species of Erythroxylum that were examined. These additional species include Erynnis funeralis (Scudder & Burgess, 1870) (Hesperiidae), Phobetron hipparchia (Cramer, [1777]) (Limacodidae), and Automeris bilinea Walker, 1855 (Saturniidae). These species are all polyphagous and could be present on other species of host plants.

The variation in the abundance of insects in the cerrado occurs regardless of the passage of fire and remains seasonal [37]. However, the mortality caused by fire produces an immediate reduction in population size: even one month after the fire, no caterpillars were found on the plants surveyed. Moreover, the caterpillar abundance on both species of plants during all the subsequent months was higher in the area disturbed by the recent fire. Similar results have been found for adults of certain insect orders, such as Coleoptera, Hemiptera, Hymenoptera, and Lepidoptera, in the cerrado of Brasilia [37]. The return to previous levels of abundance depends on the order to which the insect belongs and ranges from two to more than thirteen months after the occurrence of the fire [3]. Up to 12 months after the occurrence of fire, the abundance of caterpillars associated with the Erythroxylum species studied here had not returned to a level comparable with that observed in the control area.

Research conducted in the same region on the community of caterpillars associated with Byrsonima (Malpighiaceae) showed that if fire in the cerrado recurs every two years during the dry season, the results are quite different [38] from those previously discussed. In this case, the abundance and species richness of caterpillars in areas with frequent fires were markedly less than in areas protected from fire for more than 30 years. These results are consistent with other previous reports that fire reduces the populations of caterpillars [39] and may cause local extinction of some species [40].
However, these results from areas with frequent fires contrast with the results found when fires are accidental and sporadic, as in the case of this study. Even with smaller losses than those caused by recurrent fires, the recent accidental fire dramatically increased the abundance of caterpillars and, as a result, the attacks on plants in the postfire period, just at the time at which most synchronous leaf production in the cerrado occurs. For this reason, this process may produce extensive damage to vegetation and may harm biodiversity conservation in the region. Furthermore, a scheme of recurrent burns during several years in the same area results in the biological and physicochemical degradation of the soil and thus in the reduction of aerial biomass [41].

Although we did not replicate each treatment, our results reflect the effect of fire, as we have followed the changes in communities of caterpillars on various plant species for several years in areas protected from fire [21, 23, 38, 42]; in addition, we have surveyed caterpillars on other plant species in postfire conditions, with similar results (unpublished data). Furthermore, some studies suggest that replicating treatments is impossible for natural phenomena that occur on a large scale, as in the case of burning [43]. Thus, the results of this study indicate that the recent accidental fire had the following effects on the external folivorous caterpillars: (a) it killed eggs and larvae at first but had a positive effect on the relative abundance of caterpillars up to one year postfire, (b) it increased the frequency of caterpillars associated with two Erythroxylum species in the cerrado, (c) it did not affect the richness of caterpillars on these plants, and (d) it changed the caterpillar species composition because the effects of the fire promoted increases of rare or opportunistic species.
---
*Source: 101767-2012-11-27.xml*
# The Content Variation of Four Active Components inAmygdalus persica L. during Different Harvesting Periods
**Authors:** Juanjuan Zhang; Xudong Chen; Zhenhua Yin; Qinfeng Guo; Baocheng Yang; Miaoqing Feng; Xiao Li; Lin Chen; Wei Zhang; Wenyi Kang
**Journal:** Journal of Food Quality
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1017674
---
## Abstract
In this study, a quantitative method for determining the contents of rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol in Amygdalus persica L. flowers during different harvest periods was established, in order to investigate how their contents vary and to determine the optimal harvesting period. The determination was performed on an XTERRA MS C18 column with a mobile phase consisting of 0.1% formic acid aqueous solution and acetonitrile (gradient elution) at a flow rate of 1.0 mL/min. In combination with other validation data, including precision, stability, and recovery tests, this method demonstrated good reliability and sensitivity. The results showed that the contents of rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol in A. persica flowers differed considerably among harvest periods, and the content in samples at the early blooming stage was the highest. The method is simple, accurate, and rapid for determining the contents of the four active ingredients in A. persica flowers.
---
## Body
## 1. Introduction
Amygdalus persica L., belonging to the Rosaceae, is widely distributed in most regions of China and is traditionally used to relax the bowels, promote diuresis, and reduce swelling [1]. A. persica also has cosmetic and health-care effects and is widely used in the fields of food and medicine [2, 3]. A. persica flowers mainly contain flavonoids, polyphenols, polysaccharides, and other chemical components, which have antioxidant and antibacterial activities [4, 5]. In addition, Zhang et al. found that the major volatile constituents of A. persica flowers were linolenic alcohol, n-hexadecanoic acid, cyclohexane, and octadecanoic acid [6]. Li et al. found that polyphenols from A. persica flowers can significantly increase 5-hydroxytryptamine and norepinephrine levels in the hippocampus of mice with chronic depression [7]. Liu et al. studied the inhibitory effect and kinetics of a methanol extract of A. persica flowers on tyrosinase; the results showed that the extract could effectively inhibit both the monophenolase and diphenolase activities of tyrosinase [8].

At present, although A. persica can be used as a medicine with a definite curative effect, it is used only in traditional Chinese medicine prescriptions [9–11]. A. persica is not included in the Chinese Pharmacopoeia or local standards, and there is no unified standard for its quality control. In our previous study, we investigated the chemical constituents and coagulation activity of A. persica flowers, and we found that rutin and kaempferol possessed significant procoagulant activity, while chlorogenic acid butyl ester had anticoagulant activity in vitro [12, 13]. Therefore, the contents of rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol in A. persica flowers at different harvest periods were simultaneously determined for the first time in this study. The content differences and dynamic changes of the four effective components were compared and analyzed in order to understand their dynamic accumulation across growing periods and to provide a theoretical basis for strictly controlling quality, harvesting at the right time, and rationally developing and utilizing A. persica flowers.
## 2. Materials and Reagents
### 2.1. Instruments
All the analyses were performed on a Waters 2695 liquid chromatography system (Waters, Milford, USA) equipped with a vacuum degasser, a quaternary solvent delivery system, an autosampler, a column compartment, and a 2489 UV/visible detector. The KQ-250DB CNC ultrasonic cleaner was purchased from Kunshan Ultrasonic Instrument Co., Ltd. (Kunshan, China). The AG285 electronic analytical balance was purchased from Mettler Toledo (Switzerland).
### 2.2. Chemicals and Reagents
Rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol were provided by the Henan Engineering Research Center for Comprehensive Utilization of Edible and Medicinal Plant Resources and Huanghe Science and Technology College, and their purities were at least 98%. Deionized water was prepared using a Milli-Q ultrapure water purifier (ELGA, Labwater, Marlow, UK). Acetonitrile and methanol were purchased from Thermo Fisher Technologies Ltd. All other reagents were of analytical grade.
### 2.3. Plant Material
A. persica flower samples (No. S1–S14) were collected in the medicinal botanical garden of Henan University and identified by Professor Changqin Li of Henan University. The voucher specimens were deposited in the Institute of Natural Medicine of Huanghe Science and Technology College. Information on the samples of A. persica flowers is given in Table 1.
Table 1

Information on samples of A. persica flowers.

| Lot no. | Collecting time |
| --- | --- |
| S1 | 2020-03-20 |
| S2 | 2020-03-21 |
| S3 | 2020-03-22 |
| S4 | 2020-03-23 |
| S5 | 2020-03-24 |
| S6 | 2020-03-25 |
| S7 | 2020-03-26 |
| S8 | 2020-03-27 |
| S9 | 2020-03-28 |
| S10 | 2020-03-29 |
| S11 | 2020-03-30 |
| S12 | 2020-03-31 |
| S13 | 2020-04-01 |
| S14 | 2020-04-02 |
## 3. Methods and Results
### 3.1. Chromatographic Conditions
All analyses were performed on a Waters e2695 HPLC system (Waters, Milford, USA). The chromatographic separation was achieved using an XTERRA MS C18 column (4.6 mm × 250 mm, 5µm) (Waters, Milford, USA), with the column oven temperature maintained at 25°C. The mobile phase consisted of 0.1% formic acid solution (Solvent A) and acetonitrile (Solvent B) and employed gradient elution at a flow rate of 1.0 mL/min. The elution program was designed as follows: from 0 to 15 min, 5–23% B; from 15 to 35 min, 23–30% B; from 35 to 40 min, 30–40% B. After a 5 min equilibration period, the samples were used for injection. The sample injection volume was 10 µL. The column effluent was monitored at 360 nm.
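To make the gradient program concrete, here is a minimal Python sketch (an illustration of ours, not vendor software) that encodes the elution table above and interpolates %B at any time point; the linear-ramp assumption is ours, since the paper does not state the curve type.

```python
# Gradient elution program from Section 3.1: (start_min, end_min, %B_start, %B_end)
GRADIENT = [
    (0, 15, 5, 23),
    (15, 35, 23, 30),
    (35, 40, 30, 40),
]

def percent_b(t_min: float) -> float:
    """Return %B (acetonitrile) at time t, assuming linear ramps between set points."""
    for t0, t1, b0, b1 in GRADIENT:
        if t0 <= t_min <= t1:
            return b0 + (b1 - b0) * (t_min - t0) / (t1 - t0)
    raise ValueError("time outside the 0-40 min program")

for t in (0, 10, 20, 37.5, 40):
    print(f"t = {t:>5} min -> {percent_b(t):.1f}% B")  # e.g. 37.5 min -> 35.0% B
```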
### 3.2. Preparation of Solutions
#### 3.2.1. Standard Solutions
Standard stock solutions of rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol were prepared in methanol at concentrations of 170.4, 118.0, 155.0, and 185.6 μg/mL, respectively. Appropriate amounts of the standard stock solutions were precisely measured into a 10 mL volumetric flask, and methanol was added to volume. The concentrations of rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol in the mixed standard solution were 34.1, 35.4, 46.5, and 37.1 μg/mL, respectively.
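The mixed standard is a straightforward C1·V1 = C2·V2 dilution into a 10 mL flask; the following sketch back-calculates the stock volumes implied by the stated concentrations (our arithmetic, not a procedure from the paper).

```python
STOCKS = {  # stock concentration (ug/mL) -> target in the mixed standard (ug/mL)
    "rutin": (170.4, 34.1),
    "5-O-coumaroylquinic acid methyl ester": (118.0, 35.4),
    "chlorogenic acid butyl ester": (155.0, 46.5),
    "kaempferol": (185.6, 37.1),
}
FLASK_ML = 10.0

for name, (c_stock, c_mix) in STOCKS.items():
    v = c_mix * FLASK_ML / c_stock  # C1*V1 = C2*V2
    print(f"{name}: {v:.2f} mL of stock into {FLASK_ML:.0f} mL")
# rutin 2.00 mL, methyl ester 3.00 mL, butyl ester 3.00 mL, kaempferol 2.00 mL
```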
#### 3.2.2. Sample Solutions
A. persica flowers were dried in the shade, triturated with a pulverizer, and passed through a 40-mesh sieve. Exactly 1.0 g of the powdered flowers was placed into a stoppered conical flask, and 25 mL of methanol was added. After weighing, the solution was treated with ultrasound for 30 min, cooled, and weighed again; the lost weight was made up with methanol. The solution was then shaken well, centrifuged, and filtered, and the filtrate was collected.

All solutions were stored at 4°C and filtered through 0.22 μm membrane filters before being injected into the HPLC system for analysis. Methanol was used as a blank control solution.
### 3.3. System Suitability
Standard solutions, a sample solution (No. S1), and the methanol blank control solution were injected and analyzed to assess system suitability under the chromatographic conditions in Section 3.1. The results showed that methanol as a solvent did not interfere with the detection. The theoretical plate numbers were all more than 3000, the chromatographic peaks of each component reached baseline separation, and the resolution between adjacent chromatographic peaks was greater than 1.5. All results were within acceptable ranges (Figure 1).

Figure 1
HPLC chromatograms of the reference substances (a), sample S8 (b), and the methanol blank control (c): (1) rutin; (2) 5-O-coumaroylquinic acid methyl ester; (3) chlorogenic acid butyl ester; (4) kaempferol.
### 3.4. Method Validation
In the validation of the analytical method used for the quantification of rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol in A. persica flowers, the following parameters were determined: linearity, stability, precision, repeatability, and recovery.
#### 3.4.1. Investigation of Linear Relations
Aliquots of 0.2 mL, 0.6 mL, 0.8 mL, 1.2 mL, 1.5 mL, and 2.0 mL of the mixed standard stock solution were accurately measured into separate 20 mL volumetric flasks, and methanol was added to volume to obtain standard solutions of various concentrations. The standard solutions were analyzed under the chromatographic and mobile phase conditions described in Section 3.1, and the peak areas were recorded. Standard curves of the investigated components were established by plotting the peak areas (Y) versus the concentration of each standard compound (X). The limits of detection (LOD) under the chromatographic conditions were determined as the lowest detectable concentration with a signal-to-noise ratio (S/N) greater than three, and the limits of quantification (LOQ) as the lowest concentration with an S/N greater than ten. All the calibration curves of the four analytes showed good linearity, with correlation coefficients higher than 0.9990. The results are shown in Table 2; a short sketch of how these calibration lines are used follows the table.
Table 2

Regression equation and linear range of four active ingredients.

| Components | Regression equation | R² | Linear range (μg) |
| --- | --- | --- | --- |
| Rutin | y = 33772x − 13189 | 0.9998 | 0.0682–0.5456 |
| 5-O-Coumaroylquinic acid methyl ester | y = 79565x − 1669 | 0.9996 | 0.0708–0.5664 |
| Chlorogenic acid butyl ester | y = 32987x − 31325 | 0.9999 | 0.0930–0.7440 |
| Kaempferol | y = 52143x − 65703 | 0.9995 | 0.0742–0.5936 |
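Given a calibration line y = mx + b from Table 2, converting a measured peak area back to an analyte amount is a one-line inversion. The sketch below encodes the four fitted equations and flags values outside the validated linear range; the helper names are ours, and we interpret the Table 2 ranges (given in μg) as injected amounts.

```python
# (slope, intercept, linear range in ug) from Table 2: y = slope*x + intercept
CALIBRATIONS = {
    "rutin": (33772, -13189, (0.0682, 0.5456)),
    "5-O-coumaroylquinic acid methyl ester": (79565, -1669, (0.0708, 0.5664)),
    "chlorogenic acid butyl ester": (32987, -31325, (0.0930, 0.7440)),
    "kaempferol": (52143, -65703, (0.0742, 0.5936)),
}

def amount_ug(component: str, peak_area: float) -> float:
    """Invert the calibration line: x = (y - intercept) / slope."""
    slope, intercept, (lo, hi) = CALIBRATIONS[component]
    x = (peak_area - intercept) / slope
    if not lo <= x <= hi:
        raise ValueError(f"{x:.4f} ug is outside the validated range {lo}-{hi} ug")
    return x

# Hypothetical peak area, for illustration only:
print(f"{amount_ug('rutin', 3000):.4f} ug")  # ~0.4794 ug
```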
#### 3.4.2. Precision Test
Intraday and interday variations were utilized to evaluate the precision of the developed method. The mixed standard solution was injected six times and analyzed under the chromatographic and mobile phase conditions described in Section 3.1, and the peak areas were recorded. The RSDs of the peak areas of rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol were 0.78%, 1.15%, 0.94%, and 1.13%, respectively, indicating that the liquid chromatograph had good precision.
#### 3.4.3. Repeatability Test
1.0 g of A. persica flowers (No. S1) was accurately weighed in six replicates, prepared into solutions according to the method described in Section 3.2.2, and analyzed under the chromatographic and mobile phase conditions described in Section 3.1 to determine the peak area of each sample. The peak area RSD values of rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol were 0.83%, 1.15%, 1.06%, and 0.94%, respectively, indicating good repeatability.
#### 3.4.4. Stability Test
1.0 g of A. persica flowers (No. S1) was accurately weighed and prepared into a solution according to the method described in Section 3.2.2. The sample was analyzed under the chromatographic and mobile phase conditions described in Section 3.1 at 0 h, 2 h, 4 h, 8 h, 12 h, and 24 h to determine the peak areas. The RSD values of the peak areas of rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol were 1.22%, 1.05%, 0.95%, and 1.14%, respectively, suggesting that the sample was stable within 24 h.
#### 3.4.5. Recovery Test
The recovery experiment was performed by adding the individual standards at 50%, 100%, and 150% of the known content to samples of A. persica flowers. 1.0 g of A. persica flowers (No. S1) was accurately weighed in nine replicates, prepared into solutions according to the method described in Section 3.2.2, and analyzed under the chromatographic and mobile phase conditions described in Section 3.1 to determine the peak area of each sample. The results are shown in Table 3; the method had good accuracy (see the short check after the table).
Table 3

Test results of sample recovery (n = 9).

| Compounds | Mass (sample)/g | Mass (original)/μg | Mass (added)/μg | Mass (found)/μg | Recovery/% | Average recovery/% | RSD/% |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Rutin | 1.0006 | 25.98 | 12.99 | 38.94 | 99.77 | 99.64 | 0.43 |
| | 1.0025 | 26.00 | 12.99 | 38.93 | 99.54 | | |
| | 1.0017 | 25.98 | 12.99 | 38.98 | 100.08 | | |
| | 1.0015 | 25.97 | 25.98 | 51.94 | 99.96 | | |
| | 1.0014 | 25.97 | 25.98 | 51.67 | 98.92 | | |
| | 1.0024 | 26.00 | 25.98 | 51.94 | 99.85 | | |
| | 1.0019 | 25.98 | 38.97 | 64.79 | 99.59 | | |
| | 1.0035 | 26.02 | 38.97 | 65.01 | 100.05 | | |
| | 1.0027 | 26.00 | 38.97 | 64.58 | 99.00 | | |
| 5-O-Coumaroylquinic acid methyl ester | 1.0016 | 30.50 | 15.25 | 45.63 | 99.21 | 99.20 | 0.83 |
| | 1.0020 | 30.51 | 15.25 | 45.76 | 100.00 | | |
| | 1.0016 | 30.50 | 15.25 | 45.38 | 97.57 | | |
| | 1.0025 | 30.53 | 30.50 | 60.58 | 98.52 | | |
| | 1.0024 | 30.53 | 30.50 | 60.80 | 99.25 | | |
| | 1.0037 | 30.57 | 30.50 | 61.03 | 99.87 | | |
| | 1.0018 | 30.51 | 45.75 | 76.35 | 100.20 | | |
| | 1.0014 | 30.50 | 45.75 | 76.00 | 99.45 | | |
| | 1.0023 | 30.52 | 45.75 | 75.70 | 98.75 | | |
| Chlorogenic acid butyl ester | 1.0028 | 81.32 | 40.66 | 122.40 | 101.03 | 98.96 | 1.33 |
| | 1.0016 | 81.22 | 40.66 | 121.68 | 99.51 | | |
| | 1.0026 | 81.30 | 40.66 | 121.70 | 99.36 | | |
| | 1.0017 | 81.23 | 81.32 | 160.30 | 97.23 | | |
| | 1.0033 | 81.36 | 81.32 | 161.67 | 98.76 | | |
| | 1.0025 | 81.30 | 81.32 | 161.86 | 99.07 | | |
| | 1.0019 | 81.25 | 121.98 | 203.78 | 100.45 | | |
| | 1.0020 | 81.26 | 121.98 | 199.83 | 97.20 | | |
| | 1.0027 | 81.31 | 121.98 | 200.93 | 98.07 | | |
| Kaempferol | 1.0012 | 19.09 | 9.55 | 28.57 | 99.27 | 99.35 | 1.03 |
| | 1.0007 | 19.08 | 9.55 | 28.46 | 98.22 | | |
| | 1.0014 | 19.10 | 9.55 | 28.58 | 99.27 | | |
| | 1.0026 | 19.12 | 19.10 | 38.48 | 101.36 | | |
| | 1.0009 | 19.09 | 19.10 | 38.03 | 99.16 | | |
| | 1.0026 | 19.12 | 19.10 | 37.87 | 98.17 | | |
| | 1.0013 | 19.10 | 28.65 | 47.57 | 99.37 | | |
| | 1.0023 | 19.11 | 28.65 | 47.43 | 98.85 | | |
| | 1.0030 | 19.13 | 28.65 | 47.92 | 100.49 | | |
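Each recovery value in Table 3 is simply (found − original) / added × 100, and the RSDs quoted throughout Section 3.4 are relative standard deviations of replicate measurements. A minimal sketch (with helper names of ours), checked against the rutin rows of Table 3:

```python
from statistics import mean, stdev

def recovery_percent(original_ug: float, added_ug: float, found_ug: float) -> float:
    """Spiked-sample recovery: (found - original) / added * 100."""
    return (found_ug - original_ug) / added_ug * 100

def rsd_percent(values) -> float:
    """Relative standard deviation: sample SD as a percentage of the mean."""
    return stdev(values) / mean(values) * 100

print(f"{recovery_percent(25.98, 12.99, 38.94):.2f}%")  # 99.77%, first rutin row
rutin = [99.77, 99.54, 100.08, 99.96, 98.92, 99.85, 99.59, 100.05, 99.00]
print(f"mean {mean(rutin):.2f}%, RSD {rsd_percent(rutin):.2f}%")  # 99.64%, 0.43%
```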
### 3.5. Determination of Sample Content
The established analytical method was successfully applied to the simultaneous analysis of the four active ingredients (rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol) in A. persica flower samples (No. S1–S14). The contents of the four analytes in the samples were quantified, and the results are listed in Table 4 as the mean content of three replicate analyses (n = 3).
Table 4

The content of four active ingredients of samples (n = 3, μg/g).

| Lot no. | Rutin | 5-O-Coumaroylquinic acid methyl ester | Chlorogenic acid butyl ester | Kaempferol |
| --- | --- | --- | --- | --- |
| S1 | 25.934 | 30.454 | 81.094 | 19.071 |
| S2 | 26.265 | 31.028 | 78.144 | 6.992 |
| S3 | 24.155 | 24.415 | 59.358 | 21.551 |
| S4 | 21.859 | 20.329 | 54.414 | 6.953 |
| S5 | 20.818 | 24.675 | 53.332 | 9.632 |
| S6 | 19.071 | 21.741 | 41.730 | 9.073 |
| S7 | 19.570 | 25.772 | 37.930 | 11.504 |
| S8 | 22.226 | 24.801 | 33.860 | 12.051 |
| S9 | 21.782 | 20.802 | 33.860 | 6.956 |
| S10 | 17.720 | 21.640 | 43.103 | 12.625 |
| S11 | 21.595 | 14.709 | 31.262 | 10.694 |
| S12 | 21.978 | 14.078 | 26.347 | 13.187 |
| S13 | 16.203 | 13.243 | 19.352 | 7.980 |
| S14 | 20.725 | 20.065 | 26.463 | 12.066 |

As shown in Figure 2, the contents of rutin, 5-O-coumaroylquinic acid methyl ester, and kaempferol in A. persica flowers during different harvest periods fluctuated slightly, and all three components showed a downward trend. The content of chlorogenic acid butyl ester fluctuated more widely, and its overall trend was also downward. The contents of rutin and 5-O-coumaroylquinic acid methyl ester were highest in flowers harvested on March 21 (26.265 μg/g and 31.028 μg/g, respectively) and lowest on April 1 (16.203 μg/g and 13.243 μg/g, respectively). The content of chlorogenic acid butyl ester was highest in flowers harvested on March 20 (81.094 μg/g) and declined to its lowest value on April 1 (19.352 μg/g). The content of kaempferol was highest in flowers harvested on March 22 (21.551 μg/g) and showed a marked downward trend, reaching its lowest value on March 28 (6.956 μg/g). The total content of the four active ingredients was highest at the beginning of flowering and trended downward, reaching its lowest value on April 1 (a short script reproducing this summary follows the figure caption below).

Figure 2
Dynamic change of the contents of four active ingredients in A. persica flowers during different harvesting periods.
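For readers who want to reproduce the trend summary from Table 4, a short convenience script (ours, not part of the paper's method) that recovers the reported per-component maxima and the early-flowering peak:

```python
# Contents (ug/g) transcribed from Table 4, in the order:
# (rutin, 5-O-coumaroylquinic acid methyl ester,
#  chlorogenic acid butyl ester, kaempferol)
CONTENTS = {
    "S1": (25.934, 30.454, 81.094, 19.071),
    "S2": (26.265, 31.028, 78.144, 6.992),
    "S3": (24.155, 24.415, 59.358, 21.551),
    "S4": (21.859, 20.329, 54.414, 6.953),
    "S5": (20.818, 24.675, 53.332, 9.632),
    "S6": (19.071, 21.741, 41.730, 9.073),
    "S7": (19.570, 25.772, 37.930, 11.504),
    "S8": (22.226, 24.801, 33.860, 12.051),
    "S9": (21.782, 20.802, 33.860, 6.956),
    "S10": (17.720, 21.640, 43.103, 12.625),
    "S11": (21.595, 14.709, 31.262, 10.694),
    "S12": (21.978, 14.078, 26.347, 13.187),
    "S13": (16.203, 13.243, 19.352, 7.980),
    "S14": (20.725, 20.065, 26.463, 12.066),
}
NAMES = ("rutin", "5-O-coumaroylquinic acid methyl ester",
         "chlorogenic acid butyl ester", "kaempferol")

totals = {lot: sum(values) for lot, values in CONTENTS.items()}
print("highest total:", max(totals, key=totals.get))  # S1 (March 20)
print("lowest total:", min(totals, key=totals.get))   # S13 (April 1)
for i, name in enumerate(NAMES):
    peak = max(CONTENTS, key=lambda lot: CONTENTS[lot][i])
    print(f"{name}: peak in {peak} at {CONTENTS[peak][i]} ug/g")
```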
## 3.1. Chromatographic Conditions
All analyses were performed on a Waters e2695 HPLC system (Waters, Milford, USA). The chromatographic separation was achieved using an XTERRA MS C18 column (4.6 mm × 250 mm, 5µm) (Waters, Milford, USA), with the column oven temperature maintained at 25°C. The mobile phase consisted of 0.1% formic acid solution (Solvent A) and acetonitrile (Solvent B) and employed gradient elution at a flow rate of 1.0 mL/min. The elution program was designed as follows: from 0 to 15 min, 5–23% B; from 15 to 35 min, 23–30% B; from 35 to 40 min, 30–40% B. After a 5 min equilibration period, the samples were used for injection. The sample injection volume was 10 µL. The column effluent was monitored at 360 nm.
## 3.2. Preparation of Solutions
### 3.2.1. Standard Solutions
Standard stock solutions of rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol were prepared at the concentrations of 170.4, 118.0, 155.0, and 185.6μg/mL in methanol, respectively. Precisely measure the right amount of standard stock solutions placed in a 10 mL volumetric flask and add methanol to the constant volume. The concentrations of rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol of the mixed standard solution were 34.1, 35.4, 46.5, and 37.1 μg/mL, respectively.
### 3.2.2. Sample Solutions
A. persica flowers were dried in the shade, triturated with a pulverizer, and passed through a 40-mesh sieve. Accurately 1.0 g of A. persica flowers was put into a conical flask with plug and added with 25 mL of methanol. After weighing, the solution was treated by ultrasound for 30 min, cooled, and weighed again. The lost weight was complemented with methanol. The solution was shaken well, centrifuged, filtrated, and the filtrate was obtained.All solutions were stored at 4°C and filtered through 0.22μm membrane filters before being injected into the HPLC system for analysis. Methanol was used as a blank control solution.
## 3.2.1. Standard Solutions
Standard stock solutions of rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol were prepared at the concentrations of 170.4, 118.0, 155.0, and 185.6μg/mL in methanol, respectively. Precisely measure the right amount of standard stock solutions placed in a 10 mL volumetric flask and add methanol to the constant volume. The concentrations of rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol of the mixed standard solution were 34.1, 35.4, 46.5, and 37.1 μg/mL, respectively.
## 3.2.2. Sample Solutions
A. persica flowers were dried in the shade, triturated with a pulverizer, and passed through a 40-mesh sieve. Accurately 1.0 g of A. persica flowers was put into a conical flask with plug and added with 25 mL of methanol. After weighing, the solution was treated by ultrasound for 30 min, cooled, and weighed again. The lost weight was complemented with methanol. The solution was shaken well, centrifuged, filtrated, and the filtrate was obtained.All solutions were stored at 4°C and filtered through 0.22μm membrane filters before being injected into the HPLC system for analysis. Methanol was used as a blank control solution.
## 3.3. System Suitability
Standard solutions, sample solutions (No: S1), and methanol blank control solutions were taken for sample injection and determination to analyze system suitability according to chromatographic conditions in Section3.1. The result showed that methanol as a solvent had no interference with the detection. The theoretical plate numbers were all more than 3000, the chromatographic peaks of each component reached the baseline separation, and the separation degree from the adjacent chromatographic peaks was greater than 1.5. All results were obtained within acceptable ranges (Figure 1).Figure 1
HPLC chromatograms of reference substances (a), samples S8 (b), and methanol blank control (c). (1) rutin; (2) 5-O-coumaroylquinic acid methyl ester; (3) chlorogenic acid butyl ester; (4) kaempferol.
(a)(b)(c)
## 3.4. Method Validation
In the validation of the analytical method used for the quantification of rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol ofA. persica flowers, the following parameters were determined: linearity, stability, precision, repeatability, and recovery.
### 3.4.1. Investigation of Linear Relations
0.2 mL, 0.6 mL, 0.8 mL, 1.2 mL, 1.5 mL, and 2.0 mL of mixed standard stock solutions were, respectively, accurately absorbed and placed in a 20 mL volumetric flask. Methanol was added at a constant volume to obtain standard solutions of various concentrations. The standard solutions were detected according to the chromatographic conditions and mobile phase conditions described in Section3.1, and the peak area was recorded. Standard curves of the investigated components were established by plotting the peak areas (Y) versus the concentration of each standard compound (X). The limits of detection (LOD) under the chromatographic conditions were determined at the lowest detectable concentration with a signal-to-noise ratio (S/N) greater than three, and the limits of quantification (LOQ) were determined at the lowest concentration with an S/N greater than ten. All the calibration curves of the four analytes were gained with a good linear relationship, and the correlation coefficients of all the calibration curves were found to be higher than 0.9990. The results are shown in Table 2.Table 2
Regression equation and linear range of four active ingredients.
ComponentsRegression equationR2Linear range/(μg)Rutiny = 33772x − 131890.99980.0682∼0.54565-O-Coumaroylquinic acid methyl estery = 79565x − 16690.99960.0708∼0.5664Chlorogenic acid butyl estery = 32987x − 313250.99990.0930∼0.7440Kaempferoly = 52143x − 657030.99950.0742∼0.5936
### 3.4.2. Precision Test
Intraday and interday variations were utilized to evaluate the precision of the developed method. The mixed standard solution was repeatedly sampled six times and detected according to the chromatographic conditions and mobile phase conditions described in Section3.1, and the peak area was recorded. The results showed that the RSD of rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol peak area were 0.78%, 1.15%, 0.94%, and 1.13%, which indicated that the liquid chromatograph had good precision.
### 3.4.3. Repeatability Test
1.0 g ofA. persica flowers (No: S1) was accurately weighed in six replicates, prepared into solutions according to the methods described in Section 3.2.2, and detected according to the chromatographic conditions and mobile phase conditions described in Section 3.1 to determine the peak area of each sample. The results showed that the peak area RSD values of rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol were 0.83%, 1.15%, 1.06, and 0.94%, which indicated good repeatability.
### 3.4.4. Stability Test
1.0 g ofA. persica flowers (No: S1) was accurately weighed and prepared into solutions according to the methods described in Section 3.2.2. Samples were detected according to the chromatographic conditions and mobile phase conditions described in Section 3.1 at 0 h, 2 h, 4 h, 8 h, 12 h, and 24 h to determine the peak areas of each sample. The RSD values of the peak areas were 1.22%, 1.05%, 0.95%, and 1.14%, respectively. This result suggested that the sample was stable within 24 h.
### 3.4.5. Recovery Test
The recovery experiment was performed by adding 50%, 100%, and 150% of individual standards to a known concentration ofA. persica flowers. 1.0 g of A. persica flowers (No: S1) was accurately weighed in nine replicates, prepared into solutions according to the methods described in Section 3.2.2, and detected according to the chromatographic conditions and mobile phase conditions described in Section 3.1, to determine the peak area of each sample. The results are shown in Table 3, and the method had good accuracy.Table 3
Test results of sample recovery (n = 9).
CompoundsMass (sample)/gMass (original)/μgMass (added)/μgMass (found)/μgRecovery/%Average recovery/%RSD/%Rutin1.000625.9812.9938.9499.7799.640.431.002526.0012.9938.9399.541.001725.9812.9938.98100.081.001525.9725.9851.9499.961.001425.9725.9851.6798.921.002426.0025.9851.9499.851.001925.9838.9764.7999.591.003526.0238.9765.01100.051.002726.0038.9764.5899.005-O-Coumaroylquinic acid methyl ester1.001630.5015.2545.6399.2199.200.831.002030.5115.2545.76100.001.001630.5015.2545.3897.571.002530.5330.560.5898.521.002430.5330.560.8099.251.003730.5730.561.0399.871.001830.5145.7576.35100.201.001430.5045.7576.0099.451.002330.5245.7575.7098.75Chlorogenic acid butyl ester1.002881.3240.66122.40101.0398.961.331.001681.2240.66121.6899.511.002681.3040.66121.7099.361.001781.2381.32160.3097.231.003381.3681.32161.6798.761.002581.3081.32161.8699.071.001981.25121.98203.78100.451.002081.26121.98199.8397.201.002781.31121.98200.9398.07Kaempferol1.001219.099.5528.5799.2799.351.031.000719.089.5528.4698.221.001419.109.5528.5899.271.002619.1219.138.48101.361.000919.0919.138.0399.161.002619.1219.137.8798.171.001319.1028.6547.5799.371.002319.1128.6547.4398.851.003019.1328.6547.92100.49
## 3.4.1. Investigation of Linear Relations
0.2 mL, 0.6 mL, 0.8 mL, 1.2 mL, 1.5 mL, and 2.0 mL of mixed standard stock solutions were, respectively, accurately absorbed and placed in a 20 mL volumetric flask. Methanol was added at a constant volume to obtain standard solutions of various concentrations. The standard solutions were detected according to the chromatographic conditions and mobile phase conditions described in Section3.1, and the peak area was recorded. Standard curves of the investigated components were established by plotting the peak areas (Y) versus the concentration of each standard compound (X). The limits of detection (LOD) under the chromatographic conditions were determined at the lowest detectable concentration with a signal-to-noise ratio (S/N) greater than three, and the limits of quantification (LOQ) were determined at the lowest concentration with an S/N greater than ten. All the calibration curves of the four analytes were gained with a good linear relationship, and the correlation coefficients of all the calibration curves were found to be higher than 0.9990. The results are shown in Table 2.Table 2
Regression equation and linear range of four active ingredients.
ComponentsRegression equationR2Linear range/(μg)Rutiny = 33772x − 131890.99980.0682∼0.54565-O-Coumaroylquinic acid methyl estery = 79565x − 16690.99960.0708∼0.5664Chlorogenic acid butyl estery = 32987x − 313250.99990.0930∼0.7440Kaempferoly = 52143x − 657030.99950.0742∼0.5936
## 3.4.2. Precision Test
Intraday and interday variations were utilized to evaluate the precision of the developed method. The mixed standard solution was repeatedly sampled six times and detected according to the chromatographic conditions and mobile phase conditions described in Section3.1, and the peak area was recorded. The results showed that the RSD of rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol peak area were 0.78%, 1.15%, 0.94%, and 1.13%, which indicated that the liquid chromatograph had good precision.
## 3.4.3. Repeatability Test
1.0 g ofA. persica flowers (No: S1) was accurately weighed in six replicates, prepared into solutions according to the methods described in Section 3.2.2, and detected according to the chromatographic conditions and mobile phase conditions described in Section 3.1 to determine the peak area of each sample. The results showed that the peak area RSD values of rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol were 0.83%, 1.15%, 1.06, and 0.94%, which indicated good repeatability.
## 3.4.4. Stability Test
1.0 g ofA. persica flowers (No: S1) was accurately weighed and prepared into solutions according to the methods described in Section 3.2.2. Samples were detected according to the chromatographic conditions and mobile phase conditions described in Section 3.1 at 0 h, 2 h, 4 h, 8 h, 12 h, and 24 h to determine the peak areas of each sample. The RSD values of the peak areas were 1.22%, 1.05%, 0.95%, and 1.14%, respectively. This result suggested that the sample was stable within 24 h.
## 3.4.5. Recovery Test
The recovery experiment was performed by adding 50%, 100%, and 150% of individual standards to a known concentration ofA. persica flowers. 1.0 g of A. persica flowers (No: S1) was accurately weighed in nine replicates, prepared into solutions according to the methods described in Section 3.2.2, and detected according to the chromatographic conditions and mobile phase conditions described in Section 3.1, to determine the peak area of each sample. The results are shown in Table 3, and the method had good accuracy.Table 3
Test results of sample recovery (n = 9).
CompoundsMass (sample)/gMass (original)/μgMass (added)/μgMass (found)/μgRecovery/%Average recovery/%RSD/%Rutin1.000625.9812.9938.9499.7799.640.431.002526.0012.9938.9399.541.001725.9812.9938.98100.081.001525.9725.9851.9499.961.001425.9725.9851.6798.921.002426.0025.9851.9499.851.001925.9838.9764.7999.591.003526.0238.9765.01100.051.002726.0038.9764.5899.005-O-Coumaroylquinic acid methyl ester1.001630.5015.2545.6399.2199.200.831.002030.5115.2545.76100.001.001630.5015.2545.3897.571.002530.5330.560.5898.521.002430.5330.560.8099.251.003730.5730.561.0399.871.001830.5145.7576.35100.201.001430.5045.7576.0099.451.002330.5245.7575.7098.75Chlorogenic acid butyl ester1.002881.3240.66122.40101.0398.961.331.001681.2240.66121.6899.511.002681.3040.66121.7099.361.001781.2381.32160.3097.231.003381.3681.32161.6798.761.002581.3081.32161.8699.071.001981.25121.98203.78100.451.002081.26121.98199.8397.201.002781.31121.98200.9398.07Kaempferol1.001219.099.5528.5799.2799.351.031.000719.089.5528.4698.221.001419.109.5528.5899.271.002619.1219.138.48101.361.000919.0919.138.0399.161.002619.1219.137.8798.171.001319.1028.6547.5799.371.002319.1128.6547.4398.851.003019.1328.6547.92100.49
## 3.5. Determination of Sample Content
The established analytical method was successfully applied to the simultaneous analysis of the four active ingredients (rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol) ofA. persica flowers samples (No.: S1–S14). The contents of the four analytes in the samples were quantified, and the results are listed in Table 4 with the mean content of three replicated analyses (n = 3).Table 4
The content of four active ingredients of samples (n = 3, μg/g).
Lot no.Rutin5-O-Coumaroylquinic acid methyl esterChlorogenic acid butyl esterKaempferolS125.93430.45481.09419.071S226.26531.02878.1446.992S324.15524.41559.35821.551S421.85920.32954.4146.953S520.81824.67553.3329.632S619.07121.74141.7309.073S719.57025.77237.93011.504S822.22624.80133.86012.051S921.78220.80233.8606.956S1017.72021.64043.10312.625S1121.59514.70931.26210.694S1221.97814.07826.34713.187S1316.20313.24319.3527.980S1420.72520.06526.46312.066As shown in Figure2, the contents of rutin, 5-O-coumaroylquinic acid methyl ester, and kaempferol in the A. persica flowers during different harvest periods fluctuated slightly, and the three components showed a downward trend. The content of chlorogenic acid butyl ester fluctuates relatively large, and the overall trend was downward. The contents of rutin and 5-O-coumaroylquinic acid methyl ester in the A. persica flowers harvested on March 21 were the highest (26.265 μg/g and 31.028 μg/g, respectively) and reached the lowest on April 1 (16.203 μg/g and 13.243 μg/g, respectively). The content of chlorogenic acid butyl ester in A. persica flowers harvested on March 20 was the highest (81.094 μg/g) and showed a downward trend, reaching the lowest on April 1 (19.352 μg/g). The content of kaempferol in A. persica flowers harvested on March 22 was the highest (21.551 μg/g) and showed a significant downward trend, reaching the lowest on March 28 (6.956 μg/g). The total content of the four active ingredients in A. persica flowers during different harvest periods was the highest at the beginning of flowering, showing a downward trend, reaching the lowest on April 1.Figure 2
Dynamic change of the contents of four active ingredients in A. persica flowers during different harvesting periods.
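As an illustration only (no such code accompanies the original paper), the short Python sketch below recomputes the per-analyte extremes and per-lot totals discussed above directly from Table 4; the data layout and names are our own.

```python
# Recompute Table 4 extremes and totals (values in ug/g, from Table 4).
contents = {  # lot: (rutin, 5-O-CQME, chlorogenic acid butyl ester, kaempferol)
    "S1":  (25.934, 30.454, 81.094, 19.071), "S2":  (26.265, 31.028, 78.144, 6.992),
    "S3":  (24.155, 24.415, 59.358, 21.551), "S4":  (21.859, 20.329, 54.414, 6.953),
    "S5":  (20.818, 24.675, 53.332, 9.632),  "S6":  (19.071, 21.741, 41.730, 9.073),
    "S7":  (19.570, 25.772, 37.930, 11.504), "S8":  (22.226, 24.801, 33.860, 12.051),
    "S9":  (21.782, 20.802, 33.860, 6.956),  "S10": (17.720, 21.640, 43.103, 12.625),
    "S11": (21.595, 14.709, 31.262, 10.694), "S12": (21.978, 14.078, 26.347, 13.187),
    "S13": (16.203, 13.243, 19.352, 7.980),  "S14": (20.725, 20.065, 26.463, 12.066),
}

analytes = ["rutin", "5-O-coumaroylquinic acid methyl ester",
            "chlorogenic acid butyl ester", "kaempferol"]
for i, name in enumerate(analytes):
    hi = max(contents, key=lambda lot: contents[lot][i])
    lo = min(contents, key=lambda lot: contents[lot][i])
    print(f"{name}: highest in {hi} ({contents[hi][i]}), lowest in {lo} ({contents[lo][i]})")

# Total of the four analytes per lot; expected: highest S1 (early flowering,
# March 20), lowest S13 (April 1), matching the trend described in the text.
totals = {lot: sum(vals) for lot, vals in contents.items()}
print("highest total:", max(totals, key=totals.get),
      "lowest total:", min(totals, key=totals.get))
```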
## 4. Discussion
The ingredients of traditional Chinese medicine are complex, and its efficacy is often the result of the synergistic effect of many ingredients [14, 15]. If only one or two ingredients are used as quality evaluation indexes, it is difficult to reflect the true quality of a traditional Chinese medicine, whereas a multi-index evaluation can characterize its quality more comprehensively [16, 17]. Therefore, the determination of multiple ingredients has become the development trend in the quality evaluation of traditional Chinese medicine [18, 19]. A. persica flowers contain a variety of effective ingredients, of which kaempferol has antioxidant, anti-inflammatory, antitumor, and other activities [20–23]. Sun et al. found that, within a certain concentration range, rutin had an obvious protective effect on HUVECs injured by H2O2 and glucose, and that the mechanism was related to inhibition of intercellular adhesion molecule expression and regulation of NO and TNF-α production [24]. The anti-inflammatory and cytotoxic activities of rutin were determined by the Griess and CCK-8 methods, and the screening results demonstrated that rutin showed moderate NO inhibitory effects [25]. Combined with the previous study on the in vitro coagulation activity of A. persica flowers, rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol were selected as the quality control indexes of A. persica flowers, as together they reflect its quality more comprehensively.
### 4.1. Optimization of Extraction Methods
The main chemical components of A. persica flowers are flavonoids and phenolic acids, which are readily soluble in polar solvents, and the components and contents obtained differ with the polarity of the extraction solvent. Therefore, in the early stages of this study, three solvents of different polarity, namely, ultrapure water, methanol, and ethanol, were compared for extraction. Compared with water and ethanol, the methanol extract gave better peak shapes and the largest peak intensities, so methanol was selected as the best pure solvent for extracting A. persica flowers. Furthermore, the effects of different volume fractions of methanol (10%, 20%, 40%, 60%, 80%, and 100%) and different extraction methods (ultrasonic extraction [26] and reflux extraction [27]) on the extraction of the active components were compared by the single-factor method. The results showed that with 100% methanol as the solvent, the peak areas of rutin and the other three components were higher and the method was more stable. Ultrasonic extraction and reflux extraction performed similarly; considering simplicity of operation and repeatability of the method, ultrasonic extraction was selected. On this basis, the effects of different extraction times (30, 45, and 60 min) and different solid-liquid ratios (1.0 g : 10 mL, 1.0 g : 25 mL, and 1.0 g : 50 mL) on the extraction of rutin were also investigated. The extraction times gave similar results, so the shortest time, 30 min, was chosen. Among the solid-liquid ratios, 1.0 g : 25 mL gave a high extraction rate and better reflected the chemical profile of A. persica flowers. Therefore, the final conditions were ultrasonic extraction with 100% methanol for 30 min at a solid-liquid ratio of 1.0 g : 25 mL.
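As a hedged sketch of the single-factor screening logic described above, and not the authors' code, the snippet below selects the factor level with the highest mean peak area; the peak-area values are invented placeholders for illustration. Where two levels respond similarly, the simpler or cheaper level would be preferred, as the authors did for the 30 min extraction time and the 1.0 g : 25 mL ratio.

```python
# Single-factor screening sketch (illustrative only): vary one factor at a
# time, average the replicate peak areas per level, keep the best level.
def best_level(trials: dict[str, list[float]]) -> str:
    """Return the level whose replicate peak areas have the highest mean."""
    return max(trials, key=lambda lvl: sum(trials[lvl]) / len(trials[lvl]))

# Placeholder responses for the solid-liquid ratio factor (3 replicates each)
ratio_trials = {
    "1.0 g : 10 mL": [8.1e5, 8.0e5, 8.2e5],
    "1.0 g : 25 mL": [9.5e5, 9.4e5, 9.6e5],
    "1.0 g : 50 mL": [9.5e5, 9.4e5, 9.5e5],
}
print(best_level(ratio_trials))  # -> "1.0 g : 25 mL" with these placeholders
```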
### 4.2. Investigation of Chromatographic Conditions
According to references [28, 29], the four components were scanned over the wavelength range of 200–400 nm, and all four exhibited a good linear response at 360 nm. At this wavelength, interference from other components of the sample with the four analytes was minimal, and the baseline was more stable. This study also compared the separation achieved with two mobile phase systems, acetonitrile-water and acetonitrile-0.1% formic acid solution. With acetonitrile-water, the peak shapes and resolution of chlorogenic acid butyl ester and kaempferol were poor. With acetonitrile-0.1% formic acid solution, all four components, including kaempferol, were well separated, and the retention times were stable. Therefore, acetonitrile-0.1% formic acid solution was selected as the mobile phase.
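The gradient program given in Section 3.1 (0–15 min, 5–23% B; 15–35 min, 23–30% B; 35–40 min, 30–40% B) is piecewise linear in the acetonitrile fraction. The sketch below, our illustration rather than instrument software, interpolates %B at any time point in the run; solvent A is the 0.1% formic acid solution.

```python
# Piecewise-linear gradient of Section 3.1 as (time in min, %B) breakpoints.
SEGMENTS = [(0.0, 5.0), (15.0, 23.0), (35.0, 30.0), (40.0, 40.0)]

def percent_b(t: float) -> float:
    """Linearly interpolate the acetonitrile fraction %B at time t (min)."""
    for (t0, b0), (t1, b1) in zip(SEGMENTS, SEGMENTS[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    raise ValueError("time outside the 0-40 min program")

for t in (0, 10, 15, 25, 40):
    b = percent_b(t)
    print(f"t = {t:>4} min -> {b:.1f}% B / {100 - b:.1f}% A")
# e.g., t = 10 min gives 17.0% B; t = 25 min gives 26.5% B
```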
### 4.3. Effect of Harvesting Time on the Content of Active Ingredients
It is difficult to evaluate the overall quality of A. persica flowers as a medicinal material with a single component, so this study simultaneously measured the contents of four active ingredients in A. persica flowers for the first time. The contents of rutin, 5-O-coumaroylquinic acid methyl ester, chlorogenic acid butyl ester, and kaempferol reached their highest values at the beginning of flowering, then declined, reaching their lowest on April 1. The results indicate that the optimum harvesting period for A. persica flowers is the early flowering stage. The variation in the four components is thought to arise from differences in the origin, harvesting time, and environmental conditions of the raw plant material, as well as differences in temperature. Research on the chemical composition and pharmacological action of A. persica flowers during the early stage of anthesis therefore has important academic significance and application value.
## 5. Conclusions
A simple, rapid, and sensitive HPLC method for the determination of four active ingredients in A. persica flowers during different harvest periods was developed and validated. The results of the present study indicate that the early stage of anthesis is the optimum harvesting period for A. persica flowers. The method will provide a scientific basis for the quality control of A. persica.
---
*Source: 1017674-2022-09-19.xml*
# Ventricular Fibrillation following Varicella Zoster Myocarditis
**Authors:** Adam Ioannou; Irene Tsappa; Sofia Metaxa; Constantinos G. Missouris
**Journal:** Case Reports in Cardiology
(2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1017686
---
## Abstract
Varicella-zoster virus (VZV) infection can rarely lead to serious cardiac complications and life-threatening arrhythmias. We present a case of a 46-year-old male patient who developed VZV myocarditis and presented with recurrent syncopal episodes followed by a cardiac arrest. He had a further collapse eight years later, and cardiac magnetic resonance imaging (MRI) demonstrated mild mid-wall basal and inferolateral wall fibrosis. He was treated with an implantable cardioverter defibrillator (ICD) and re-presented two years later with ICD shocks; interrogation of the device revealed episodes of ventricular fibrillation. This case demonstrates the life-threatening long-term sequelae of VZV myocarditis in adults. We suggest that VZV myocarditis should be considered in all patients who present with a syncopal event after VZV infection. In these patients, ICD implantation is a potentially life-saving procedure.
---
## Body
## 1. Introduction
Infection with varicella-zoster virus (VZV) predominantly affects children and is, in most cases, a self-limiting and benign condition. However, in rare cases, it may lead to life-threatening cardiac complications [1]. We report a 46-year-old male patient who developed recurrent ventricular arrhythmias following the diagnosis of chicken pox.
## 2. Case Presentation
A 46-year-old male patient was first admitted to the emergency department of our hospital 12 years ago with recurrent episodes of collapse and a documented ventricular fibrillation (VF) arrest requiring emergency cardioversion by the paramedic team. The patient gave a 5-day history of general malaise and fever and a 24-hour history of an itchy vesicular rash. He had no relevant past medical history, but both his children had been diagnosed with chicken pox two weeks earlier.

On examination, he was apyrexial, with a normal cardiovascular examination. He had multiple widespread erythematous vesicular lesions, approximately 2 mm across and some weeping, involving all limbs and the trunk. The haematological and biochemical investigations were normal, apart from a C-reactive protein (CRP) of 70 mg/L (normal ≤ 5 mg/L). The resting electrocardiogram (ECG) was within normal limits with a QTc interval of 403 msec. Transthoracic echocardiography confirmed normal biventricular structure and function with no regional wall motion abnormalities and normal cardiac valves. Intravenous amiodarone was administered for 24 hours, and no further arrhythmias were detected. In addition, he was treated with intravenous acyclovir for 10 days. He made an uneventful recovery and took his own discharge a few days after the acute presentation.

He re-presented 8 years later with a further syncopal event lasting less than a minute. He was on treatment with citalopram 20 mg od, prescribed by his general practitioner for anxiety. He had a normal clinical examination, and the biochemical investigations were within normal limits. The resting ECG confirmed sinus rhythm with a normal QTc interval of 430 msec. Cardiac magnetic resonance imaging (MRI) confirmed normal biventricular function but also revealed mild mid-wall myocardial enhancement at the basal inferior and inferolateral walls, consistent with myocarditis, with no evidence of inducible ischaemia (Figures 1 and 2). He was reviewed by the electrophysiology consultant; no findings suggestive of a channelopathy were identified, and the patient was treated with nebivolol 10 mg od and flecainide 100 mg bd. Following discussion in the multidisciplinary cardiology meeting, an implantable cardioverter defibrillator (ICD) was implanted (Boston Scientific ENERGEN F142).

Figure 1
Short-axis view of the cardiac MRI demonstrating a normal left ventricular size and features consistent with myocarditis (arrow).Figure 2
Cardiac MRI inversion recovery images after contrast injection revealing mild mid-wall enhancement in the basal inferior and inferolateral walls consistent with myocarditis (arrow).Two years later, he developed 2 further episodes of VF leading to ICD activation and shock. On examination, his heart rate was 67 beats per minute and regular, and the supine blood pressure was 163/58 mm Hg. He had normal heart sounds and no signs of heart failure. The routine full blood count and biochemistry were normal (serum potassium 4.2 mmol/L and magnesium 0.76 mmol/L), and the high sensitivity troponin was normal (3 ng/L). ICD interrogation revealed two appropriate shocks for VF (Figure3). The resting ECG confirmed a normal QTc duration of 420 msec. The transthoracic echocardiogram confirmed normal biventricular structure and function with an ejection fraction of 55–60%. There were no regional wall motion abnormalities, and the valves were structurally normal.Figure 3
Interrogation of the dual-chamber defibrillator revealing an episode of ventricular fibrillation, followed by a shock.The patient continued to experience unifocal ventricular ectopic beats and short runs of nonsustained ventricular tachycardia (NSVT). As a result, flecainide was stopped, and he was treated with intravenous amiodarone. He made an uneventful recovery and was discharged home on nebivolol 10 mg od and amiodarone 200 mg od. Ablation therapy was not considered as an option as all the shocks were the result of VF and not VT.
## 3. Discussion
VZV infection leading to chicken pox is a common condition, with the majority of cases occurring in childhood, and is usually a benign and self-limiting disease. However, rarely, the infection may lead to life-threatening sequelae including encephalitis, myocarditis, and pneumonitis. These complications are more common in adults [1, 2].

More than 20 viruses have been shown to cause myocarditis in humans. Varicella myocarditis was first described in 1953, based upon a study of seven necropsy findings [3], and in 1977, Fiddler et al. [4] reported a 10-year-old child who developed syncopal events caused by VT and VF after contact with his grandfather, who had had a VZV infection 2 weeks earlier.

It is believed that the virus has a direct cytotoxic effect on the cardiac myocytes, causing myocytolysis, necrosis, and oedema. In the acute phase, there is marked focal interstitial myocarditis with a collection of mononuclear cells, lymphocytes and occasional plasma cells, neutrophils, and eosinophils. Autoimmune reactions are also believed to take place. Following the acute inflammatory response, the resultant fibrosis and scarring lead to electrical conduction block and reentry circuits, predisposing patients to life-threatening ventricular arrhythmias. Furthermore, VZV myocarditis may mimic acute myocardial infarction, and in some patients, it may lead to congestive heart failure [4–6].

In these patients, antiarrhythmic drugs such as flecainide, which act on sodium ion channels to delay myocyte recovery from excitation, also slow conduction through the scar tissue and may increase the risk of ventricular arrhythmias. Radiofrequency ablation of scar tissue resulting from myocarditis is often challenging, as the scar tissue is often intramural or epicardial. In all these patients, the implantation of an ICD is required to terminate the ventricular arrhythmia, either by the delivery of a high-voltage shock or by a burst of rapid ventricular pacing to interrupt the reentry circuit. Treatment with the antiviral agent acyclovir is only beneficial in the early stages of viral replication within the myocardium, which coincides with the appearance of the skin lesions [5, 7].

Our case clearly demonstrates that VZV infection in an adult patient may lead to life-threatening ventricular arrhythmias not only in the acute phase but long after the initial presentation. To our knowledge, this is the first case report of ventricular arrhythmias developing many years after the acute presentation of a VZV infection. We suggest, therefore, that in patients who present with a syncopal event after VZV infection, a high index of suspicion is required to investigate for potentially life-threatening ventricular arrhythmias. In these patients, the use of beta-blockade therapy and/or amiodarone, intravenous acyclovir, and ICD implantation is likely to improve the long-term outcome and prognosis.
---
*Source: 1017686-2017-11-23.xml*
# Posterior Reversible Encephalopathy Syndrome in a Patient with Hemorrhagic Fever with Renal Syndrome
**Authors:** Ermira Muco; Amela Hasa; Arben Rroji; Arta Kushi; Edmond Puca; Dhimiter Kraja
**Journal:** Case Reports in Infectious Diseases
(2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1017689
---
## Abstract
We report a case of hantavirus infection in a 45-year-old male who was admitted to our clinic of infectious diseases with fever, myalgia, vomiting, nausea, headache, and abdominal pain. The physical findings included hepatomegaly, splenomegaly, rash, and conjunctival injection. Eight days before the onset of complaints, the patient had been cutting trees in the mountains. Acute renal failure was observed, with oliguria and an increase in serum creatinine and blood urea nitrogen. Urinalysis showed albuminuria and hematuria. Elevated amylase, lipase, and liver enzyme levels, a low serum albumin level, and thrombocytopenia were observed. A positive ELISA test for hantavirus IgM/IgG antibodies confirmed hemorrhagic fever with renal syndrome. On the third day of hospitalization, the patient had seizures. Unenhanced head computed tomography (CT) performed after the seizures showed bilateral subcortical hypodensities within the frontal, parietal, and occipital regions, corresponding to areas of increased signal intensity on magnetic resonance imaging (MRI) and consistent with the cerebral edema of posterior reversible encephalopathy syndrome (PRES). Treatment consisted of supportive therapy. The patient underwent another head MRI with contrast enhancement after 2 months, which was normal.
---
## Body
## 1. Introduction
Hantaviruses are enveloped RNA viruses and members of the Bunyaviridae family. Hantavirus infection in humans is considered a spillover infection that causes two types of serious illness, hemorrhagic fever with renal syndrome (HFRS) and hantavirus pulmonary syndrome (HPS) [1]. People can become infected when they touch mouse or rat urine, droppings, or nesting materials that contain the virus and then touch their eyes, nose, or mouth. Hantavirus infection affects 30,000 individuals annually and tends to occur among people living in lower socioeconomic housing environments and those enjoying the outdoors [2]. The species that cause HFRS include the Hantaan, Dobrava-Belgrade, Saaremaa, Seoul, Puumala, and other hantaviruses, which are found in Europe, Asia, and Africa [3]. Hantaan and Dobrava virus infections usually cause severe symptoms, while Seoul, Saaremaa, and Puumala virus infections are usually more moderate [4]. Cases of hemorrhagic fever with renal syndrome in Albania are caused by Dobrava strains [5], and Albania, as part of the Balkans, lies within an endemic area [6]. Posterior reversible encephalopathy syndrome (PRES) was first described in 1996 and is a clinico-radiological syndrome characterized by symptoms including headache, seizures, altered consciousness, and visual disturbances [7]. Infections are one of the clinical conditions associated with PRES.
## 2. Case Report
Our patient is a 45-year-old white male who was admitted to the clinic of infectious diseases with fever (39°C), myalgia, vomiting, nausea, headache, and abdominal pain. The physical findings included hepatomegaly (19 cm), splenomegaly (16 cm), rash, and conjunctival injection. Eight days before the onset of complaints, the patient had been cutting trees in the forest. He did not have a history of travel to another HFRS-endemic area. Acute renal failure was observed in the laboratory tests, with an increase in serum creatinine and blood urea nitrogen. Urinalysis showed albuminuria (9.9 g) and hematuria (35–40 cells/field). The initial full blood count revealed thrombocytopenia (91,000/mm³). Elevated amylase, lipase, aspartate aminotransferase (AST), and alanine aminotransferase (ALT) levels and a low serum albumin level were observed, as shown in Table 1. C-reactive protein (CRP) was 11.4 mg/L. Oliguria (300 ml/day) was also present. On the third day of hospitalization, the patient had seizures. He was transferred to the Intensive Care Unit because of his worsening condition. The patient refused a lumbar puncture. The unenhanced head CT performed urgently after the seizures showed bilateral subcortical hypodensities within the frontal, parietal, and occipital regions (Figure 1). A head MRI with intravenous contrast showed hyperintensities in the affected regions on T2 and FLAIR sequences, without diffusion restriction and without microhemorrhages on T2∗ sequences (Figures 2 and 3). The radiological consultants considered these pathological images to represent edematous regions consistent with posterior reversible encephalopathy syndrome. The electroencephalogram revealed abnormal electrical activity of the brain: "intermittent bilateral 7-8 Hz slow waves over the left temporal and frontal lobes on a background of low-amplitude registration." HFRS was confirmed from a blood sample drawn two days after hospitalization, with a positive ELISA test for hantavirus IgM and IgG antibodies. The first blood sample showed a hantavirus IgM antibody titer of 8.2 (reference 0.9–1.1) and an IgG antibody titer of 6.7 (reference 0.9–1.1). A second blood sample, taken after two weeks, showed a hantavirus IgM antibody titer of 7.1 and an IgG antibody titer of 6.9. Serological tests for Leptospira, HBV (anti-HBc antibody and HBsAg antigen), and HCV (anti-HCV antibody) were negative. All the laboratory test results during hospitalization are shown in Table 1. Treatment consisted of supportive therapy with ceftriaxone, corticosteroids, antiepileptics, saline infusions, electrolytes, antipyretics, and oxygen therapy. The patient was discharged after 16 days. He underwent another head MRI after 2 months, which was normal, with no residual cerebral hyperintensities (Figures 2 and 3).
Table 1
Laboratory data of biochemical and clinical tests.

| Laboratory data | Reference range | D0 | D1 | D2 | D3 | D5 | D7 | D14 |
|---|---|---|---|---|---|---|---|---|
| AST | 0–35 U/L | 154 | 113 | 162 | 87 | 77 | 77 | 69 |
| ALT | 0–45 U/L | 97 | 90 | 133 | 97 | 87 | 127 | 150 |
| Bilirubin | <1.2 mg/dL | 0.3 | 0.5 | 0.6 | 0.4 | 0.6 | 0.5 | — |
| Alkaline phosphatase | 32–117 U/L | 46 | 47 | 46 | 42 | 66 | 66 | 77 |
| Amylase | 28–100 U/L | — | — | 153 | 110 | — | — | 195 |
| Lipase | 21–67 U/L | — | — | 224 | 146 | — | — | 259 |
| Gamma GT | 0–55 U/L | 72 | 109 | 102 | 98 | 122 | 137 | 199 |
| Lactate dehydrogenase | 125–250 U/L | 435 | 334 | 438 | 597 | 338 | 255 | 212 |
| Albumin | 3.5–5.2 g/dL | 2.8 | 2.8 | 2.8 | 2.4 | 3.1 | 3.1 | 3.6 |
| Total protein | 6–8.3 g/dL | 5.3 | 5.2 | 5.3 | 4.9 | 6 | 6.1 | 6.8 |
| Serum creatinine | 0.1–1.3 mg/dL | 6.9 | 7.5 | 7.6 | 4.8 | 3.2 | 1.7 | 0.9 |
| Blood urea nitrogen | <43 mg/dL | 193 | 187 | 236 | 169 | 104 | 67 | 36 |
| Creatine kinase | 0–171 U/L | 53 | 75 | 219 | 642 | 1902 | 435 | 75 |
| Glucose level | 74–106 mg/dL | 196 | 163 | 142 | 147 | 162 | 145 | 98 |
| Platelet count | 150–390 × 10³/mm³ | 91 | 94 | 133 | 212 | 294 | 261 | 178 |
| White blood cells | 4–10 × 10³/mm³ | 11.2 | 9.1 | 9.2 | 9.8 | 12.6 | 9.7 | 10 |
| Hematocrit | 35–50% | 40 | 41.7 | 35.9 | 40 | 41.2 | 41.8 | 43.7 |

Figure 1
CT scan images: bilateral subcortical hypodensities in the frontal, occipital, and parietal regions.

Figure 2
Axial T2 images: bilateral hyperintense zones in the parietal and occipital regions. Comparative pictures (lower) showing total disappearance of the lesions after 2 months.

Figure 3
Axial FLAIR images: bilateral hyperintense zones in the frontal, occipital, and parietal regions. Comparative pictures (lower) showing total disappearance of the lesions after 2 months (resolving vasogenic edema).
## 3. Discussion
Hantaviruses have a worldwide distribution and are broadly split into the New World hantaviruses, which include those causing HPS, and the Old World hantaviruses (including the prototype Hantaan virus (HTNV)), which are associated with a different disease, hemorrhagic fever with renal syndrome (HFRS) [8]. An epidemic seasonal predominance has been observed in autumn/winter [9]. Our case presented in summer; summer occurrence of the disease has also been described in other articles [10]. Forestry workers and farmers have an increased risk of exposure, and our patient had indeed been working in the forest cutting trees.

The incubation period of HFRS has not been precisely determined, but it is most frequently around two weeks. Patients with HPS typically present with a short febrile prodrome of 3–5 days [11]. In addition to fever and myalgias, early symptoms include headache, chills, dizziness, nonproductive cough, nausea, vomiting, and other gastrointestinal symptoms. Malaise, diarrhea, and lightheadedness are reported by approximately half of all patients, with less frequent reports of arthralgia, back pain, and abdominal pain [1]. Conjunctival, cerebral, and gastrointestinal (GI) hemorrhages occur in about one-third of patients [4]. The basic pathologic and pathophysiologic disorder in HFRS is capillary damage (vasculitis) [12]. Increased vascular permeability and a decreased platelet count are the hallmarks of hantavirus-associated diseases [1].

The diagnosis of hantavirus infection in humans is based on clinical and epidemiological information as well as laboratory tests. Diagnosis rests on serology (ELISA IgM and IgG tests were used for the detection of specific IgM and IgG antibodies), PCR, immunochemistry, and virus culture [13]. Hantavirus PCR testing could not be performed in Albania.

Posterior reversible encephalopathy syndrome (PRES) is a neurotoxic state whose mechanism is not well understood but is thought to be related to altered integrity of the blood-brain barrier. A hallmark of the pathogenesis is increased vascular permeability, which seems to be due to endothelial cell dysfunction [14]. In PRES, vasogenic edema most commonly involves the occipital and parietal regions (∼95% of cases), usually symmetrically. PRES can also be found in a nonposterior distribution, mainly in watershed areas, including the frontal, inferior temporal, cerebellar, and brainstem regions. PRES presents with rapid onset of symptoms including headache, seizures, altered consciousness, and visual disturbances [15–17]. In our case, the patient presented with seizures after three days of hospitalization. Infection may be an important cause of PRES.

Treatment of hantavirus infections is mainly supportive and involves intensive medical care. Our patient was discharged from hospital in good condition, and the head MRI performed after 2 months was normal. If promptly recognized and treated, the clinical syndrome usually resolves within a week, and the changes seen on MRI resolve over days to weeks.
## 4. Conclusion
In summary, hantavirus infection should be considered in the differential diagnosis of renal failure, especially in patients from endemic areas with a typical history. The diagnosis is established with laboratory techniques. In cases of neurological symptoms, CT and MRI of the head are useful to detect PRES. Treatment is mainly supportive and involves intensive medical care.
---
*Source: 1017689-2020-02-29.xml*
# An On-Chip Planar Inverted-F Antenna at 38 GHz for 5G Communication Applications
**Authors:** Syed Muhammad Ammar Ali
**Journal:** International Journal of Antennas and Propagation
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1017816
---
## Abstract
This paper presents an on-chip planar inverted-F antenna (PIFA) implemented in TSMC 180 nm CMOS process technology. The antenna operates at the 5G millimeter-wave center frequency of 38 GHz. The ultrathick metal (UTM) layer of the technology is utilized to implement the on-chip antenna (OCA). The OCA is positioned close to the edge of the microchip to improve the gain performance of the antenna. The open end of the antenna is folded to develop a top-loaded PIFA structure, yielding better 50 Ω impedance matching and wider bandwidth. On-wafer measurements are conducted with a Cascade Microtech Summit 11K probe station and a ZVA-50 vector network analyzer to measure the return loss and gain of the fabricated on-chip antenna. The measurements are performed after placing the fabricated OCA over a 3D-printed plastic slab to minimize reflections from the metallic chuck of the probe station. The measurement results show that the fabricated on-chip PIFA achieves a minimum return loss of 14.8 dB and a gain of 0.7 dBi at the center frequency of 38 GHz. To the best of the authors' knowledge, the presented OCA is the first on-chip PIFA designed, fabricated, and tested at the 5G millimeter-wave frequency of 38 GHz.
---
## Body
## 1. Introduction
Millimeter-wave frequencies have recently gained enormous attention among research circles because of their capability to provide high data rates for 5G communication systems. Because millimeter-wave (mmW) frequencies exhibit relatively small wavelengths, it becomes feasible to design antennas on microchips using standard CMOS processes. Millimeter-wave on-chip antennas (OCAs) offer a high level of integration with RF front-end circuitry, an interface free of external interconnects, and low fabrication cost. The on-chip antenna (OCA) can overcome the last barrier to realizing a truly integrated RF system [1]. A potential candidate for next-era cellular communications at millimeter-wave frequencies is 38 GHz, due to its minimal atmospheric absorption [2]. Therefore, in this work, the 5G millimeter-wave frequency of 38 GHz is selected for designing an on-chip antenna.

Several on-chip antenna structures have been proposed in the literature, and most of these OCAs were designed to operate at the millimeter-wave frequency of 60 GHz. A couple of on-chip PIFAs have been proposed in the literature. In [3], a straight-line PIFA fabricated in standard CMOS process technology was proposed. The OCA was operable at a millimeter-wave frequency of 60 GHz. The antenna was excited at the fourth-order mode, resulting in an increased antenna footprint. The measurement results showed that the OCA yielded an absolute gain of −19 dBi. A PIFA has a very strong dependence on the ground plane, and the OCA in [3] was designed without a ground plane, thereby causing significant deterioration in antenna gain. A meander-line on-chip PIFA fabricated in TSMC 180 nm CMOS process technology was proposed in [4]. The OCA was excited at the 5G millimeter-wave frequency of 60 GHz and produced an absolute gain of −15.7 dBi. The OCA's meandered section, residing at the edge of the microchip, helped to reduce the overall antenna length; however, a considerable part of the antenna body remained away from the edge of the microchip, causing a reduction in radiation efficiency. Moreover, the work did not show the dimensions of the antenna, providing no information on the width and length of the fabricated OCA. A 60 GHz triangular monopole antenna-on-chip in 180 nm CMOS process technology was designed in [5]. An attempt was made to improve the gain of the antenna with the help of artificial magnetic conductors (AMCs). Simulation results showed that the antenna produced a gain of 2.5 dBi. A 60 GHz on-chip patch antenna in 180 nm CMOS technology was presented in [6]. The measurement results indicated that the antenna offered a gain of around −2.2 dBi in the frequency range between 50 and 70 GHz. Very recently, a monopole on-chip antenna fabricated in 65 nm CMOS technology was proposed in [7]. The OCA reported an antenna gain of 0 dBi at 60 GHz. Apart from the above-mentioned OCAs, several other OCAs have been excited at the millimeter-wave frequency of 60 GHz [8–11]; however, a 38 GHz triangular monopole on-chip antenna was reported in [12]. The OCA was designed with AMCs in 28 nm CMOS process technology, and the work presented simulation results only. The OCA showed an antenna gain of −1.75 dBi and occupied a considerably large area of more than 4 mm² on the microchip. The metal width of the OCA was 8 μm, which is very narrow and thus constrained the antenna gain.
Moreover, the simulation-based OCA did not discuss the requirements of metal-fill density or the practical feasibility of such a large (more than 4 mm²) exclusion area, which is essential for the proper operation of the antenna on the microchip. In simulations of on-chip antennas, the designer may opt to ignore the exclusion-area limitation, but in practice the scenario is quite different.

A large metallic area, such as a patch or similar structure constructed using the topmost metal layer of a CMOS technology, has the potential to suffer microfracture; hence, in this work, a planar inverted-F antenna comprising a few metallic lines is selected to avoid such a risk. Moreover, a favorable feature of the PIFA with regard to on-chip integration is its small vertical dimension. It enables the antenna to be implemented at one of the chip's edges, thereby facilitating radiation directly into free space and hence minimizing the absorption of electromagnetic radiation within the silicon substrate. As the PIFA has both horizontal and vertical elements, it can operate in both horizontal and vertical polarizations; this two-polarization performance helps to improve reception in WPAN environments. The input impedance of a PIFA can be set by adjusting the distance between the shorting stripe and the feeding stripe, and it can be tuned to match the source impedance without an additional matching circuit between the source and the antenna. The PIFA offers a small form factor, as it is only a quarter wavelength long, and can therefore easily fit inside the already space-constrained environment of a microchip. Moreover, a PIFA has a strong connection to the ground plane through its shorting stripe; therefore, compared to other antennas, it naturally behaves in a very robust manner when operating close to metallic objects, which tend to affect the radiation capabilities of an antenna. This fact is particularly important in a microchip environment, where a design rule check (DRC) known as "pattern density" must be satisfied. A PIFA can therefore better tolerate the presence of metal-fill chunks in its vicinity compared to other OCA designs. Taking all of the above-mentioned facts into account, the PIFA can be inferred to be the best-suited OCA candidate for indoor 5G wireless applications. Moreover, owing to the PIFA's proven edge over other antennas, it is widely used in practice as a mobile communication antenna. However, such an antenna has not previously been investigated or implemented on-chip at 38 GHz, which is one of the potential 5G frequencies. This work presents a top-loaded on-chip planar inverted-F antenna (PIFA) operable at the millimeter-wave frequency of 38 GHz. The proposed antenna shows a reflection coefficient |S11| value of −14.8 dB and offers an antenna gain of 0.7 dBi at the center frequency of 38 GHz. To the best of the authors' knowledge, this work proposes the first on-chip PIFA designed, fabricated, and tested at the 5G millimeter-wave frequency of 38 GHz.

This paper is organized as follows. Section 2 describes the details of the CMOS technology and the design of the proposed on-chip PIFA along with the layout challenges. Measurement results are reported in Section 3, and finally, Section 4 concludes the paper.
## 2. On-Chip Planar Inverted-F Antenna Design
The proposed on-chip planar inverted-F antenna (PIFA) is implemented in TSMC 180 nm CMOS process technology. Figure 1 shows the stacked back-end-of-line (BEOL) metal layers of the technology, which offers 6 metallization levels. The topmost metal layer M-6 is utilized to implement the on-chip antenna (OCA). There is a passivation layer of silicon nitride deposited at the top of the microchip for protection purposes. The silicon substrate is 300 μm thick with a permittivity of 11.9 and a resistivity of 10 Ω·cm. The region between the metal layer M-6 and the substrate is filled with silicon dioxide having a dielectric constant of 3.9.
Figure 1
Stacked BEOL metal layers of the TSMC 180 nm CMOS process technology.

The proposed on-chip PIFA design is shown in Figure 2. The antenna consists of a feeding stripe, a shorting stripe, the main antenna body, and a folded-stripe section. The antenna structure is positioned very close to the edge of the microchip, enabling the OCA to radiate readily into free space and hence minimizing the absorption of radiation within the silicon substrate. The placement of the OCA close to the chip's edge improves the gain performance of the antenna. The resonant frequency can be tuned by varying the effective length (L1 + L2 + L3 + H) of the PIFA, and the input impedance can be matched to 50 Ω by adjusting the spacing between the feeding and shorting stripes. The open end of the PIFA is folded to achieve a top-loaded structure; the folded section provides an additional capacitance, which helps to achieve wide bandwidth and improved 50 Ω matching.
Figure 2
Design and dimensions of the proposed on-chip PIFA.

As a PIFA is a quarter-wavelength antenna, the approximate length of the proposed antenna can be evaluated using the following formula:

$$L \approx \frac{\lambda_g}{4} = \frac{1}{4}\cdot\frac{c}{f\sqrt{\varepsilon_r}}, \qquad L \approx 1000\ \mu\text{m}, \tag{1}$$

where $\lambda_g$ is the guided wavelength.

The dimensions of the PIFA were optimized as shown in Figure 3. The blue-colored trace in the figure was captured when the height of the antenna was 100 μm and the open end of the PIFA was not folded, whereas the green-colored trace was obtained with the folded section of the antenna. A clear improvement in reflection coefficient values and bandwidth can be observed in the green trace compared with the blue trace. However, the green-colored trace was still offset from the desired resonant frequency of 38 GHz. A slight increase of 10 μm in the height of the PIFA centered the resonance dip (red-colored trace) at exactly 38 GHz while also increasing the bandwidth of the antenna by around 1 GHz.
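As a quick numerical check of equation (1), the minimal sketch below evaluates the quarter-wave length at f = 38 GHz. It assumes that the permittivity entering the guided-wavelength term is the oxide value εr = 3.9 quoted in this section; with that assumption the result lands at the ~1000 μm length stated above.

```python
# Minimal sketch: quarter-wavelength PIFA length from equation (1).
# Assumption: the permittivity entering lambda_g is the oxide's eps_r = 3.9.
import math

c = 3.0e8          # speed of light (m/s)
f = 38e9           # design frequency (Hz)
eps_r = 3.9        # relative permittivity of the silicon dioxide layer

lambda_g = c / (f * math.sqrt(eps_r))   # guided wavelength (m)
L = lambda_g / 4                        # quarter-wave length (m)

print(f"lambda_g ~ {lambda_g * 1e6:.0f} um")  # ~3998 um
print(f"L        ~ {L * 1e6:.0f} um")         # ~999 um, i.e. L ~ 1000 um
```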
Figure 3
Reflection coefficient traces showing the selection of the optimized dimensions of the proposed OCA.

The antenna is fed through a coplanar waveguide (CPW) incorporating 100 μm pitch ground-signal-ground (GSG) pads, as shown in Figure 2. As the coplanar waveguide is part of the overall antenna structure, its effect is included in the impedance matching characteristics of the antenna. Figure 4 shows the top view of the optimized on-chip antenna. The proposed PIFA is printed on the topmost metal layer M-6, and the ground plane is deployed on the metal layer M-1. In the 180 nm CMOS process node, the topmost metal layer M-6 comes with a few options for the layer thickness: the general option provides a thickness of 0.99 μm, the second option is a relatively thicker metal layer with a thickness of 2.34 μm, and the third option offers the thickest layer, called the ultrathick metal (UTM) layer, with a thickness of 4.6 μm. In this work, the 4.6 μm UTM is used to implement the antenna structure. The 20 μm metal width of the OCA stripes, along with the maximum thickness (4.6 μm), contributes to enhancing the gain performance of the antenna. The ground pads of the proposed antenna are connected to the ground plane at metal layer M-1 through vias. The optimized ground plane covers an area of 1645 μm × 897 μm, as shown in Figure 4. The ground plane reflects the electromagnetic radiation and thereby improves the antenna gain.
Figure 4
Top view of the proposed OCA depicting the designated exclusion area along with the dummy metal-fill region and the ground plane.

There is an important practical consideration regarding standard foundry fabrication rules that needs to be taken into account while designing an integrated antenna in standard CMOS processes. Fabrication rules, termed "design rule checks" (DRCs), are imperative to satisfy for any structure deployed on the silicon substrate to be manufacturable. The DRC of concern for the OCA is "pattern density": every metal layer of the process technology must satisfy a specific percentage (20% to 80%) of metal fill over the total area of the microchip. However, these small chunks of every metal layer spread all around the microchip would disturb the electromagnetic radiation from the integrated antenna. Therefore, in order to avoid metal-fill interference, an exclusion area (0.439 mm²) surrounding the PIFA structure is designed in the layout, as shown in Figure 4. A toy numerical sketch of this density check is given below.

Most on-chip antennas (OCAs) designed at 60 GHz have deployed artificial magnetic conductors (AMCs) underneath the antenna structure; however, AMCs at 38 GHz are practically nonfeasible on-chip due to the constraint of the large exclusion area. At high mmW frequencies (such as 60 GHz), the dimensions of the AMC unit cell are small, whereas at relatively low mmW frequencies (such as 38 GHz) these dimensions become comparatively large. The overall AMC grid designed at the mmW frequency of 38 GHz would therefore occupy a huge area on the microchip and, to ensure effective operation, would demand that all its occupied regions be excluded from the dummy metal fill. However, at the fabrication end, microchip foundries do not allow a large exclusion area due to the high possibility of microfractures in the chip and/or deformation of the microchip structure. In fact, a large exclusion area jeopardizes the mechanical stability of the microchip and, hence, is not approved by the foundry.
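To make the pattern-density constraint concrete, here is a toy sketch of the kind of check a DRC deck performs. Only the 20%–80% band comes from the text; the per-window framing, window sizes, and fill areas are illustrative assumptions (real foundry decks check density over sliding windows whose sizes and waivers are process-specific).

```python
# Toy sketch of a "pattern density" DRC: each checking window must hold
# between 20% and 80% metal fill. All areas below are illustrative only.

def metal_fill_ratio(metal_area_um2: float, window_area_um2: float) -> float:
    """Fraction of a checking window covered by metal."""
    return metal_area_um2 / window_area_um2

def density_ok(ratio: float, lo: float = 0.20, hi: float = 0.80) -> bool:
    """True if the fill ratio lies inside the allowed band."""
    return lo <= ratio <= hi

# A window filled with dummy metal passes; a window inside the antenna's
# exclusion area (almost no metal) would fail if it were not waived.
for label, metal, window in [("dummy-filled window", 30_000.0, 100_000.0),
                             ("exclusion-area window", 2_000.0, 100_000.0)]:
    r = metal_fill_ratio(metal, window)
    print(f"{label}: fill = {r:.0%}, ok = {density_ok(r)}")
```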
## 3. Antenna Measurement Results
The photomicrograph of the on-chip planar inverted-F antenna (PIFA) implemented in TSMC 180 nm CMOS process technology is shown in Figure 5. The area occupied by the antenna on the microchip is 1645 μm × 1164 μm. A Cascade Microtech Summit 11K probe station and a Rohde & Schwarz ZVA-50 vector network analyzer (VNA) are used to perform the on-wafer measurements. The Cascade Microtech coplanar probes are landed on 100 μm pitch GSG (ground-signal-ground) pads to excite the antenna. The reflection coefficient |S11| of the proposed OCA is shown in Figure 6. The simulated and measured reflection coefficient values are −23.76 dB and −14.8 dB, respectively, at the center frequency of 38 GHz. The figure shows that the measured reflection coefficient trace stays below −10 dB over a considerable range of frequencies. It can also be observed from Figure 6 that the measured resonance dip is shifted by about 2 GHz from the simulated resonance dip. The reason for this shift could be that the signal effectively perceives the longitudinal dimension of the antenna as slightly smaller than the realized stripe.

Figure 5
Radiation pattern measurement setup of the antenna under test (AUT) employing the Cascade Microtech GSG coplanar probe, the WR-28 horn antenna, and the Rohde & Schwarz ZVA-50 vector network analyzer.

Figure 6
Simulated and measured reflection coefficient |S11| of the on-chip PIFA.

Figure 5 depicts the test setup for antenna gain and radiation pattern measurements. The antenna under test (AUT) senses the radiation from a WR-28 standard-gain horn antenna (26.5–40 GHz) with a gain of 15 dBi. The aperture dimensions of the horn antenna are 19.03 × 13.64 mm². The rotating shaft is steered to different angles to trace the radiation pattern of the AUT. For radiation-gain calculation through the Friis transmission expression, the transmission coefficient |S21| between the horn antenna and the AUT is measured with the vector network analyzer. The distance between the AUT and the horn antenna is kept at 40 cm to ensure the far-field criterion, expressed by the following formula:

$$R \ge \frac{2D^2}{\lambda_0}, \tag{2}$$

where $D$ is the largest aperture dimension of the horn antenna and $\lambda_0$ is the free-space wavelength.

The antenna gain is calculated with the help of the following formula:

$$G_{\mathrm{AUT}}(\mathrm{dB}) = S_{21}(\mathrm{dB}) - G_{\mathrm{Horn}}(\mathrm{dB}) + L_{\mathrm{Probe}}(\mathrm{dB}) + L_{\mathrm{Adapter}}(\mathrm{dB}) - \left(\frac{\lambda}{4\pi R}\right)^2\bigg|_{\mathrm{dB}}, \tag{3}$$

where $S_{21}$ is the transmission coefficient, $G_{\mathrm{Horn}}$ is the gain of the horn antenna (15 dBi), $L_{\mathrm{Probe}}$ is the probe loss (2.0 dB), $L_{\mathrm{Adapter}}$ is the waveguide-to-coax adapter loss (0.35 dB), $\lambda$ is the free-space wavelength (7.89 mm), and $R$ is the distance between the horn antenna and the AUT (40 cm). These values correspond to the frequency of 38 GHz.

The measurements are conducted after placing the fabricated on-chip antenna over a miniature plastic slab (5 × 5 × 3 mm³) to minimize reflections from the metallic chuck of the probe station. The 3D-printed plastic slab, shown in Figure 5, is made of polylactic acid (PLA). Figure 7 depicts the simulated and measured radiation patterns of the proposed OCA captured at the center frequency of 38 GHz. In the XZ plane, the simulated radiation pattern shows a peak antenna gain of 1.6 dBi and the measured radiation trace exhibits a peak gain of 0.7 dBi, whereas in the YZ plane the simulated and measured radiation traces show peak gain values of 0.09 dBi and −0.52 dBi, respectively. It is important to note that the proximity of the probe head to the AUT limits the scanning zone of the horn antenna in the XZ plane, as shown in Figure 8. The radiation pattern is scanned up to a safe limit of 10°, and readings are not captured beyond this point. However, in the YZ plane, the entire angular sector from 90° to 270° is scanned to acquire the radiation pattern. The tilt in the antenna's radiation beam (Figure 7(a)) confirms that positioning the OCA structure close to the edge of the microchip facilitates radiation directly into free space and hence minimizes the absorption of radiation within the silicon substrate. The minor discrepancy between the simulated and measured results could be due to the OCA experiencing slightly higher dielectric and conductor losses than expected.
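The sketch below works through equations (2) and (3) with the numbers quoted above (D = 19.03 mm, λ = 7.89 mm, R = 40 cm, and the stated horn gain and losses). The S21 value is a hypothetical placeholder, not a figure reported in the text; it is chosen only to show that an |S21| near −62.7 dB would reproduce the reported 0.7 dBi peak gain.

```python
# Sketch of the gain-extraction arithmetic in equations (2) and (3).
# The s21_dB value below is hypothetical (not reported in the paper).
import math

lam = 7.89e-3    # free-space wavelength at 38 GHz (m)
D = 19.03e-3     # largest aperture dimension of the WR-28 horn (m)
R = 0.40         # horn-to-AUT separation (m)

# Equation (2): far-field boundary of the horn
r_min = 2 * D**2 / lam
print(f"far-field boundary ~ {r_min * 100:.1f} cm")   # ~9.2 cm, so R = 40 cm is safe

# Equation (3): AUT gain from the measured transmission coefficient
g_horn, l_probe, l_adapter = 15.0, 2.0, 0.35          # dB values from the text
path_dB = 20 * math.log10(lam / (4 * math.pi * R))    # (lambda/(4*pi*R))^2 in dB

s21_dB = -62.7                                        # hypothetical measurement
g_aut = s21_dB - g_horn + l_probe + l_adapter - path_dB
print(f"G_AUT ~ {g_aut:.1f} dBi")                     # ~0.7 dBi
```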
Figure 7
Simulated (blue) and measured (orange) radiation patterns in the XZ plane (a) and the YZ plane (b) at the center frequency of 38 GHz.
Figure 8
Scannable angular sectors for the XZ plane and the YZ plane.

Figure 9 depicts the measured peak gain of the fabricated OCA at different frequencies in the vicinity of 38 GHz. It can be observed from Figure 9 that the gain values are slightly higher at 39 GHz and 40 GHz than at 38 GHz. This can be explained by the measured reflection coefficient trace in Figure 6: the measured S11 trace touches −15 dB at 38 GHz and continues its downward trend up to 40 GHz. A better reflection coefficient value means less mismatch, so more power is transferred to the OCA (see the short mismatch-loss sketch following the caption of Figure 9); this is why the measured gain at 39 GHz and 40 GHz is better than at 38 GHz. Conversely, for frequencies below 38 GHz, the measured reflection coefficient trace shows an upward trend (moving from 38 GHz to 32 GHz) in Figure 6, and the corresponding gain values in Figure 9 are therefore lower than at 38 GHz.

Figure 9
Measured peak gain values of the fabricated OCA at different frequencies.
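As referenced above, the link between the depth of the |S11| dip and the power accepted by the antenna can be made explicit with the standard mismatch-loss formula. This is a minimal sketch; the sample |S11| values are round numbers for illustration, not readings from Figure 6.

```python
# Sketch: mismatch loss for a given |S11|, showing why a deeper dip
# means more power delivered to the antenna. Sample values are illustrative.
import math

def mismatch_loss_dB(s11_dB: float) -> float:
    """Power lost to reflection, in dB, for a reflection coefficient |S11|."""
    gamma = 10 ** (s11_dB / 20)           # linear |reflection coefficient|
    return -10 * math.log10(1 - gamma**2)

for s11 in (-5, -10, -15, -20):
    print(f"|S11| = {s11:4d} dB -> mismatch loss = {mismatch_loss_dB(s11):.2f} dB")
# -10 dB accepts ~90% of the incident power; -15 dB accepts ~97%.
```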
## 4. Conclusion
The paper presented a 38 GHz on-chip planar inverted-F antenna (PIFA) implemented in the TSMC 180 nm CMOS process node. The OCA structure was deployed using the ultrathick metal (UTM) layer. To improve the gain performance, the OCA was positioned close to the edge of the microchip. The open end of the antenna was bent to develop a top-loaded PIFA structure, resulting in better 50 Ω impedance matching and wider bandwidth. Measurements were conducted after placing the OCA over a 3D-printed plastic slab to reduce reflections from the metallic chuck of the probe station. The proposed antenna showed a return loss of 14.8 dB and a gain of 0.7 dBi at the center frequency of 38 GHz. The implemented CMOS PIFA offers a simple geometrical structure, a small form factor, and a cost-effective antenna solution; it is therefore one of the most suitable on-chip antennas for applications related to 5G cellular communications in the 38 GHz band.
---
*Source: 1017816-2022-06-08.xml*
Moreover, the simulation-based OCA did not discuss the requirements of the metal-fill density and the practical feasibility or nonfeasibility of such a large (more than 4 mm2) exclusion area, essential for the proper operation of the antenna on the microchip. In simulations related to on-chip antennas, exclusion area limitation may opt to be ignored by the designer, but practically the scenario appears to be quite different.A large metallic area like a patch or similar structure constructed using the top-most metal layer of CMOS technology has the potential to suffer microfracture; hence, in this work, a planar inverted-F antenna comprising of a few metallic lines is selected to avoid such a risk. Moreover, a favorable feature of PIFA with regard to on-chip integration is its small vertical dimension. It enables the antenna to be implemented at one of the chip’s edges thereby facilitating the antenna to radiate readily in the free space and hence minimizing the absorption of electromagnetic radiation within the silicon substrate. As PIFA has both horizontal and vertical elements, therefore, it can perform in both horizontal and vertical polarizations. The performance of PIFA with two-polarizations helps to improve the reception in WPAN environments. The input impedance of a PIFA can be set by adjusting the distance between the shorting stripe and the feeding stripe. The input impedance can be tuned to an appropriate value to match the source impedance without making use of an additional circuit between the source and the antenna. PIFA offers a small form factor as it is only a quarter wavelength long, and therefore, it can easily fit inside an already space-constraint environment of a microchip. Moreover, a PIFA has a strong connection with the ground plane through its shorting stripe; therefore, as compared to other antennas, it naturally behaves in a very robust manner when operating close to metallic objects, which have the tendency to affect the radiation capabilities of an antenna. This fact is particularly important in a microchip environment where a design rule check (DRC) called as “pattern density” is a ritual that needs to be satisfied. Therefore, a PIFA can better tolerate the presence of the metal-fill chunks in its vicinity as compared to the other OCA designs. After taking all of the above-mentioned facts into account, it can be inferred that a PIFA is the best-suited OCA candidate for indoor 5G-wireless applications. Moreover, due to PIFA’s proven edge over other antennas, it is practically being widely used as a mobile communication antenna. However, such an antenna has not been investigated/implemented on-chip at 38 GHz, which is one of the potential 5G frequencies. This work presents a top-loaded on-chip planar inverted-F antenna (PIFA) operable at the millimeter-wave frequency of 38 GHz. The proposed antenna shows a reflection co-efficient |S11| value of −14.8 dB and offers an antenna gain of 0.7 dBi at the center frequency of 38 GHz. To the best of the authors’ knowledge, this work proposes the first on-chip PIFA designed, fabricated, and tested at the 5 G millimeter-wave frequency of 38 GHz.This paper is organized as follows. Section2 describes the details of the CMOS technology and the design of the proposed on-chip PIFA along with the layout challenges. Measurement results are reported in Section 3, and finally, Section 4 concludes the work presented in this paper.
## 2. On-Chip Planar Inverted-F Antenna Design
The proposed on-chip planar inverted-F antenna (PIFA) is implemented in TSMC 180-nm CMOS process technology. Figure1 shows the stacked back-end-of-line (BEOL) metal layers of the technology which offers 6 metallization levels. The top-most metal layer M-6 is utilized to implement the on-chip antenna (OCA). There is a passivation layer of silicon nitride which is deposited at the top of the microchip for protection purposes. Silicon substrate is 300 μm thick with permittivity of 11.9 and resistivity of 10 Ω-cm. The region between the metal layer M-6 and the substrate is filled with silicon dioxide having a dielectric constant of 3.9.Figure 1
Stacked BEOL metal layers of the TSMC 180 nm CMOS process technology.The proposed on-chip PIFA design is shown in Figure2. The antenna consists of a feeding stripe, a shorting stripe, main antenna body, and a folded-stripe section. The structure of the antenna is positioned very close to the edge of the microchip enabling the OCA to radiate readily into the free space and hence minimizes the absorption of radiation within the silicon substrate. The placement of the OCA close to the chip’s edge improves the gain performance of the antenna. By varying the effective length (L1 + L2 + L3 + H) of the PIFA, the resonant frequency can be tuned. The input impedance of the antenna can be matched to 50 Ω by adjusting the spacing between the feeding and the shorting stripes. The open end of the PIFA is folded to achieve a top-loaded structure. The folded section provides an additional capacitance effect. This capacitance helps to achieve wide bandwidth and improved 50 Ω matching.Figure 2
Design and dimensions of the proposed on-chip PIFA.As a PIFA is a quarter-wavelength antenna, therefore, the approximate length of the proposed antenna can be evaluated by using the following formula:(1)L≈λg4=14⋅cfεr,L≈1000μm,where “λg” is guided wavelength. The dimensions of the PIFA were optimized as shown in Figure 3. The blue-colored trace in the figure was captured when the height of the antenna was 100 μm and the open end of the PIFA was not folded whereas the green-colored trace was obtained along with the folded section of the antenna. A clear improvement in terms of reflection coefficient values and bandwidth can be observed in the green trace as compared to the blue trace. However, the green-colored trace was still offset from the desired resonant frequency of 38 GHz. A slight increase of 10 μm in the height of the PIFA centered the resonance dip (red-colored trace) at exactly 38 GHz along with relatively providing an increase of around 1 GHz in the bandwidth of the antenna.Figure 3
Reflection coefficient traces show a selection of the optimized dimensions of the proposed OCA.The antenna is fed with the help of a coplanar waveguide (CPW) incorporating 100μm pitch ground-signal-ground (GSG) pads as shown in Figure 2. As the coplanar waveguide is part of the overall antenna structure, therefore, its effect is involved in the impedance matching characteristics of the antenna. Figure 4 shows the top view of the optimized on-chip antenna. The proposed PIFA is printed by the topmost metal layer M-6, and the ground plane is deployed on the metal layer M-1. In the 180-nm CMOS process node, the top-most metal layer M-6 comes with a few options with regard to the layer thickness. The first one is the general option providing a thickness of 0.99 μm, and the second option is relatively a thicker metal layer with a thickness of 2.34 μm whereas the third option offers the thickest layer called as an ultrathick metal (UTM) layer with 4.6 μm of thickness. In this work, the 4.6 μm UTM is used to implement the antenna structure. The 20-µm-wide metal width of the OCA stripes along with the maximum thickness (4.6 μm) contributes to enhancing the gain performance of the antenna. The ground pads of the proposed antenna are connected to the ground plane at metal layer M − 1 with the help of vias. The optimized ground plane covers an area of 1645 μm × 897 μm as shown in Figure 4. The ground plane reflects the electromagnetic radiation and thereby improves the antenna gain.Figure 4
Top view of the proposed OCA depicting designated exclusion area along with dummy metal-fill region and the ground plane.There is an important practical consideration regarding standard foundry fabrication rules which needs to be taken into account while designing an integrated antenna in standard CMOS processes. Fabrication rules also termed as “design rule check” (DRC) are imperative to be satisfied for deploying any structure on the silicon substrate for manufacturability. The DRC of concern for the OCA is “pattern density”. Pattern density means that all the metal layers of that process technology need to satisfy a specific percentage (20% to 80%) of the metal-fill in the total area of the microchip. However, these small chunks of every metal layer spread all around the microchip will cause disturbance in the electromagnetic radiation from the integrated antenna. Therefore, in order to avoid the metal-fill interference, an exclusion area (0.439 mm2) in the layout is designed surrounding the PIFA structure as shown in Figure 4.Mostly, on-chip antennas (OCAs) designed at 60-GHz have deployed artificial magnetic conductors (AMCs) underneath the antenna structure; however, AMCs at 38 GHz are practically nonfeasible on the chip due to the constraint of the large exclusion area. At high mmW frequencies (like 60-GHz), the dimensions of the AMC unit cell are small whereas, at relatively low mmW frequencies (like 38-GHz), these dimensions become comparatively large. Therefore, the overall AMC grid designed at the mmW frequency of 38-GHz will occupy a huge area on the microchip and, hence, for the purpose of ensuring effective operation, will demand all its occupied regions to be excluded from the dummy metal-fill. However, at the fabrication end, the microchip foundries do not allow a large exclusion area due to the high possibility of microfractures in the chip and/or deformation of the microchip structure. In fact, a large exclusion area jeopardizes the mechanical stability of the microchip and, hence, is not approved by the foundry.
## 3. Antenna Measurement Results
The photomicrograph of the on-chip planar inverted-F antenna (PIFA) implemented in TSMC 180 nm CMOS process technology is shown in Figure5. The area occupied by the antenna on the microchip is 1645 μm × 1164 μm. Cascade Microtech Summit 11K probe station and Rohde & Schwarz (ZVA-50) vector network analyzer (VNA) are used to perform the on-wafer measurements. The Cascade Microtech coplanar probes are landed on 100 μm-pitch GSG (ground-signal-ground) pads for the purpose of exciting the antenna. The reflection coefficient |S11| of the proposed OCA is shown in Figure 6. Simulated and measured reflection coefficient values are −23.76 dB and −14.8 dB, respectively, at the center frequency of 38 GHz. The figure shows that the measured reflection coefficient trace stays below −10 dB for a considerable range of frequencies. It can also be observed from Figure 6 that the measured resonance dip has shifted to about 2 GHz from the simulated resonance dip. The reason for this shift could be that the introduced signal has perceived the longitudinal dimension of the antenna as slightly smaller than the realized stripe.Figure 5
Radiation pattern measurement setup of the antenna under test (AUT) employing Cascade Microtech GSG coplanar probe, WR-28 horn antenna, and Rohde & Schwarz ZVA-50 vector network analyzer.Figure 6
Simulated and measured reflection coefficient |S11| of the on-chip PIFA.Figure5 depicts the test setup for antenna gain and radiation pattern measurements. The antenna under test (AUT) senses the radiation from the WR-28 standard gain horn antenna (26.5–40 GHz) with a gain of 15 dBi. The aperture dimensions of the horn antenna are 19.03 × 13.64 mm2. The rotating shaft is steered at different angles to trace the radiation pattern of the AUT. For radiation-gain calculation through Friis transmission expression, the transmission coefficient |S21| is measured between the horn antenna and the AUT with the help of a vector network analyzer. The distance between the AUT and the horn antenna is kept as 40 cm to ensure the far-field criteria, expressed by the following formula:(2)R≥2D2λ0,where D is the largest aperture dimension of the horn antenna and λ0 is the free-space wavelength.The antenna gain is calculated by the help of the following formula:(3)GAUTdB=S21dB−GHorndB+LProbedB+LAdapterdB−λ4πR2dB,where S21 is transmission coefficient, GHorn is gain of the horn antenna (15 dBi), LProbe is probe loss (2.0 dB), LAdapter is waveguide to coax adapter loss (0.35 dB), λ is the free-space wavelength (7.89 mm), and “R” is the distance between the horn antenna and the AUT (40 cm). The above-mentioned calculations are related to 38 GHz of frequency.The measurements are conducted after placing the fabricated on-chip antenna over a miniature plastic slab (5 × 5 × 3 mm3) for the purpose of minimizing the reflections from the metallic chuck of the probe station. 3-D printed plastic slab, shown in Figure 5, is made up of poly lactic acid (PLA) material. Figure 7 depicts the simulated and measured radiation patterns of the proposed OCA captured at the center frequency of 38 GHz. In the XZ plane, the simulated radiation pattern shows a peak antenna gain of 1.6 dBi, and the measured radiation trace exhibits the peak gain of 0.7 dBi whereas, in the YZ plane, the simulated and measured radiation traces show the peak gain values of 0.09 dBi and −0.52 dBi, respectively. It is important to note here that the proximity of the probe head to AUT limits the scanning zone of the horn antenna in the XZ plane as shown in Figure 8. The radiation pattern is scanned up to a safe limit of 10°, and the reading is not captured beyond this point. However, in the YZ plane, the entire angular sector from 90° to 270° is scanned for acquiring the radiation pattern. The tilt in the antenna beam of radiation (Figure 7(a)) confirms that the positioning of the OCA structure close to the edge of the microchip facilitates the antenna to radiate readily into free space and hence minimizes the absorption of radiation within the silicon substrate. The minor discrepancy in the simulated and measured results could be due to the reason that the OCA experienced slightly higher dielectric and conductor losses than expected.Figure 7
Simulated (blue) and measured (orange) radiation patterns in theXZ plane (a) and YZ plane (b) at the center frequency of 38 GHz.
(a)(b)Figure 8
Scannable angular sectors for XZ plane and YZ plane.Figure9 depicts the measured peak gain of the fabricated OCA at different frequencies in the vicinity of 38 GHz. It can be observed from Figure 9 that the gain values are slightly higher at 39 GHz and 40 GHz as compared to 38 GHz of frequency. This can easily be explained by observing the measured reflection coefficient trace in Figure 6. The measured S11 trace in Figure 6 touches −15 dB at 38 GHz and continues to follow the downward trend up to 40 GHz. A better reflection coefficient value means there exists less mismatch and more power is being transferred to the OCA. This is the reason that at 39 GHz and 40 GHz, the measured gain of the antenna is coming better than at 38 GHz of frequency whereas, for frequencies below 38 GHz, as the measured reflection coefficient trace shows an upward trend (moving from 38 GHz to 32 GHz) in Figure 6, therefore, the corresponding gain values in Figure 9 are lower than that at 38 GHz of frequency.Figure 9
Measured peak gain values of the fabricated OCA at different frequencies.
## 4. Conclusion
The paper presents a 38 GHz on-chip planar inverted-F antenna (PIFA) implemented in TSMC 180 nm CMOS process node. The OCA structure is deployed with the help of the ultrathick metal (UTM) layer. For improving the gain performance, the OCA is positioned close to the edge of the microchip. The open end of the antenna is bent to develop a top-loaded PIFA structure resulting in better 50 Ω impedance matching and wider bandwidth. Measurements are conducted after placing the OCA over a 3D-printed plastic slab to reduce the reflections from the metallic chuck of the probe station. The proposed antenna showed a return loss of 14.8 dB and a gain of 0.7 dBi at the center frequency of 38 GHz. The implemented CMOS-PIFA offered a simple geometrical structure, a small form factor, and a cost-effective antenna solution. Therefore, it is one of the most suitable on-chip antennas for applications related to 5 G cellular communications at the 38 GHz band.
---
*Source: 1017816-2022-06-08.xml* | 2022 |
# On Lacunary Mean Ideal Convergence in Generalized Random n-Normed Spaces
**Authors:** Awad A. Bakery; Mustafa M. Mohammed
**Journal:** Abstract and Applied Analysis
(2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/101782
---
## Abstract
An ideal $I$ is a hereditary and additive family of subsets of the positive integers $\mathbb{N}$. In this paper, we will introduce the concept of generalized random $n$-normed space as an extension of random $n$-normed space. Also, we study the concepts of lacunary mean ($L$)-ideal convergence and $L$-ideal Cauchyness for sequences of complex numbers in the generalized random $n$-norm. We introduce $I_L$-limit points and $I_L$-cluster points. Furthermore, Cauchy and $I_L$-Cauchy sequences in this construction are given. Finally, we find relations among these concepts.
---
## Body
## 1. Introduction
The sets of natural numbers and complex numbers will be denoted by $\mathbb{N}$ and $\mathbb{C}$, respectively. Fast [1] and Steinhaus [2] independently introduced the notion of statistical convergence for sequences of real numbers, which is a generalization of the concept of convergence. The concept of statistical convergence is a very valuable functional tool for studying the convergence problems of numerical sequences through the concept of density. Afterward, several generalizations and applications of this concept have been presented by different authors (see [3–6]). Kostyrko et al. [7] presented a generalization of the concept of statistical convergence with the help of an ideal $I$ of subsets of the set of natural numbers $\mathbb{N}$, and more is studied in [8–11]. This concept of ideal convergence plays a fundamental role not only in pure mathematics but also in other branches of science related to mathematics, mainly in information theory, computer science, dynamical systems, geographic information systems, and population modelling. Menger [12] generalized the metric axioms by associating a distribution function with each pair of points of a set; this system is called a probabilistic metric space. By using the concept of Menger, Šerstnev [13] introduced the concept of probabilistic normed spaces. It provides an important area into which many essential results of linear normed spaces can be generalized; see [14]. Later, Alsina et al. [15] presented a new definition of probabilistic normed space which includes the definition of Šerstnev as a special case. The concept of ideal convergence for single and double sequences of real numbers in probabilistic normed space was introduced and studied by Mursaleen and Mohiuddine [16]. Mursaleen and Alotaibi [17] studied the notion of ideal convergence for single and double sequences in random 2-normed spaces. For more details and linked concepts, we refer to [18–26]. In [27, 28], Gähler introduced a gorgeous theory of 2-normed and $n$-normed spaces in the 1960s; we have studied these subjects and constructed some sequence spaces defined by ideal convergence in $n$-normed spaces [29, 30]. Another important alternative to statistical convergence is the notion of lacunary statistical convergence introduced by Fridy and Orhan [31]. Recently, Mohiuddine and Aiyub [4] studied lacunary statistical convergence by introducing the concept of $\Theta$-statistical convergence in random 2-normed space. Their work can be considered as a particular generalization of statistical convergence. In [32], Mursaleen and Mohiuddine generalized the idea of lacunary statistical convergence with respect to the intuitionistic fuzzy normed space, and Debnath [33] investigated lacunary ideal convergence in intuitionistic fuzzy normed linear spaces. Also, lacunary statistically convergent double sequences in probabilistic normed space were studied by Mohiuddine and Savaş in [34]. Jebril and Dutta [35] introduced the concept of random $n$-normed space. In this paper, we first give some basic definitions and properties of random $n$-normed space in Section 2. In Section 3, we define a new and interesting notion of generalized random $n$-normed spaces; convergent sequences in it are introduced and we provide some results on it. In Section 4, we study lacunary mean ($L$)-ideal convergence and $L$-ideal Cauchyness for sequences of complex numbers in the generalized random $n$-norm. Finally, in Section 5, we introduce $I_L$-limit points and $I_L$-cluster points. Moreover, Cauchy and $I_L$-Cauchy sequences in this framework are given, and we find relations among these concepts.
## 2. Definitions and Preliminaries
For the reader’s expediency, we restate some definitions and results that will be used in this paper. The notion of statistical convergence depends on the density (asymptotic or natural) of subsets of $\mathbb{N}$.

Definition 1.
A subset $E$ of $\mathbb{N}$ is said to have natural density $\delta(E)$ if
$$\delta(E)=\lim_{n\to\infty}\frac{1}{n}\left|\{k\le n : k\in E\}\right|\ \text{exists},\tag{1}$$
where $|E|$ denotes the cardinality of the set $E$.
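As a quick numerical illustration of Definition 1 (a sketch of ours, not part of the original paper), the following Python snippet approximates $\delta(E)$ by its finite-$n$ averages for two familiar sets:

```python
def density_estimate(indicator, n: int) -> float:
    """Finite-n approximation of delta(E): (1/n) * |{k <= n : k in E}|."""
    return sum(1 for k in range(1, n + 1) if indicator(k)) / n

is_even = lambda k: k % 2 == 0
is_square = lambda k: int(k ** 0.5) ** 2 == k

for n in (10**3, 10**5):
    print(n, density_estimate(is_even, n), density_estimate(is_square, n))
# The even numbers have density 1/2; the perfect squares have density 0.
```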
Definition 2.
A sequence $(x_k)$ is statistically convergent to $\ell$ if, for every $\varepsilon>0$,
$$\delta\left(\{k\in\mathbb{N} : |x_k-\ell|\ge\varepsilon\}\right)=0.\tag{2}$$
In this case, $\ell$ is called the statistical limit of the sequence $(x_k)$.
Definition 3.
A nonempty family of sets $I\subseteq 2^{\mathbb{N}}$ is said to be an ideal on $\mathbb{N}$ if and only if
(a) $\phi\in I$,
(b) for each $A,B\in I$ one has $A\cup B\in I$,
(c) for each $B\in I$ and $A\subset B$, one has $A\in I$.

Definition 4.
An ideal $I$ is an admissible ideal if $\{x\}\in I$ for each $x\in\mathbb{N}$.

Definition 5.
An ideal $I\subseteq 2^{\mathbb{N}}$ is said to be nontrivial if $I\neq\phi$ and $\mathbb{N}\notin I$.

Definition 6.
A nonempty family of sets $F\subseteq 2^{\mathbb{N}}$ is said to be a filter on $\mathbb{N}$ if and only if
(a) $\phi\notin F$,
(b) for each $A,B\in F$ one has $A\cap B\in F$,
(c) for each $A\in F$ and $B\supset A$, one has $B\in F$.

For each ideal $I$, there is a filter $F(I)$ corresponding to $I$; that is, $F(I)=\{K\subseteq\mathbb{N} : \mathbb{N}-K\in I\}$.

Example 7.
If we take $I=I_f=\{A\subseteq\mathbb{N} : A \text{ is a finite subset}\}$, then $I_f$ is a nontrivial admissible ideal of $\mathbb{N}$ and the corresponding convergence coincides with the usual convergence.

Example 8.
If we take $I=I_\delta=\{A\subseteq\mathbb{N} : \delta(A)=0\}$, where $\delta(A)$ denotes the asymptotic density of the set $A$, then $I_\delta$ is a nontrivial admissible ideal of $\mathbb{N}$ and the corresponding convergence coincides with statistical convergence.
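As a small numerical companion to Example 8 (our illustration, not from the paper), the indicator sequence of the perfect squares is statistically convergent to 0, since the set where it deviates from 0 has density zero:

```python
def stat_deviation_density(x, ell: float, eps: float, n: int) -> float:
    """Finite-n approximation of delta({k <= n : |x_k - ell| >= eps})."""
    bad = sum(1 for k in range(1, n + 1) if abs(x(k) - ell) >= eps)
    return bad / n

# x_k = 1 when k is a perfect square, 0 otherwise
x = lambda k: 1.0 if int(k ** 0.5) ** 2 == k else 0.0
for n in (10**3, 10**5):
    print(n, stat_deviation_density(x, 0.0, 0.5, n))  # tends to 0 as n grows
```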
Definition 9.
A sequence $x=(x_k)$ is said to be $I$-convergent to a real number $\ell$ if
$$\{k\in\mathbb{N} : |x_k-\ell|\ge\varepsilon\}\in I\quad\text{for every }\varepsilon>0.\tag{3}$$
In this case, we write $I\text{-}\lim x_k=\ell$.
Definition 10.
By a lacunary sequence $\Theta=(i_j)$, $j=0,1,2,\dots$, where $i_0=0$, one will mean an increasing sequence of nonnegative integers with $i_j-i_{j-1}\to\infty$ as $j\to\infty$; we write $h_j=i_j-i_{j-1}$. The intervals determined by $\Theta$ will be denoted by $\Lambda_j=(i_{j-1},i_j]$.
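The following short Python sketch illustrates Definition 10 for the standard choice $i_j=2^j$ (the choice of $\Theta$ here is ours, purely for illustration):

```python
# Lacunary sequence i_j = 2^j: i_j - i_{j-1} = 2^{j-1} -> infinity
i = [0] + [2 ** j for j in range(1, 8)]

for j in range(1, len(i)):
    h_j = i[j] - i[j - 1]                      # interval length h_j
    Lambda_j = range(i[j - 1] + 1, i[j] + 1)   # Lambda_j = (i_{j-1}, i_j]
    print(f"j={j}: h_j={h_j}, Lambda_j=[{Lambda_j.start}..{Lambda_j.stop - 1}]")
```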
Definition 11.
A sequence $x=(x_k)$ is said to be lacunary ($L$)-statistically convergent to the number $\ell$ if, for every $\varepsilon>0$, one has
$$\lim_{j\to\infty}\frac{1}{h_j}\left|\{k\in\Lambda_j : |x_k-\ell|\ge\varepsilon\}\right|=0.\tag{4}$$
The notion of lacunary ideal convergence of real sequences was introduced by Tripathy et al. [36], and Hazarika [37, 38] introduced lacunary ideal convergent sequences of fuzzy real numbers and studied some of their properties.
Definition 12.
Let $I\subset 2^{\mathbb{N}}$ be a nontrivial ideal. A sequence $x=(x_k)$ is said to be $I_L$-summable to a number $\ell$ if, for every $\varepsilon>0$, the set
$$\left\{j\in\mathbb{N} : \frac{1}{h_j}\sum_{k\in\Lambda_j}|x_k-\ell|\ge\varepsilon\right\}\in I.\tag{5}$$
∈
ℕ and let X be a linear space over the field K of dimension d, where d
≥
n
≥
2 and K is the field of real or complex numbers. A real valued function ∥
·
,
…
,
·
∥ on X
n satisfies the following four conditions:(1)
∥
x
1
,
x
2
,
…
,
x
n
∥
=
0 if and only if x
1
,
x
2
,
…
,
x
n are linearly dependent in X;
(2)
∥
x
1
,
x
2
,
…
,
x
n
∥ is invariant under permutation;
(3)
∥
α
x
1
,
x
2
,
…
,
x
n
∥
=
|
α
|
∥
x
1
,
x
2
,
…
,
x
n
∥ for any α
∈
K;
(4)
∥
x
+
x
′
,
x
2
,
…
,
x
n
∥
≤
∥
x
,
x
2
,
…
,
x
n
∥
+
∥
x
′
,
x
2
,
…
,
x
n
∥ is called an n-norm on X, and the pair (
X
;
∥
·
,
…
,
·
∥
) is called an n-normed space over the field K.Definition 14.
Definition 14.
A probability distribution function is a function $F$ that is nondecreasing and left continuous on $(0,\infty)$ such that $F(0)=0$ and $F(\infty)=1$. The family of all probability distribution functions will be denoted by $\Delta^+$. The space $\Delta^+$ is partially ordered by the usual pointwise ordering of functions and has both a maximal element $\varepsilon_0$ and a minimal element $\varepsilon_\infty$; these are given, respectively, by
$$\varepsilon_0(t)=\begin{cases}0, & t\le 0,\\ 1, & t>0,\end{cases}\qquad \varepsilon_\infty(t)=\begin{cases}0, & t<\infty,\\ 1, & t=\infty.\end{cases}\tag{6}$$
There is a natural topology on $\Delta^+$ that is induced by the modified Lévy metric $d_L$ [39, 40]; that is,
$$d_L(F,G)=\inf\{h : \text{both } [F,G;h] \text{ and } [G,F;h] \text{ hold}\}\tag{7}$$
for all $F,G\in\Delta^+$ and $h\in(0,1]$, where $[F,G;h]$ denotes the condition
$$G(t)\le F(t+h)+h,\quad\text{for } t\in\left(0,\frac{1}{h}\right).\tag{8}$$
Convergence with respect to this metric is equivalent to weak convergence of distribution functions; that is, $(F_n)$ in $\Delta^+$ converges weakly to $F$ (written as $F_n\xrightarrow{\omega}F$) if and only if $F_n(t)$ converges to $F(t)$ at every point of continuity of the limit function $F$. Therefore, one has
$$F_n\xrightarrow{\omega}F \iff d_L(F_n,F)\longrightarrow 0,\qquad F(x)>1-x \iff d_L(F,\varepsilon_0)<x \ \text{for every } x>0.\tag{9}$$
Moreover, the metric space $(\Delta^+,d_L)$ is compact.
Definition 15.
A binary operation $\star:[0,1]\times[0,1]\to[0,1]$ is said to be a continuous $t$-norm if the following conditions are satisfied:
(1) $\star$ is associative and commutative,
(2) $\star$ is continuous,
(3) $a\star 1=a$ for all $a\in[0,1]$,
(4) $a\star b\le c\star d$ whenever $a\le c$ and $b\le d$ for each $a,b,c,d\in[0,1]$.

Definition 16.
A binary operation $\diamond:[0,1]\times[0,1]\to[0,1]$ is said to be a continuous $t$-conorm if the following conditions are satisfied:
(1) $\diamond$ is associative and commutative,
(2) $\diamond$ is continuous,
(3) $a\diamond 0=a$ for all $a\in[0,1]$,
(4) $a\diamond b\le c\diamond d$ whenever $a\le c$ and $b\le d$ for each $a,b,c,d\in[0,1]$.
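A small Python sketch (ours, not from the paper) that spot-checks commutativity, the boundary condition, and monotonicity of Definition 15 on a finite grid, for the two $t$-norms used later in this paper (minimum and product); associativity is not exercised here:

```python
import itertools

def looks_like_t_norm(op, grid) -> bool:
    """Spot-check commutativity, boundary a*1=a, and monotonicity on a grid."""
    ok = all(abs(op(a, b) - op(b, a)) < 1e-12
             for a, b in itertools.product(grid, grid))
    ok = ok and all(abs(op(a, 1.0) - a) < 1e-12 for a in grid)
    ok = ok and all(op(a, b) <= op(c, d) + 1e-12
                    for a, b, c, d in itertools.product(grid, repeat=4)
                    if a <= c and b <= d)
    return ok

grid = [k / 10 for k in range(11)]
print(looks_like_t_norm(min, grid))                 # minimum t-norm
print(looks_like_t_norm(lambda a, b: a * b, grid))  # product t-norm
```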
Definition 17.
Let $X$ be a linear space of dimension greater than one, $\star$ a continuous $t$-norm, and $\rho$ a mapping from $X^2$ into $D^+$. If the following conditions are satisfied:
(1) $\rho_{x,y}=\varepsilon_0$ if $x$ and $y$ are linearly dependent,
(2) $\rho_{x,y}=\rho_{y,x}$ for every $x$ and $y$ in $X$,
(3) $\rho_{\alpha x,y}(t)=\rho_{x,y}(t/|\alpha|)$ for every $t>0$, $\alpha\neq 0$, and $x,y\in X$,
(4) $\rho_{x+y,z}(t)\ge\rho_{x,z}(t)\star\rho_{y,z}(t)$,
then $\rho$ is called a random 2-norm on $X$ and $(X;\rho;\star)$ is called a random 2-normed space.
Definition 18.
Let $X$ be a linear space of dimension greater than one over a real field, $\star$ a continuous $t$-norm, and $\rho$ a mapping from $X^n$ into $D^+$. If the following conditions are satisfied:
(1) $\rho_{x_1,x_2,\dots,x_n}=\varepsilon_0 \iff x_1,x_2,\dots,x_n$ are linearly dependent,
(2) $\rho_{x_1,x_2,\dots,x_n}$ is invariant under any permutation of $x_1,x_2,\dots,x_n$,
(3) $\rho_{\alpha x_1,x_2,\dots,x_n}(t)=\rho_{x_1,x_2,\dots,x_n}(t/|\alpha|)$ for every $t>0$, $\alpha\neq 0$,
(4) $\rho_{x_1,x_2,\dots,x_n+x_n'}(t+s)\ge\rho_{x_1,x_2,\dots,x_n}(t)\star\rho_{x_1,x_2,\dots,x_n'}(s)$,
then $\rho$ is called a random $n$-norm on $X$ and $(X;\rho;\star)$ is called a random $n$-normed space.
## 3. Generalized Random n-Normed Space
Throughout the paper let $I$ be an admissible ideal of $\mathbb{N}$. By generalizing Definition 18, we obtain a new notion of generalized random $n$-normed space as follows.

Definition 19.
The five-tuple $(X,\rho,\varrho,\star,\diamond)$ is said to be a generalized random $n$-normed linear space, or in short GRnNLS, if $X$ is a linear space over the field of complex numbers $\mathbb{C}$, $\star$ is a continuous $t$-norm, $\diamond$ is a continuous $t$-conorm, and $\rho,\varrho$ are two mappings on $X^n\times(0,\infty)$ into $D^+\times(0,\infty)$ satisfying the following conditions for every $x=(x_1,x_2,\dots,x_n)\in X^n$ and for each $s,t\in(0,\infty)$:
(1) $\rho_{x_1,x_2,\dots,x_n}+\varrho_{x_1,x_2,\dots,x_n}\le\varepsilon_0$,
(2) $\rho_{x_1,x_2,\dots,x_n}\ge\varepsilon_\infty$,
(3) $\rho_{x_1,x_2,\dots,x_n}=\varepsilon_0$ if and only if $x_1,x_2,\dots,x_n$ are linearly dependent,
(4) $\rho_{\alpha x_1,x_2,\dots,x_n}(t)=\rho_{x_1,x_2,\dots,x_n}(t/|\alpha|)$ for each $\alpha\in\mathbb{C}\setminus\{0\}$,
(5) $\rho_{x_1,x_2,\dots,x_n'}(t)\star\rho_{x_1,x_2,\dots,x_n}(s)\le\rho_{x_1,x_2,\dots,x_n'+x_n}(t+s)$,
(6) $\rho_{x_1,x_2,\dots,x_n}(\cdot):(0,\infty)\to[0,1]$ is continuous,
(7) $\rho_{x_1,x_2,\dots,x_n}(t)$ is invariant under any permutation of $(x_1,x_2,\dots,x_n)$,
(8) $\varrho_{x_1,x_2,\dots,x_n}(t)\ge\varepsilon_\infty$,
(9) $\varrho_{x_1,x_2,\dots,x_n}=\varepsilon_\infty$ if and only if $x_1,x_2,\dots,x_n$ are linearly dependent,
(10) $\varrho_{\alpha x_1,x_2,\dots,x_n}(t)=\varrho_{x_1,x_2,\dots,x_n}(t/|\alpha|)$ for each $\alpha\in\mathbb{C}\setminus\{0\}$,
(11) $\varrho(x_1,x_2,\dots,x_n',t)\diamond\varrho(x_1,x_2,\dots,x_n,s)\ge\varrho(x_1,x_2,\dots,x_n'+x_n,t+s)$,
(12) $\varrho_{x_1,x_2,\dots,x_n}(\cdot):(0,\infty)\to[0,1]$ is continuous,
(13) $\varrho_{x_1,x_2,\dots,x_n}(t)$ is invariant under any permutation of $(x_1,x_2,\dots,x_n)$.
In this case, $(\rho,\varrho)$ is called a generalized random $n$-norm on $X$ and we denote it by $(\rho,\varrho)_n$.
Example 20.
Let $(X,\|\cdot,\dots,\cdot\|)$ be an $n$-normed linear space. Put $a\star b=\min\{a,b\}$ and $a\diamond b=\max\{a,b\}$ for all $a,b\in[0,1]$,
$$\rho_{x_1,x_2,\dots,x_n}(t)=\frac{t}{t+\|x_1,x_2,\dots,x_n\|},\qquad \varrho_{x_1,x_2,\dots,x_n}(t)=\frac{\|x_1,x_2,\dots,x_n\|}{t+\|x_1,x_2,\dots,x_n\|}.$$
Then, $(X,\rho,\varrho,\star,\diamond)$ is GRnNLS.
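Before the proof, a numerical spot-check of Example 20 may help (our sketch; it takes $n=1$ with the absolute value $|x|$ as a stand-in norm, which is an assumption for illustration only). It verifies that $\rho+\varrho=1$ pointwise and prints both functions:

```python
# Example 20 on C with n = 1 and ||x|| = |x| as a stand-in norm
def rho(norm: float, t: float) -> float:
    return t / (t + norm)

def varrho(norm: float, t: float) -> float:
    return norm / (t + norm)

for norm in (0.0, 0.5, 2.0):
    for t in (0.1, 1.0, 10.0):
        assert abs(rho(norm, t) + varrho(norm, t) - 1.0) < 1e-12
        print(f"||x||={norm}, t={t}: rho={rho(norm, t):.4f}, "
              f"varrho={varrho(norm, t):.4f}")
# rho = 1 exactly when the norm vanishes, matching condition (3) of Definition 19.
```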
Proof.
For all $t,s\in(0,\infty)$, we have the following.
(1) Evidently, $\rho_{x_1,x_2,\dots,x_n}(t)+\varrho_{x_1,x_2,\dots,x_n}(t)\le 1$.
(2) Visibly, $\rho_{x_1,x_2,\dots,x_n}(t)\ge 0$.
(3) And
$$\rho_{x_1,\dots,x_n}(t)=1 \iff \frac{t}{t+\|x_1,\dots,x_n\|}=1 \iff t=t+\|x_1,\dots,x_n\| \iff \|x_1,\dots,x_n\|=0 \iff x_1,\dots,x_n \ \text{are linearly dependent}.\tag{10}$$
(4) Since $\|x_1,\dots,x_n\|$ is invariant under any permutation of $(x_1,\dots,x_n)$, $\rho_{x_1,\dots,x_n}(t)$ is invariant under any permutation of $(x_1,\dots,x_n)$.
(5) Consider
$$\rho_{x_1,\dots,x_n}\!\left(\frac{t}{|\alpha|}\right)=\frac{t/|\alpha|}{t/|\alpha|+\|x_1,\dots,x_n\|}=\frac{t}{t+|\alpha|\,\|x_1,\dots,x_n\|}=\frac{t}{t+\|\alpha x_1,\dots,x_n\|}=\rho_{\alpha x_1,\dots,x_n}(t).\tag{11}$$
(6) Suppose that, without loss of generality,
$$\rho_{x_1,\dots,x_n'}(t)\le\rho_{x_1,\dots,x_n}(s).\tag{12}$$
Then
$$\frac{t}{t+\|x_1,\dots,x_n'\|}\le\frac{s}{s+\|x_1,\dots,x_n\|} \implies t\,(s+\|x_1,\dots,x_n\|)\le s\,(t+\|x_1,\dots,x_n'\|) \implies t\,\|x_1,\dots,x_n\|\le s\,\|x_1,\dots,x_n'\| \implies \|x_1,\dots,x_n\|\le\frac{s}{t}\,\|x_1,\dots,x_n'\|.$$
As a result,
$$\|x_1,\dots,x_n\|+\|x_1,\dots,x_n'\|\le\frac{s}{t}\,\|x_1,\dots,x_n'\|+\|x_1,\dots,x_n'\|=\frac{s+t}{t}\,\|x_1,\dots,x_n'\|.\tag{13}$$
However,
$$\|x_1,\dots,x_n+x_n'\|\le\|x_1,\dots,x_n\|+\|x_1,\dots,x_n'\|\le\frac{s+t}{t}\,\|x_1,\dots,x_n'\|,$$
so
$$\frac{\|x_1,\dots,x_n+x_n'\|}{s+t}\le\frac{\|x_1,\dots,x_n'\|}{t} \implies \frac{s+t+\|x_1,\dots,x_n+x_n'\|}{s+t}\le\frac{t+\|x_1,\dots,x_n'\|}{t} \implies \frac{s+t}{s+t+\|x_1,\dots,x_n+x_n'\|}\ge\frac{t}{t+\|x_1,\dots,x_n'\|},$$
and therefore
$$\rho_{x_1,\dots,x_n+x_n'}(s+t)\ge\min\left\{\rho_{x_1,\dots,x_n}(s),\ \rho_{x_1,\dots,x_n'}(t)\right\}.\tag{14}$$
(7) Evidently, $\rho_{x_1,\dots,x_n}(\cdot):(0,\infty)\to[0,1]$ is continuous.
(8) $\varrho_{x_1,\dots,x_n}(t)\ge 0$.
(9) And
$$\varrho_{x_1,\dots,x_n}(t)=0 \iff \frac{\|x_1,\dots,x_n\|}{t+\|x_1,\dots,x_n\|}=0 \iff \|x_1,\dots,x_n\|=0 \iff x_1,\dots,x_n \ \text{are linearly dependent}.\tag{15}$$
(10) As $\|x_1,\dots,x_n\|$ is invariant under any permutation of $(x_1,\dots,x_n)$, $\varrho_{x_1,\dots,x_n}(t)$ is invariant under any permutation of $(x_1,\dots,x_n)$.
(11) Consider
$$\varrho_{\alpha x_1,\dots,x_n}(t)=\frac{\|\alpha x_1,\dots,x_n\|}{t+\|\alpha x_1,\dots,x_n\|}=\frac{|\alpha|\,\|x_1,\dots,x_n\|}{t+|\alpha|\,\|x_1,\dots,x_n\|}=\frac{\|x_1,\dots,x_n\|}{t/|\alpha|+\|x_1,\dots,x_n\|}=\varrho_{x_1,\dots,x_n}\!\left(\frac{t}{|\alpha|}\right).\tag{16}$$
(12) Presume, without loss of generality, that
$$\varrho_{x_1,\dots,x_n}(s)\le\varrho_{x_1,\dots,x_n'}(t).\tag{17}$$
Then
$$\frac{\|x_1,\dots,x_n\|}{s+\|x_1,\dots,x_n\|}\le\frac{\|x_1,\dots,x_n'\|}{t+\|x_1,\dots,x_n'\|} \implies \|x_1,\dots,x_n\|\,(t+\|x_1,\dots,x_n'\|)\le\|x_1,\dots,x_n'\|\,(s+\|x_1,\dots,x_n\|) \implies t\,\|x_1,\dots,x_n\|\le s\,\|x_1,\dots,x_n'\|.$$
Now,
$$\frac{\|x_1,\dots,x_n+x_n'\|}{s+t+\|x_1,\dots,x_n+x_n'\|}-\frac{\|x_1,\dots,x_n'\|}{t+\|x_1,\dots,x_n'\|}\le\frac{\|x_1,\dots,x_n\|+\|x_1,\dots,x_n'\|}{s+t+\|x_1,\dots,x_n+x_n'\|}-\frac{\|x_1,\dots,x_n'\|}{t+\|x_1,\dots,x_n'\|}=\frac{t\,\|x_1,\dots,x_n\|-s\,\|x_1,\dots,x_n'\|}{\left(s+t+\|x_1,\dots,x_n+x_n'\|\right)\left(t+\|x_1,\dots,x_n'\|\right)}.\tag{18}$$
By (17),
$$\frac{\|x_1,\dots,x_n+x_n'\|}{s+t+\|x_1,\dots,x_n+x_n'\|}\le\frac{\|x_1,\dots,x_n'\|}{t+\|x_1,\dots,x_n'\|}.\tag{19}$$
In the same way, the corresponding bound holds with the roles of the two terms interchanged, whence
$$\varrho_{x_1,\dots,x_n+x_n'}(s+t)\le\max\left\{\varrho_{x_1,\dots,x_n}(s),\ \varrho_{x_1,\dots,x_n'}(t)\right\}.\tag{20}$$
(13) Clearly, $\varrho_{x_1,\dots,x_n}(\cdot):(0,\infty)\to[0,1]$ is continuous.
Remark 21.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS. Since $\star$ is a continuous $t$-norm and $\diamond$ is a continuous $t$-conorm, the system of $(r,t)$-neighborhoods of $\theta$ (the null vector in $X$) with respect to $t$,
$$\{B(\theta,r,t) : t>0,\ 0<r<1\},\tag{21}$$
where
$$B(\theta,r,t)=\left\{y\in X : \rho_{y,x_1,\dots,x_{n-1}}(t)>1-r,\ \varrho_{y,x_1,\dots,x_{n-1}}(t)<r,\ \text{for } t>0\right\},\tag{22}$$
defines a first countable Hausdorff topology on $X$, called the $(\rho,\varrho)_n$-topology. Hence, the $(\rho,\varrho)_n$-topology can be completely specified by means of $(\rho,\varrho)_n$-convergence of sequences.
Definition 22.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS, and let $r\in(0,1)$ and $x\in X$. The set
$$B(x,r,t)=\left\{y\in X : \rho_{y-x,x_1,\dots,x_{n-1}}(t)>1-r,\ \varrho_{y-x,x_1,\dots,x_{n-1}}(t)<r,\ \text{for } t>0\right\}\tag{23}$$
is called the open ball with center $x$ and radius $r$ with respect to $t$.
Definition 23.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS. A sequence $x=(x_k)$ in $X$ is $(\rho,\varrho)_n$-convergent to $\ell\in X$ with respect to the generalized random $n$-norm $(\rho,\varrho)_n$ if, for $r\in(0,1)$ and every $t>0$, there exists $k_0$ such that
$$\rho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)\ge 1-r,\qquad \varrho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)\le r\qquad \forall k\ge k_0.\tag{24}$$
In this case, one writes $(\rho,\varrho)_n\text{-}\lim x=\ell$.
Theorem 24.
Let $(X,\|\cdot,\dots,\cdot\|)$ be an $n$-normed linear space. Put $a\star b=\min\{a,b\}$ and $a\diamond b=\max\{a,b\}$ for all $a,b\in[0,1]$, $\rho_{x_1,\dots,x_n}(t)=t/(t+\|x_1,\dots,x_n\|)$, and $\varrho_{x_1,\dots,x_n}(t)=\|x_1,\dots,x_n\|/(t+\|x_1,\dots,x_n\|)$. Then, for every sequence $x=(x_k)$ and nonzero $x_1,x_2,\dots,x_{n-1}\in X$, one has
$$\lim_{k\to\infty}\|x_1,x_2,\dots,x_{n-1},x_k-\ell\|=0 \implies (\rho,\varrho)_n\text{-}\lim x_k=\ell.\tag{25}$$
Proof.
Assume that $\lim_{k\to\infty}\|x_1,x_2,\dots,x_{n-1},x_k-\ell\|=0$. Then, for every $\varepsilon>0$ and for every $x_1,x_2,\dots,x_{n-1}\in X$, there exists a positive integer $k_0$ such that
$$\|x_1,x_2,\dots,x_{n-1},x_k-\ell\|<\varepsilon\quad\text{for each } k\ge k_0,\tag{26}$$
and, therefore, for any given $t>0$,
$$\frac{t+\|x_1,x_2,\dots,x_{n-1},x_k-\ell\|}{t}<\frac{t+\varepsilon}{t},\tag{27}$$
which is the same as
$$\frac{t}{t+\|x_1,x_2,\dots,x_{n-1},x_k-\ell\|}>\frac{t}{t+\varepsilon}=1-\frac{\varepsilon}{t+\varepsilon}.\tag{28}$$
By letting $r=\varepsilon/(t+\varepsilon)\in(0,1)$, we have
$$\rho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)\ge 1-r\quad\forall k\ge k_0.\tag{29}$$
And since $\rho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)=1-\varrho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)$, we have
$$\varrho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)\le r\quad\forall k\ge k_0.\tag{30}$$
This means $(\rho,\varrho)_n\text{-}\lim x_k=\ell$.
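A quick numerical illustration of Theorem 24 (our sketch, again taking $n=1$ with $|x|$ as a stand-in norm, an assumption for illustration): if $\|x_k-\ell\|\to 0$, then $\rho_{x_k-\ell}(t)\to 1$ and $\varrho_{x_k-\ell}(t)\to 0$.

```python
ell, t = 3.0, 1.0
for k in (1, 10, 100, 1000):
    norm = abs((ell + 1.0 / k) - ell)          # ||x_k - ell|| = 1/k
    rho = t / (t + norm)
    varrho = norm / (t + norm)
    print(k, round(rho, 6), round(varrho, 6))  # rho -> 1, varrho -> 0
```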
## 4. $I_L(\rho,\varrho)_n$-Cauchy and Convergence in GRnNLS
Remark 25.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS. Since $\star$ is a continuous $t$-norm and $\diamond$ is a continuous $t$-conorm, the system of $(r,t)_n^L$-neighborhoods of $\theta$ with respect to $t$,
$$\{B_L(\theta,r,t) : t>0,\ 0<r<1\},\tag{31}$$
where
$$B_L(\theta,r,t)=\left\{y\in X : \frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{y_k,x_1,\dots,x_{n-1}}(t)>1-r,\ \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{y_k,x_1,\dots,x_{n-1}}(t)<r,\ \text{for } t>0\right\},\tag{32}$$
determines a first countable Hausdorff topology on $X$, called the $(\rho,\varrho)_n^L$-topology. Thus, the $(\rho,\varrho)_n^L$-topology can be completely specified by means of $(\rho,\varrho)_n^L$-convergence of sequences.
Definition 26.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS, and let $r\in(0,1)$ and $x\in X$. The set
$$B_L(x,r,t)=\left\{y\in X : \frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{y_k-x,x_1,\dots,x_{n-1}}(t)>1-r,\ \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{y_k-x,x_1,\dots,x_{n-1}}(t)<r,\ \text{for } t>0\right\}\tag{33}$$
is called the open ball with center $x$ and radius $r$ with respect to $t$.
Definition 27.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS. A sequence $x=(x_k)$ in $X$ is $L$-convergent to $\ell\in X$ with respect to the generalized random $n$-norm $(\rho,\varrho)_n$ if, for $\varepsilon\in(0,1)$ and every $t>0$, there exists $j_0$ such that
$$\frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)\ge 1-\varepsilon,\qquad \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)\le\varepsilon\qquad\forall j\ge j_0.\tag{34}$$
In this case, one writes $(\rho,\varrho)_n^L\text{-}\lim x=\ell$.
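The block averages in Definition 27 are easy to examine numerically. The following sketch (ours; it reuses the assumed lacunary sequence $i_j=2^j$, $n=1$, $|x|$ as a stand-in norm, and Example 20's $\rho$) shows $L$-convergence of $x_k=1/k$ to $0$:

```python
# (1/h_j) * sum_{k in Lambda_j} rho_{x_k - ell}(t) for x_k = 1/k, ell = 0
i = [0] + [2 ** j for j in range(1, 15)]
t = 1.0

for j in range(1, len(i)):
    h_j = i[j] - i[j - 1]
    avg = sum(t / (t + 1.0 / k) for k in range(i[j - 1] + 1, i[j] + 1)) / h_j
    print(j, round(avg, 6))  # the averages increase toward 1
```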
Definition 28.
Let $I\subset 2^{\mathbb{N}}$ and let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS. A sequence $x=(x_k)$ of elements in $X$ is said to be $I_L$-convergent to $\ell\in X$ with respect to the generalized random $n$-norm $(\rho,\varrho)_n$ if, for every $\varepsilon\in(0,1)$ and $t>0$, the set
$$\left\{j\in\mathbb{N} : \frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)\le 1-\varepsilon \ \text{or}\ \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)\ge\varepsilon\right\}\in I.\tag{35}$$
Then, one writes $I_L(\rho,\varrho)_n\text{-}\lim x=\ell$.
Example 29.
Let $(\mathbb{C},\|\cdot,\dots,\cdot\|)$ be an $n$-normed linear space; take $a\star b=ab$ and $a\diamond b=\min\{a+b,1\}$ for all $a,b\in[0,1]$. For all $x\in\mathbb{C}$ and every $t>0$, consider
$$\rho_{x_1,\dots,x_n}(t)=\frac{t}{t+\|x_1,\dots,x_n\|},\qquad \varrho_{x_1,\dots,x_n}(t)=\frac{\|x_1,\dots,x_n\|}{t+\|x_1,\dots,x_n\|}.\tag{36}$$
Then, $(\mathbb{C},\rho,\varrho,\star,\diamond)$ is GRnNLS. If we take $I=I_\delta$, define a sequence $x=(x_k)$ as follows:
$$x_k=\begin{cases}1, & \text{if } k=i^8,\ i\in\mathbb{N},\\ 0, & \text{otherwise}.\end{cases}\tag{37}$$
Hence, for every $\varepsilon\in(0,1)$ and $t>0$, we have
$$\delta\left(\left\{j\in\mathbb{N} : \frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{x_k,x_1,\dots,x_{n-1}}(t)\le 1-\varepsilon \ \text{or}\ \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{x_k,x_1,\dots,x_{n-1}}(t)\ge\varepsilon\right\}\right)=0.\tag{38}$$
So $I_L(\rho,\varrho)_n\text{-}\lim x=0$.
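A numerical companion to Example 29 (our sketch; the lacunary sequence $i_j=2^j$ is again an illustrative assumption): the eighth powers are so sparse that only a handful of blocks $\Lambda_j$ carry any mass, and even there the block average of $x$ shrinks.

```python
# x_k = 1 when k = m^8, else 0; count eighth powers falling in each Lambda_j
i = [0] + [2 ** j for j in range(1, 25)]
eighth_powers = [m ** 8 for m in range(1, 10)]

for j in range(1, len(i)):
    h_j = i[j] - i[j - 1]
    mass = sum(1 for p in eighth_powers if i[j - 1] < p <= i[j]) / h_j
    if mass > 0:
        print(j, mass)  # only rare blocks are nonzero, and their averages vanish
```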
Definition 30.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS. A sequence $x=(x_k)$ in $X$ is said to be a Cauchy sequence with respect to the generalized random $n$-norm $(\rho,\varrho)_n^L$ if, for every $t>0$ and $\varepsilon\in(0,1)$, there exists $j_0\in\mathbb{N}$ satisfying
$$\frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{x_k-x_m,x_1,\dots,x_{n-1}}(t)>1-\varepsilon,\qquad \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{x_k-x_m,x_1,\dots,x_{n-1}}(t)<\varepsilon\qquad\forall j,m\ge j_0.\tag{39}$$
Definition 31.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS. A sequence $x=(x_k)$ in $X$ is said to be an $I_L$-Cauchy sequence with respect to the generalized random $n$-norm $(\rho,\varrho)_n^L$ if, for every $t>0$ and $\varepsilon\in(0,1)$, there exists $j_0\in\mathbb{N}$ satisfying
$$\left\{j\in\mathbb{N} : \frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{x_k-x_m,x_1,\dots,x_{n-1}}(t)>1-\varepsilon,\ \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{x_k-x_m,x_1,\dots,x_{n-1}}(t)<\varepsilon\right\}\in F(I).\tag{40}$$
Theorem 32.
Let $I\subset 2^{\mathbb{N}}$, let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS, and let $x=(x_k)$ be a sequence in $X$; then, for every $\varepsilon>0$ and $t>0$, one has the following:
(1) $I_L(\rho,\varrho)_n\text{-}\lim x=\ell$,
(2) $\left\{j\in\mathbb{N} : (1/h_j)\sum_{k\in\Lambda_j}\rho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)\le 1-\varepsilon\right\}\in I$ and $\left\{j\in\mathbb{N} : (1/h_j)\sum_{k\in\Lambda_j}\varrho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)\ge\varepsilon\right\}\in I$,
(3) $\left\{j\in\mathbb{N} : (1/h_j)\sum_{k\in\Lambda_j}\rho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)\ge 1-\varepsilon \text{ and } (1/h_j)\sum_{k\in\Lambda_j}\varrho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)\le\varepsilon\right\}\in F(I)$,
(4) $\left\{j\in\mathbb{N} : (1/h_j)\sum_{k\in\Lambda_j}\rho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)\ge 1-\varepsilon\right\}\in F(I)$ and $\left\{j\in\mathbb{N} : (1/h_j)\sum_{k\in\Lambda_j}\varrho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)\le\varepsilon\right\}\in F(I)$,
(5) $I_L\text{-}\lim_k \rho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)=1$ and $I_L\text{-}\lim_k \varrho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)=0$.

The proof is easy, so it is omitted.
Theorem 33.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS and let $x=(x_k)$ be a sequence in $X$. If $(\rho,\varrho)_n^L\text{-}\lim x$ exists, then it is unique.
Proof.
Suppose that $(\rho,\varrho)_n^L\text{-}\lim x=\ell_1$ and $(\rho,\varrho)_n^L\text{-}\lim x=\ell_2$ with $\ell_1\neq\ell_2$. Given $\varepsilon\in(0,1)$, choose $\lambda\in(0,1)$ such that $(1-\lambda)\star(1-\lambda)>1-\varepsilon$ and $\lambda\diamond\lambda<\varepsilon$. Then, for each $t>0$, there exists $j_1\in\mathbb{N}$ such that
$$\frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{x_k-\ell_1,x_1,\dots,x_{n-1}}(t)>1-\varepsilon,\qquad \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{x_k-\ell_1,x_1,\dots,x_{n-1}}(t)<\varepsilon\qquad\forall j\ge j_1.\tag{41}$$
Also, there exists $j_2\in\mathbb{N}$ such that
$$\frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{x_k-\ell_2,x_1,\dots,x_{n-1}}(t)>1-\varepsilon,\qquad \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{x_k-\ell_2,x_1,\dots,x_{n-1}}(t)<\varepsilon\qquad\forall j\ge j_2.\tag{42}$$
Now, consider $j_0=\max\{j_1,j_2\}$. Then, for $j\ge j_0$, we find an $s\in\mathbb{N}$ such that
$$\rho_{x_s-\ell_1,x_1,\dots,x_{n-1}}\!\left(\frac{t}{2}\right)>\frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{x_k-\ell_1,x_1,\dots,x_{n-1}}\!\left(\frac{t}{2}\right)\ge 1-\lambda,\qquad \rho_{x_s-\ell_2,x_1,\dots,x_{n-1}}\!\left(\frac{t}{2}\right)>\frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{x_k-\ell_2,x_1,\dots,x_{n-1}}\!\left(\frac{t}{2}\right)\ge 1-\lambda.\tag{43}$$
Then, we get
$$\rho_{\ell_1-\ell_2,x_1,\dots,x_{n-1}}(t)\ge\rho_{x_s-\ell_1,x_1,\dots,x_{n-1}}\!\left(\frac{t}{2}\right)\star\rho_{x_s-\ell_2,x_1,\dots,x_{n-1}}\!\left(\frac{t}{2}\right)>(1-\lambda)\star(1-\lambda)>1-\varepsilon.\tag{44}$$
Since $\varepsilon>0$ is arbitrary, we have $\rho_{\ell_1-\ell_2,x_1,\dots,x_{n-1}}(t)=1$ for all $t>0$. By using a similar technique, it can be proved that $\varrho_{\ell_1-\ell_2,x_1,\dots,x_{n-1}}(t)=0$ for all $t>0$; hence, $\ell_1=\ell_2$.
Theorem 34.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS and let $x=(x_k)$ be a sequence in $X$. Then, one has
$$(\rho,\varrho)_n^L\text{-}\lim x=\ell \implies I_L(\rho,\varrho)_n\text{-}\lim x=\ell.\tag{45}$$
Proof.
Let $(\rho,\varrho)_n^L\text{-}\lim x=\ell$; then, for all $t>0$ and given $\varepsilon\in(0,1)$, there exists $j_0\in\mathbb{N}$ such that
$$\frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)>1-\varepsilon,\qquad \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)<\varepsilon\qquad\forall j\ge j_0.\tag{46}$$
Since $I$ is an admissible ideal and
$$G=\left\{j\in\mathbb{N} : \frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)\le 1-\varepsilon \ \text{or}\ \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)\ge\varepsilon\right\}\subseteq\{1,2,3,\dots,j_0-1\},\tag{47}$$
we get $G\in I$. So $I_L(\rho,\varrho)_n\text{-}\lim x=\ell$.
Theorem 35.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS and let $x=(x_k)$ be a sequence in $X$. If $I_L(\rho,\varrho)_n\text{-}\lim x$ exists, then it is unique.

The proof follows by using Theorems 33 and 34.
Theorem 36.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS and let $x=(x_k)$ be a sequence in $X$. If $(\rho,\varrho)_n^L\text{-}\lim x=\ell$, then there exists a subsequence $(x_{m_k})$ of $x=(x_k)$ such that $(\rho,\varrho)_n\text{-}\lim x_{m_k}=\ell$.
Proof.
Let $(\rho,\varrho)_n^L\text{-}\lim x=\ell$. Then, for all $t>0$ and given $\varepsilon\in(0,1)$, there exists $j_0\in\mathbb{N}$ such that
$$\frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)>1-\varepsilon,\qquad \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)<\varepsilon\qquad\forall j\ge j_0.\tag{48}$$
Observably, for each $j\ge j_0$, we can take an $m_k\in\Lambda_j$ such that
$$\rho_{x_{m_k}-\ell,x_1,\dots,x_{n-1}}(t)>\frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)>1-\varepsilon,\qquad \varrho_{x_{m_k}-\ell,x_1,\dots,x_{n-1}}(t)<\frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)<\varepsilon.\tag{49}$$
It follows that $(\rho,\varrho)_n\text{-}\lim x_{m_k}=\ell$.

We state the following two results without proofs, since they can be easily verified.
Theorem 37.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS. If a sequence $x=(x_k)$ in $X$ is a Cauchy sequence with respect to the generalized random $n$-norm $(\rho,\varrho)_n^L$, then it is an $I_L$-Cauchy sequence with respect to the same norm.
Theorem 38.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS. If a sequence $x=(x_k)$ in $X$ is a Cauchy sequence with respect to the generalized random $n$-norm $(\rho,\varrho)_n^L$, then there is a subsequence of $x=(x_k)$ which is an ordinary Cauchy sequence with respect to the norm $(\rho,\varrho)_n$.
## 5. $I_L$-Limit Point, $I_L$-Cluster Point, and $I_L$-Cauchy Sequence in GRnNLS
Definition 39.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS and let $x=(x_k)$ be a sequence in $X$. Then one has the following.
(1) An element $\ell\in X$ is said to be an $I_L$-limit point of $x=(x_k)$ if there is a set
$$\mathbb{M}=\{m_1<m_2<\dots<m_k<\dots\}\subset\mathbb{N}\quad\text{with}\quad \mathbb{M}'=\{j\in\mathbb{N} : m_k\in\Lambda_j\}\in F(I),\qquad (\rho,\varrho)_n^L\text{-}\lim x_{m_k}=\ell.\tag{50}$$
(2) An element $\ell\in X$ is said to be an $I_L$-cluster point of $x=(x_k)$ if, for every $t>0$ and $\varepsilon\in(0,1)$, one has
$$\left\{j\in\mathbb{N} : \frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)>1-\varepsilon,\ \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)<\varepsilon\right\}\in F(I).\tag{51}$$

By $\bigwedge(\rho,\varrho)_n^L(x)$ we denote the set of all $I_L$-limit points and by $\bigvee(\rho,\varrho)_n^L(x)$ the set of all $I_L$-cluster points in $X$, respectively.
Definition 40.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS. A sequence $x=(x_k)$ in $X$ is said to be an $I_L^*$-Cauchy sequence with respect to the generalized random $n$-norm $(\rho,\varrho)_n^L$ if
(i) there exists a set $\mathbb{M}=\{m_1<m_2<\dots<m_k<\dots\}\subset\mathbb{N}$ such that $\mathbb{M}'=\{j\in\mathbb{N} : m_k\in\Lambda_j\}\in F(I)$;
(ii) the subsequence $(x_{m_k})$ of $x=(x_k)$ is a Cauchy sequence with respect to the generalized random $n$-norm $(\rho,\varrho)_n^L$.
Theorem 41.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS. For each sequence $x=(x_k)$ in $X$, one has
$$\bigwedge(\rho,\varrho)_n^L(x)\subset\bigvee(\rho,\varrho)_n^L(x).\tag{52}$$
Proof.
Let $\ell\in\bigwedge(\rho,\varrho)_n^L(x)$; then there exists a set $\mathbb{M}\subset\mathbb{N}$ with $\mathbb{M}'\in F(I)$, where $\mathbb{M}$ and $\mathbb{M}'$ are as in Definition 39, satisfying $(\rho,\varrho)_n^L\text{-}\lim x_{m_k}=\ell$. Thus, for every $t>0$ and $\varepsilon\in(0,1)$, there exists $j_0\in\mathbb{N}$ such that
$$\frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{x_{m_k}-\ell,x_1,\dots,x_{n-1}}(t)>1-\varepsilon,\qquad \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{x_{m_k}-\ell,x_1,\dots,x_{n-1}}(t)<\varepsilon\qquad\forall j\ge j_0.\tag{53}$$
Thus, we have
$$G=\left\{j\in\mathbb{N} : \frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{x_{m_k}-\ell,x_1,\dots,x_{n-1}}(t)>1-\varepsilon,\ \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{x_{m_k}-\ell,x_1,\dots,x_{n-1}}(t)<\varepsilon\right\}\supseteq\mathbb{M}'\setminus\{m_1,m_2,\dots,m_{j_0}\}.\tag{54}$$
Since $I$ is an admissible ideal, we have
$$\mathbb{M}'\setminus\{m_1,m_2,\dots,m_{j_0}\}\in F(I)\quad\text{and so}\quad G\in F(I).\tag{55}$$
Hence $\ell\in\bigvee(\rho,\varrho)_n^L(x)$.
Theorem 42.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS. For each sequence $x=(x_k)$ in $X$, the set $\bigwedge(\rho,\varrho)_n^L(x)$ is a closed set in $X$ with respect to the usual topology induced by the generalized random $n$-norm $(\rho,\varrho)_n^L$.
Proof.
Let $y\in\overline{\bigwedge(\rho,\varrho)_n^L(x)}$. Take $t>0$ and $\varepsilon\in(0,1)$. Then, there exists $\ell_0\in\bigwedge(\rho,\varrho)_n^L(x)\cap B_L(y,\varepsilon,t)$. Choose $\delta>0$ such that $B_L(\ell_0,\delta,t)\subset B_L(y,\varepsilon,t)$. We have
$$G=\left\{j\in\mathbb{N} : \frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{x_k-y,x_1,\dots,x_{n-1}}(t)>1-\varepsilon,\ \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{x_k-y,x_1,\dots,x_{n-1}}(t)<\varepsilon\right\}\supseteq\left\{j\in\mathbb{N} : \frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{x_k-\ell_0,x_1,\dots,x_{n-1}}(t)>1-\delta,\ \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{x_k-\ell_0,x_1,\dots,x_{n-1}}(t)<\delta\right\}=H.\tag{56}$$
Thus, $H\in F(I)$ and so $G\in F(I)$. Hence, $y\in\bigwedge(\rho,\varrho)_n^L(x)$.
Theorem 43.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS and let $x=(x_k)$ be a sequence in $X$. Then, the following statements are equivalent:
(1) $\ell$ is an $I_L$-limit point of $x$;
(2) there exist two sequences $y$ and $z$ in $X$ such that
$$x=y+z,\qquad (\rho,\varrho)_n^L\text{-}\lim y=\ell,\qquad \{j\in\mathbb{N} : k\in\Lambda_j,\ z_k\neq\theta\}\in I,\tag{57}$$
where $\theta$ is the zero element in $X$.
Proof.
Let (1) hold; then, there exist sets $\mathbb{M}$ and $\mathbb{M}'$ as in Definition 39 such that
$$\mathbb{M}'\notin I,\qquad (\rho,\varrho)_n^L\text{-}\lim x_{m_k}=\ell.\tag{58}$$
Define the sequences $y$ and $z$ as follows:
$$y_k=\begin{cases}x_k, & \text{if } k\in\Lambda_j,\ j\in\mathbb{M}',\\ \ell, & \text{otherwise},\end{cases}\qquad z_k=\begin{cases}\theta, & \text{if } k\in\Lambda_j,\ j\in\mathbb{M}',\\ x_k-\ell, & \text{otherwise}.\end{cases}\tag{59}$$
Consider the case $k\in\Lambda_j$ such that $j\in\mathbb{N}-\mathbb{M}'$. Then, for each $\varepsilon\in(0,1)$ and $t>0$, we get
$$\rho_{y_k-\ell,x_1,\dots,x_{n-1}}(t)=1>1-\varepsilon,\qquad \varrho_{y_k-\ell,x_1,\dots,x_{n-1}}(t)=0<\varepsilon.\tag{60}$$
Thus, in this case,
$$\frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{y_k-\ell,x_1,\dots,x_{n-1}}(t)=1>1-\varepsilon,\qquad \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{y_k-\ell,x_1,\dots,x_{n-1}}(t)=0<\varepsilon.\tag{61}$$
For that, $(\rho,\varrho)_n^L\text{-}\lim y=\ell$. Now, we have
$$\{j\in\mathbb{N} : k\in\Lambda_j,\ z_k\neq\theta\}\subset\mathbb{N}-\mathbb{M}'\quad\text{and so}\quad \{j\in\mathbb{N} : k\in\Lambda_j,\ z_k\neq\theta\}\in I.\tag{62}$$
Now, suppose that (2) holds. Let $\mathbb{M}'=\{j\in\mathbb{N} : k\in\Lambda_j,\ z_k=\theta\}$. Then, obviously $\mathbb{M}'\in F(I)$ and so it is an infinite set. Form the set
$$\mathbb{M}=\{m_1<m_2<\dots<m_k<\dots\}\subset\mathbb{N}\quad\text{such that}\quad m_k\in\Lambda_j,\ z_{m_k}=\theta.\tag{63}$$
Since $x_{m_k}=y_{m_k}$ and $(\rho,\varrho)_n^L\text{-}\lim y=\ell$, we find that $(\rho,\varrho)_n^L\text{-}\lim x_{m_k}=\ell$. This completes the proof.
Theorem 44.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS and let $x=(x_k)$ be a sequence in $X$. Let $I$ be a nontrivial ideal. If there is an $I_L(\rho,\varrho)_n$-convergent sequence $y=(y_k)$ in $X$ such that $\{k\in\mathbb{N} : y_k\neq x_k\}\in I$, then $x$ is also $I_L(\rho,\varrho)_n$-convergent.
Proof.
Suppose that $\{k\in\mathbb{N} : y_k\neq x_k\}\in I$ and $I_L(\rho,\varrho)_n\text{-}\lim y=\ell$. Then, for every $\varepsilon\in(0,1)$ and $t>0$, the set
$$\left\{j\in\mathbb{N} : \frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{y_k-\ell,x_1,\dots,x_{n-1}}(t)\le 1-\varepsilon \ \text{or}\ \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{y_k-\ell,x_1,\dots,x_{n-1}}(t)\ge\varepsilon\right\}\in I.\tag{64}$$
For every $0<\varepsilon<1$ and $t>0$, we have
$$\left\{j\in\mathbb{N} : \frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)\le 1-\varepsilon \ \text{or}\ \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)\ge\varepsilon\right\}\subseteq\{k\in\mathbb{N} : y_k\neq x_k\}\cup\left\{j\in\mathbb{N} : \frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{y_k-\ell,x_1,\dots,x_{n-1}}(t)\le 1-\varepsilon \ \text{or}\ \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{y_k-\ell,x_1,\dots,x_{n-1}}(t)\ge\varepsilon\right\}.\tag{65}$$
As both of the sets on the right-hand side are in $I$, we have
$$\left\{j\in\mathbb{N} : \frac{1}{h_j}\sum_{k\in\Lambda_j}\rho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)\le 1-\varepsilon \ \text{or}\ \frac{1}{h_j}\sum_{k\in\Lambda_j}\varrho_{x_k-\ell,x_1,\dots,x_{n-1}}(t)\ge\varepsilon\right\}\in I,\tag{66}$$
and the proof of the theorem follows.
The proof of the following result can be easily obtained from the definitions.

Theorem 45.
Let $(X,\rho,\varrho,\star,\diamond)$ be GRnNLS. If a sequence $x=(x_k)$ in $X$ is an $I_L^*$-Cauchy sequence with respect to the generalized random $n$-norm $(\rho,\varrho)_n^L$, then it is an $I_L$-Cauchy sequence also.
---
*Source: 101782-2014-04-24.xml* | 101782-2014-04-24_101782-2014-04-24.md | 37,136 | On Lacunary Mean Ideal Convergence in Generalized Randomn-Normed Spaces | Awad A. Bakery; Mustafa M. Mohammed | Abstract and Applied Analysis
(2014) | Mathematical Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2014/101782 | 101782-2014-04-24.xml | ---
## Abstract
An idealI is a hereditary and additive family of subsets of positive integers ℕ. In this paper, we will introduce the concept of generalized random n-normed space as an extension of random n-normed space. Also, we study the concept of lacunary mean (L)-ideal convergence and L-ideal Cauchy for sequences of complex numbers in the generalized random n-norm. We introduce I
L-limit points and I
L-cluster points. Furthermore, Cauchy and I
L-Cauchy sequences in this construction are given. Finally, we find relations among these concepts.
---
## Body
## 1. Introduction
The sets of natural numbers and complex numbers will be denoted by ℕ and ℂ, respectively. Fast [1] and Steinhaus [2] independently introduced the notion of statistical convergence for sequences of real numbers, which is a generalization of the concept of convergence. The concept of statistical convergence is a very valuable functional tool for studying the convergence problems of numerical sequences through the concept of density. Afterward, several generalizations and applications of this concept have been presented by different authors (see [3–6]). Kostyrko et al. [7] presented a generalization of the concept of statistical convergence with the help of an ideal I of subsets of the set of natural numbers ℕ, and more is studied in [8–11]. This concept of ideal convergence plays a fundamental role not only in pure mathematics but also in other branches of science concerning mathematics, mainly in information theory, computer science, dynamical systems, geographic information systems, and population modelling. Menger [12] generalized the metric axioms by associating a distribution function with each pair of points of a set. This system is called a probabilistic metric space. By using the concept of Menger, Šerstnev [13] introduced the concept of probabilistic normed spaces. It provides an important area into which many essential results of linear normed spaces can be generalized; see [14]. Later, Alsina et al. [15] presented a new definition of probabilistic normed space which includes the definition of Šerstnev as a special case. The concept of ideal convergence for single and double sequences of real numbers in probabilistic normed space was introduced and studied by Mursaleen and Mohiuddine [16]. Mursaleen and Alotaibi [17] studied the notion of ideal convergence for single and double sequences in random 2-normed spaces. For more details and related concepts, we refer to [18–26]. In [27, 28], Gähler introduced an elegant theory of 2-normed and n-normed spaces in the 1960s; we have studied these subjects and constructed some sequence spaces defined by ideal convergence in n-normed spaces [29, 30]. Another important alternative of statistical convergence is the notion of lacunary statistical convergence introduced by Fridy and Orhan [31]. Recently, Mohiuddine and Aiyub [4] studied lacunary statistical convergence by introducing the concept of Θ-statistical convergence in random 2-normed space. Their work can be considered as a particular generalization of statistical convergence. In [32], Mursaleen and Mohiuddine generalized the idea of lacunary statistical convergence with respect to the intuitionistic fuzzy normed space, and Debnath [33] investigated lacunary ideal convergence in intuitionistic fuzzy normed linear spaces. Also, lacunary statistically convergent double sequences in probabilistic normed space were studied by Mohiuddine and Savaş in [34]. Jebril and Dutta [35] introduced the concept of random n-normed space. In this paper, we firstly give some basic definitions and properties of random n-normed space in Section 2. In Section 3, we define a new and interesting notion of generalized random n-normed spaces; convergent sequences in it are introduced and we provide some results on it. In Section 4, we study lacunary mean (L)-ideal convergence and L-ideal Cauchy sequences of complex numbers in the generalized random n-norm. Finally, in Section 5, we introduce I_L-limit points and I_L-cluster points. Moreover, Cauchy and I_L-Cauchy sequences in this framework are given, and we find relations among these concepts.
## 2. Definitions and Preliminaries
For the reader’s convenience, we restate some definitions and results that will be used in this paper. The notion of statistical convergence depends on the density (asymptotic or natural) of subsets of ℕ.

Definition 1. A subset E of ℕ is said to have natural density δ(E) if

(1) δ(E) = lim_{n→∞} (1/n) |{k ≤ n : k ∈ E}| exists,

where |E| denotes the cardinality of the set E.
Definition 2. A sequence (x_k) is statistically convergent to ℓ if, for every ε > 0,

(2) δ({k ∈ ℕ : |x_k − ℓ| ≥ ε}) = 0.

In this case, ℓ is called the statistical limit of the sequence (x_k).
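Both definitions lend themselves to a direct numerical check. The following Python sketch (illustrative only; the function names and the truncation levels are our own choices, not part of the paper) approximates the natural density of the set of perfect squares and uses it to verify the statistical-convergence condition of Definition 2 for the indicator sequence of that set:

```python
from math import isqrt

def density_ratio(indicator, n):
    """Finite approximation of delta(E) = lim (1/n) |{k <= n : k in E}|."""
    return sum(1 for k in range(1, n + 1) if indicator(k)) / n

is_square = lambda k: isqrt(k) ** 2 == k    # E = {1, 4, 9, 16, ...}
x = lambda k: 1.0 if is_square(k) else 0.0  # x_k = 1 on E, 0 elsewhere

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, density_ratio(is_square, n))   # roughly 1/sqrt(n), tends to 0

# For any eps in (0, 1], the "bad set" {k : |x_k - 0| >= eps} is exactly E,
# and delta(E) = 0, so (x_k) is statistically convergent to 0 (Definition 2)
# even though x_k = 1 for infinitely many k.
```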
Definition 3. A nonempty family of sets I ⊆ 2^ℕ is said to be an ideal on ℕ if and only if (a) ϕ ∈ I; (b) for each A, B ∈ I one has A ∪ B ∈ I; (c) for each B ∈ I and A ⊂ B one has A ∈ I.

Definition 4. An ideal I is an admissible ideal if {x} ∈ I for each x ∈ ℕ.

Definition 5. An ideal I ⊆ 2^ℕ is said to be nontrivial if I ≠ ϕ and ℕ ∉ I.

Definition 6. A nonempty family of sets F ⊆ 2^ℕ is said to be a filter on ℕ if and only if (a) ϕ ∉ F; (b) for each A, B ∈ F one has A ∩ B ∈ F; (c) for each A ∈ F and B ⊃ A one has B ∈ F.

For each ideal I, there is a filter F(I) corresponding to I; that is, F(I) = {K ⊆ ℕ : ℕ − K ∈ I}.

Example 7. If we take I = I_f = {A ⊆ ℕ : A is a finite subset}, then I_f is a nontrivial admissible ideal of ℕ and the corresponding convergence coincides with the usual convergence.

Example 8. If we take I = I_δ = {A ⊆ ℕ : δ(A) = 0}, where δ(A) denotes the asymptotic density of the set A, then I_δ is a nontrivial admissible ideal of ℕ and the corresponding convergence coincides with statistical convergence.
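To make Definition 3 and Example 8 concrete, the sketch below (our own illustrative code; the tolerance and the truncation level are arbitrary, so this is a heuristic test rather than a proof) checks the I_δ membership criterion numerically and spot-checks the union axiom (b) on two density-zero sets:

```python
from math import isqrt

def icbrt(k):
    """Integer cube root (exact, avoids floating-point drift)."""
    r = round(k ** (1 / 3))
    while r ** 3 > k:
        r -= 1
    while (r + 1) ** 3 <= k:
        r += 1
    return r

def approx_density(A, n=10**6):
    """Finite stand-in for delta(A) from Example 8."""
    return sum(1 for k in range(1, n + 1) if A(k)) / n

def in_I_delta(A, n=10**6, tol=1e-2):
    """Heuristic membership test for I_delta = {A : delta(A) = 0}."""
    return approx_density(A, n) < tol

squares = lambda k: isqrt(k) ** 2 == k
cubes = lambda k: icbrt(k) ** 3 == k
union = lambda k: squares(k) or cubes(k)

# Axiom (b) of Definition 3: delta(A ∪ B) <= delta(A) + delta(B), so the
# union of two density-zero sets stays in I_delta.
print(in_I_delta(squares), in_I_delta(cubes), in_I_delta(union))  # True True True
```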
Definition 9. A sequence x = (x_k) is said to be I-convergent to a real number ℓ if

(3) {k ∈ ℕ : |x_k − ℓ| ≥ ε} ∈ I for every ε > 0.

In this case, we write I-lim x_k = ℓ.
Definition 10. By a lacunary sequence Θ = (i_j), j = 0, 1, 2, …, where i_0 = 0, one will mean an increasing sequence of nonnegative integers with i_j − i_{j−1} → ∞ as j → ∞, and h_j = i_j − i_{j−1}. The intervals determined by Θ will be denoted by Λ_j = (i_{j−1}, i_j].
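A standard concrete choice, not fixed by Definition 10, is the geometric sequence i_j = 2^j, for which h_j → ∞. The following sketch (our own notation) lists the first few intervals Λ_j:

```python
# Lacunary sequence Theta = (i_j) with i_0 = 0 and i_j = 2^j (Definition 10).
def i(j):
    return 0 if j == 0 else 2 ** j

for j in range(1, 6):
    h_j = i(j) - i(j - 1)                           # h_j -> infinity
    Lambda_j = list(range(i(j - 1) + 1, i(j) + 1))  # Lambda_j = (i_{j-1}, i_j]
    print(f"j={j}: h_j={h_j}, Lambda_j={Lambda_j}")
# j=1: h_1=2, {1, 2}; j=2: h_2=2, {3, 4}; j=3: h_3=4, {5, ..., 8}; thereafter
# h_j = 2^(j-1), so the interval lengths indeed tend to infinity.
```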
Definition 11. A sequence x = (x_k) is said to be lacunary (L)-statistically convergent to the number ℓ if, for every ε > 0, one has

(4) lim_{j→∞} (1/h_j) |{k ∈ Λ_j : |x_k − ℓ| ≥ ε}| = 0.
The notion of lacunary ideal convergence of real sequences was introduced by Tripathy et al. [36], and Hazarika [37, 38] introduced lacunary ideal convergent sequences of fuzzy real numbers and studied some of their properties.

Definition 12. Let I ⊂ 2^ℕ be a nontrivial ideal. A sequence x = (x_k) is said to be I_L-summable to a number ℓ if, for every ε > 0, the set

(5) {j ∈ ℕ : (1/h_j) Σ_{k∈Λ_j} |x_k − ℓ| ≥ ε} ∈ I.
Definition 13. Let n ∈ ℕ and let X be a linear space over the field K of dimension d, where d ≥ n ≥ 2 and K is the field of real or complex numbers. A real-valued function ‖·, …, ·‖ on X^n satisfying the following four conditions:

(1) ‖x_1, x_2, …, x_n‖ = 0 if and only if x_1, x_2, …, x_n are linearly dependent in X;
(2) ‖x_1, x_2, …, x_n‖ is invariant under permutation;
(3) ‖αx_1, x_2, …, x_n‖ = |α| ‖x_1, x_2, …, x_n‖ for any α ∈ K;
(4) ‖x + x′, x_2, …, x_n‖ ≤ ‖x, x_2, …, x_n‖ + ‖x′, x_2, …, x_n‖

is called an n-norm on X, and the pair (X; ‖·, …, ·‖) is called an n-normed space over the field K.
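Definition 13 is abstract, but a classical instance (the Euclidean n-norm on an inner product space, which we add here purely for illustration; it is not discussed in this paper) is ‖x_1, …, x_n‖ = sqrt(det(⟨x_i, x_j⟩)), the volume of the parallelepiped spanned by the vectors. A NumPy sketch checking conditions (1)–(3) on small examples:

```python
import numpy as np

def n_norm(*vectors):
    """Euclidean n-norm: square root of the Gram determinant of the vectors."""
    V = np.array(vectors, dtype=float)  # rows are x_1, ..., x_n
    gram = V @ V.T                      # gram[i, j] = <x_i, x_j>
    return float(np.sqrt(max(np.linalg.det(gram), 0.0)))

x1, x2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])
print(n_norm(x1, x2))      # 2.0, area of the spanned rectangle
print(n_norm(x1, 3 * x1))  # 0.0: linearly dependent tuple (condition (1))
print(n_norm(x2, x1))      # 2.0: invariant under permutation (condition (2))
print(n_norm(5 * x1, x2))  # 10.0 = |5| * 2.0 (condition (3))
```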
Definition 14. A probability distribution function is a function F that is nondecreasing and left continuous on (0, ∞) such that F(0) = 0 and F(∞) = 1. The family of all probability distribution functions will be denoted by Δ⁺. The space Δ⁺ is partially ordered by the usual pointwise ordering of functions and has both a maximal element ε_0 and a minimal element ε_∞; these are given, respectively, by

(6) ε_0(t) = 0 for t ≤ 0 and ε_0(t) = 1 for t > 0; ε_∞(t) = 0 for t < ∞ and ε_∞(t) = 1 for t = ∞.
There is a natural topology on Δ⁺ that is induced by the modified Lévy metric d_L [39, 40]; that is,

(7) d_L(F, G) = inf{h : both [F, G; h] and [G, F; h] hold} for all F, G ∈ Δ⁺ and h ∈ (0, 1],

where [F, G; h] denotes the condition

(8) G(t) ≤ F(t + h) + h for t ∈ (0, 1/h).

Convergence with respect to this metric is equivalent to weak convergence of distribution functions; that is, (F_n) in Δ⁺ converges weakly to F (written F_n →^ω F) if and only if F_n(t) converges to F(t) at every point of continuity of the limit function F. Therefore, one has

(9) F_n →^ω F iff d_L(F_n, F) → 0, and F(x) > 1 − x iff d_L(F, ε_0) < x for every x > 0.

Moreover, the metric space (Δ⁺, d_L) is compact.
Definition 15. A binary operation ⋆ : [0,1] × [0,1] → [0,1] is said to be a continuous t-norm if the following conditions are satisfied:

(1) ⋆ is associative and commutative;
(2) ⋆ is continuous;
(3) a ⋆ 1 = a for all a ∈ [0,1];
(4) a ⋆ b ≤ c ⋆ d whenever a ≤ c and b ≤ d, for each a, b, c, d ∈ [0,1].

Definition 16. A binary operation ⋄ : [0,1] × [0,1] → [0,1] is said to be a continuous t-conorm if the following conditions are satisfied:

(1) ⋄ is associative and commutative;
(2) ⋄ is continuous;
(3) a ⋄ 0 = a for all a ∈ [0,1];
(4) a ⋄ b ≤ c ⋄ d whenever a ≤ c and b ≤ d, for each a, b, c, d ∈ [0,1].
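Two pairs of such operations appear later in this paper: a ⋆ b = min{a, b} with a ⋄ b = max{a, b} (Example 20) and a ⋆ b = ab with a ⋄ b = min{a + b, 1} (Example 29). The sketch below (our own check on a finite grid, so a sanity test rather than a proof) verifies the boundary and monotonicity axioms for all four operations:

```python
import itertools

grid = [i / 10 for i in range(11)]  # finite grid in [0, 1]

t_norms = {"min": min, "product": lambda a, b: a * b}
t_conorms = {"max": max, "bounded sum": lambda a, b: min(a + b, 1.0)}

for name, T in t_norms.items():
    assert all(abs(T(a, 1.0) - a) < 1e-12 for a in grid), name  # a * 1 = a
    assert all(T(a, b) <= T(c, d) + 1e-12                       # monotonicity
               for a, b, c, d in itertools.product(grid, repeat=4)
               if a <= c and b <= d), name
for name, S in t_conorms.items():
    assert all(abs(S(a, 0.0) - a) < 1e-12 for a in grid), name  # a ⋄ 0 = a
print("boundary and monotonicity axioms hold on the grid")
```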
Definition 17. Let X be a linear space of dimension greater than one, ⋆ a continuous t-norm, and ρ a mapping from X² into D⁺. If the following conditions are satisfied:

(1) ρ_{x,y} = ε_0 if x and y are linearly dependent;
(2) ρ_{x,y} = ρ_{y,x} for every x and y in X;
(3) ρ_{αx,y}(t) = ρ_{x,y}(t/|α|) for every t > 0, α ≠ 0, and x, y ∈ X;
(4) ρ_{x+y,z}(t) ≥ ρ_{x,z}(t) ⋆ ρ_{y,z}(t),

then ρ is called a random 2-norm on X and (X; ρ; ⋆) is called a random 2-normed space.
Definition 18. Let X be a linear space of dimension greater than one over a real field, ⋆ a continuous t-norm, and ρ a mapping from X^n into D⁺. If the following conditions are satisfied:

(1) ρ_{x_1,x_2,…,x_n} = ε_0 ⇔ x_1, x_2, …, x_n are linearly dependent;
(2) ρ_{x_1,x_2,…,x_n} is invariant under any permutation of x_1, x_2, …, x_n;
(3) ρ_{αx_1,x_2,…,x_n}(t) = ρ_{x_1,x_2,…,x_n}(t/|α|) for every t > 0, α ≠ 0;
(4) ρ_{x_1,x_2,…,x_n+x_n′}(t + s) ≥ ρ_{x_1,x_2,…,x_n}(t) ⋆ ρ_{x_1,x_2,…,x_n′}(s),

then ρ is called a random n-norm on X and (X; ρ; ⋆) is called a random n-normed space.
## 3. Generalized Random n-Normed Space
Throughout the paper, let I be an admissible ideal of ℕ. By generalizing Definition 18, we obtain a new notion of generalized random n-normed space as follows.

Definition 19. The five-tuple (X, ρ, ϱ, ⋆, ⋄) is said to be a generalized random n-normed linear space (GRnNLS, for short) if X is a linear space over the field of complex numbers ℂ, ⋆ is a continuous t-norm, ⋄ is a continuous t-conorm, and ρ, ϱ are two mappings on X^n × (0, ∞) into D⁺ × (0, ∞) satisfying the following conditions for every x = (x_1, x_2, …, x_n) ∈ X^n and for each s, t ∈ (0, ∞):

(1) ρ_{x_1,x_2,…,x_n} + ϱ_{x_1,x_2,…,x_n} ≤ ε_0;
(2) ρ_{x_1,x_2,…,x_n} ≥ ε_∞;
(3) ρ_{x_1,x_2,…,x_n} = ε_0 if and only if x_1, x_2, …, x_n are linearly dependent;
(4) ρ_{αx_1,x_2,…,x_n}(t) = ρ_{x_1,x_2,…,x_n}(t/|α|) for each α ∈ ℂ ∖ {0};
(5) ρ_{x_1,x_2,…,x_n′}(t) ⋆ ρ_{x_1,x_2,…,x_n}(s) ≤ ρ_{x_1,x_2,…,x_n′+x_n}(t + s);
(6) ρ_{x_1,x_2,…,x_n}(·) : (0, ∞) → [0,1] is continuous;
(7) ρ_{x_1,x_2,…,x_n}(t) is invariant under any permutation of (x_1, x_2, …, x_n);
(8) ϱ_{x_1,x_2,…,x_n}(t) ≥ ε_∞;
(9) ϱ_{x_1,x_2,…,x_n} = ε_∞ if and only if x_1, x_2, …, x_n are linearly dependent;
(10) ϱ_{αx_1,x_2,…,x_n}(t) = ϱ_{x_1,x_2,…,x_n}(t/|α|) for each α ∈ ℂ ∖ {0};
(11) ϱ_{x_1,x_2,…,x_n′}(t) ⋄ ϱ_{x_1,x_2,…,x_n}(s) ≥ ϱ_{x_1,x_2,…,x_n′+x_n}(t + s);
(12) ϱ_{x_1,x_2,…,x_n}(·) : (0, ∞) → [0,1] is continuous;
(13) ϱ_{x_1,x_2,…,x_n}(t) is invariant under any permutation of (x_1, x_2, …, x_n).

In this case, (ρ, ϱ) is called a generalized random n-norm on X and we denote it by (ρ, ϱ)_n.
Example 20. Let (X, ‖·, …, ·‖) be an n-normed linear space. Put a ⋆ b = min{a, b} and a ⋄ b = max{a, b} for all a, b ∈ [0,1], ρ_{x_1,x_2,…,x_n}(t) = t/(t + ‖x_1, x_2, …, x_n‖), and ϱ_{x_1,x_2,…,x_n}(t) = ‖x_1, x_2, …, x_n‖/(t + ‖x_1, x_2, …, x_n‖). Then (X, ρ, ϱ, ⋆, ⋄) is a GRnNLS.
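Before turning to the proof, the two mappings can be explored numerically. The sketch below (illustrative only; it reuses the Euclidean Gram-determinant n-norm from the earlier sketch as a stand-in for the abstract ‖·, …, ·‖) shows that ρ and ϱ sum to one pointwise, that ρ increases to 1 as t grows, and that ρ is identically 1 on linearly dependent tuples, in line with conditions (1) and (3) of Definition 19:

```python
import numpy as np

def n_norm(*vectors):
    """Euclidean n-norm via the Gram determinant (illustrative stand-in)."""
    V = np.array(vectors, dtype=float)
    return float(np.sqrt(max(np.linalg.det(V @ V.T), 0.0)))

def rho(t, *xs):     # rho_{x_1,...,x_n}(t) = t / (t + ||x_1,...,x_n||)
    return t / (t + n_norm(*xs))

def varrho(t, *xs):  # varrho_{x_1,...,x_n}(t) = ||...|| / (t + ||...||)
    return n_norm(*xs) / (t + n_norm(*xs))

x1, x2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for t in (0.1, 1.0, 10.0, 1000.0):
    r, v = rho(t, x1, x2), varrho(t, x1, x2)
    print(f"t={t}: rho={r:.4f}, varrho={v:.4f}, sum={r + v:.4f}")  # sum = 1
print(rho(0.5, x1, 2 * x1))  # 1.0: dependent tuple, so rho behaves like eps_0
```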
Proof. For all t, s ∈ (0, ∞), we have the following.

(1) Evidently, ρ_{x_1,x_2,…,x_n}(t) + ϱ_{x_1,x_2,…,x_n}(t) ≤ 1.

(2) Visibly, ρ_{x_1,x_2,…,x_n}(t) ≥ 0.

(3) And

(10) ρ_{x_1,x_2,…,x_n}(t) = 1 ⟺ t/(t + ‖x_1, x_2, …, x_n‖) = 1 ⟺ t = t + ‖x_1, x_2, …, x_n‖ ⟺ ‖x_1, x_2, …, x_n‖ = 0 ⟺ x_1, x_2, …, x_n are linearly dependent.

(4) Since ‖x_1, x_2, …, x_n‖ is invariant under any permutation of (x_1, x_2, …, x_n), ρ_{x_1,x_2,…,x_n}(t) is invariant under any permutation of (x_1, x_2, …, x_n).

(5) Consider

(11) ρ_{x_1,x_2,…,x_n}(t/|α|) = (t/|α|)/(t/|α| + ‖x_1, x_2, …, x_n‖) = t/(t + |α|‖x_1, x_2, …, x_n‖) = t/(t + ‖αx_1, x_2, …, x_n‖) = ρ_{αx_1,x_2,…,x_n}(t).

(6) Suppose, without loss of generality, that

(12) ρ_{x_1,x_2,…,x_n′}(t) ≤ ρ_{x_1,x_2,…,x_n}(s) ⟹ t/(t + ‖x_1, x_2, …, x_n′‖) ≤ s/(s + ‖x_1, x_2, …, x_n‖) ⟹ t(s + ‖x_1, x_2, …, x_n‖) ≤ s(t + ‖x_1, x_2, …, x_n′‖) ⟹ t‖x_1, x_2, …, x_n‖ ≤ s‖x_1, x_2, …, x_n′‖ ⟹ ‖x_1, x_2, …, x_n‖ ≤ (s/t)‖x_1, x_2, …, x_n′‖.

As a result,

(13) ‖x_1, x_2, …, x_n‖ + ‖x_1, x_2, …, x_n′‖ ≤ (s/t)‖x_1, x_2, …, x_n′‖ + ‖x_1, x_2, …, x_n′‖ = ((s + t)/t)‖x_1, x_2, …, x_n′‖.

However,

(14) ‖x_1, x_2, …, x_n + x_n′‖ ≤ ‖x_1, x_2, …, x_n‖ + ‖x_1, x_2, …, x_n′‖ ≤ ((s + t)/t)‖x_1, x_2, …, x_n′‖ ⟹ ‖x_1, x_2, …, x_n + x_n′‖/(s + t) ≤ ‖x_1, x_2, …, x_n′‖/t ⟹ 1 + ‖x_1, x_2, …, x_n + x_n′‖/(s + t) ≤ 1 + ‖x_1, x_2, …, x_n′‖/t ⟹ (s + t + ‖x_1, x_2, …, x_n + x_n′‖)/(s + t) ≤ (t + ‖x_1, x_2, …, x_n′‖)/t ⟹ (s + t)/(s + t + ‖x_1, x_2, …, x_n + x_n′‖) ≥ t/(t + ‖x_1, x_2, …, x_n′‖) ⟹ ρ_{x_1,x_2,…,x_n+x_n′}(s + t) ≥ min{ρ_{x_1,x_2,…,x_n}(s), ρ_{x_1,x_2,…,x_n′}(t)}.

(7) Evidently, ρ_{x_1,x_2,…,x_n}(·) : (0, ∞) → [0,1] is continuous.

(8) ϱ_{x_1,x_2,…,x_n}(t) ≥ 0.

(9) And

(15) ϱ_{x_1,x_2,…,x_n}(t) = 0 ⟺ ‖x_1, x_2, …, x_n‖/(t + ‖x_1, x_2, …, x_n‖) = 0 ⟺ ‖x_1, x_2, …, x_n‖ = 0 ⟺ x_1, x_2, …, x_n are linearly dependent.

(10) As ‖x_1, x_2, …, x_n‖ is invariant under any permutation of (x_1, x_2, …, x_n), ϱ_{x_1,x_2,…,x_n}(t) is invariant under any permutation of (x_1, x_2, …, x_n).

(11) Consider

(16) ϱ_{αx_1,x_2,…,x_n}(t) = ‖αx_1, x_2, …, x_n‖/(t + ‖αx_1, x_2, …, x_n‖) = |α|‖x_1, x_2, …, x_n‖/(t + |α|‖x_1, x_2, …, x_n‖) = ‖x_1, x_2, …, x_n‖/(t/|α| + ‖x_1, x_2, …, x_n‖) = ϱ_{x_1,x_2,…,x_n}(t/|α|).

(12) Presume, without loss of generality, that

(17) ϱ_{x_1,x_2,…,x_n}(s) ≤ ϱ_{x_1,x_2,…,x_n′}(t) ⟹ ‖x_1, x_2, …, x_n‖/(s + ‖x_1, x_2, …, x_n‖) ≤ ‖x_1, x_2, …, x_n′‖/(t + ‖x_1, x_2, …, x_n′‖) ⟹ ‖x_1, x_2, …, x_n‖(t + ‖x_1, x_2, …, x_n′‖) ≤ ‖x_1, x_2, …, x_n′‖(s + ‖x_1, x_2, …, x_n‖) ⟹ t‖x_1, x_2, …, x_n‖ ≤ s‖x_1, x_2, …, x_n′‖.

Now,

(18) ‖x_1, x_2, …, x_n + x_n′‖/(s + t + ‖x_1, x_2, …, x_n + x_n′‖) − ‖x_1, x_2, …, x_n′‖/(t + ‖x_1, x_2, …, x_n′‖) ≤ (‖x_1, x_2, …, x_n‖ + ‖x_1, x_2, …, x_n′‖)/(s + t + ‖x_1, x_2, …, x_n + x_n′‖) − ‖x_1, x_2, …, x_n′‖/(t + ‖x_1, x_2, …, x_n′‖) = (t‖x_1, x_2, …, x_n‖ − s‖x_1, x_2, …, x_n′‖)/((s + t + ‖x_1, x_2, …, x_n + x_n′‖)(t + ‖x_1, x_2, …, x_n′‖)).

By (17),

(19) ‖x_1, x_2, …, x_n + x_n′‖/(s + t + ‖x_1, x_2, …, x_n + x_n′‖) ≤ ‖x_1, x_2, …, x_n′‖/(t + ‖x_1, x_2, …, x_n′‖).

In the same way,

(20) ‖x_1, x_2, …, x_n + x_n′‖/(s + t + ‖x_1, x_2, …, x_n + x_n′‖) ≤ ‖x_1, x_2, …, x_n‖/(s + ‖x_1, x_2, …, x_n‖) ⟹ ϱ_{x_1,x_2,…,x_n+x_n′}(s + t) ≤ max{ϱ_{x_1,x_2,…,x_n}(s), ϱ_{x_1,x_2,…,x_n′}(t)}.

(13) Clearly, ϱ_{x_1,x_2,…,x_n}(·) : (0, ∞) → [0,1] is continuous.
Remark 21. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS. Since ⋆ is a continuous t-norm and ⋄ is a continuous t-conorm, the system of (r, t)-neighborhoods of θ (the null vector in X) with respect to t,

(21) {B(θ, r, t) : t > 0, 0 < r < 1},

where

(22) B(θ, r, t) = {y ∈ X : ρ_{y,x_1,x_2,…,x_{n−1}}(t) > 1 − r, ϱ_{y,x_1,x_2,…,x_{n−1}}(t) < r, for t > 0},

defines a first countable Hausdorff topology on X, called the (ρ, ϱ)_n-topology. Hence, the (ρ, ϱ)_n-topology can be completely specified by means of (ρ, ϱ)_n-convergence of sequences.
Definition 22. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS, and let r ∈ (0,1) and x ∈ X. The set

(23) B(x, r, t) = {y ∈ X : ρ_{y−x,x_1,x_2,…,x_{n−1}}(t) > 1 − r, ϱ_{y−x,x_1,x_2,…,x_{n−1}}(t) < r, for t > 0}

is called the open ball with center x and radius r with respect to t.
Definition 23. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS. A sequence x = (x_k) in X is (ρ, ϱ)_n-convergent to ℓ ∈ X with respect to the generalized random n-norm (ρ, ϱ)_n if, for every r ∈ (0,1) and every t > 0, there exists k_0 such that

(24) ρ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≥ 1 − r, ϱ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≤ r for all k ≥ k_0.

In this case, one writes (ρ, ϱ)_n-lim x = ℓ.
Theorem 24. Let (X, ‖·, …, ·‖) be an n-normed linear space. Put a ⋆ b = min{a, b} and a ⋄ b = max{a, b} for all a, b ∈ [0,1], ρ_{x_1,x_2,…,x_n}(t) = t/(t + ‖x_1, x_2, …, x_n‖), and ϱ_{x_1,x_2,…,x_n}(t) = ‖x_1, x_2, …, x_n‖/(t + ‖x_1, x_2, …, x_n‖). Then, for every sequence x = (x_k) and nonzero x_1, x_2, …, x_{n−1} ∈ X, one has

(25) lim_{k→∞} ‖x_1, x_2, …, x_{n−1}, x_k − ℓ‖ = 0 ⟹ (ρ, ϱ)_n-lim x_k = ℓ.
Proof. Assume that lim_{k→∞} ‖x_1, x_2, …, x_{n−1}, x_k − ℓ‖ = 0. Then, for every ε > 0 and for every x_1, x_2, …, x_{n−1} ∈ X, there exists a positive integer k_0 such that

(26) ‖x_1, x_2, …, x_{n−1}, x_k − ℓ‖ < ε for each k ≥ k_0,

and, therefore, for any given t > 0,

(27) (t + ‖x_1, x_2, …, x_{n−1}, x_k − ℓ‖)/t < (t + ε)/t,

which is the same as

(28) t/(t + ‖x_1, x_2, …, x_{n−1}, x_k − ℓ‖) > t/(t + ε) = 1 − ε/(t + ε).

By letting r = ε/(t + ε) ∈ (0,1), we have

(29) ρ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≥ 1 − r for all k ≥ k_0.

And since ρ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) = 1 − ϱ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t), we have

(30) ϱ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≤ r for all k ≥ k_0.

This means (ρ, ϱ)_n-lim x_k = ℓ.
## 4. I_L(ρ,ϱ)_n-Cauchy and Convergence in GRnNLS
Remark 25. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS. Since ⋆ is a continuous t-norm and ⋄ is a continuous t-conorm, the system of (r, t)-neighborhoods of θ with respect to t,

(31) {B^L(θ, r, t) : t > 0, 0 < r < 1},

where

(32) B^L(θ, r, t) = {y ∈ X : (1/h_j) Σ_{k∈Λ_j} ρ_{y_k,x_1,x_2,…,x_{n−1}}(t) > 1 − r, (1/h_j) Σ_{k∈Λ_j} ϱ_{y_k,x_1,x_2,…,x_{n−1}}(t) < r, for t > 0},

determines a first countable Hausdorff topology on X, called the (ρ, ϱ)_n^L-topology. Thus, the (ρ, ϱ)_n^L-topology can be completely specified by means of (ρ, ϱ)_n^L-convergence of sequences.
Definition 26. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS, and let r ∈ (0,1) and x ∈ X. The set

(33) B^L(x, r, t) = {y ∈ X : (1/h_j) Σ_{k∈Λ_j} ρ_{y_k−x,x_1,x_2,…,x_{n−1}}(t) > 1 − r, (1/h_j) Σ_{k∈Λ_j} ϱ_{y_k−x,x_1,x_2,…,x_{n−1}}(t) < r, for t > 0}

is called the open ball with center x and radius r with respect to t.
Definition 27. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS. A sequence x = (x_k) in X is L-convergent to ℓ ∈ X with respect to the generalized random n-norm (ρ, ϱ)_n if, for every ε ∈ (0,1) and every t > 0, there exists j_0 such that

(34) (1/h_j) Σ_{k∈Λ_j} ρ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≥ 1 − ε, (1/h_j) Σ_{k∈Λ_j} ϱ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≤ ε for all j ≥ j_0.

In this case, one writes (ρ, ϱ)_n^L-lim x = ℓ.
Definition 28. Let I ⊂ 2^ℕ and let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS. A sequence x = (x_k) of elements in X is said to be I_L-convergent to ℓ ∈ X with respect to the generalized random n-norm (ρ, ϱ)_n if, for every ε ∈ (0,1) and t > 0, the set

(35) {j ∈ ℕ : (1/h_j) Σ_{k∈Λ_j} ρ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≤ 1 − ε or (1/h_j) Σ_{k∈Λ_j} ϱ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≥ ε} ∈ I.

Then, one writes I_L(ρ, ϱ)_n-lim x = ℓ.
Example 29. Let (ℂ, ‖·, …, ·‖) be an n-normed linear space; take a ⋆ b = ab and a ⋄ b = min{a + b, 1} for all a, b ∈ [0,1]. For all x ∈ ℂ and every t > 0, consider

(36) ρ_{x_1,x_2,…,x_n}(t) = t/(t + ‖x_1, x_2, …, x_n‖), ϱ_{x_1,x_2,…,x_n}(t) = ‖x_1, x_2, …, x_n‖/(t + ‖x_1, x_2, …, x_n‖).

Then (ℂ, ρ, ϱ, ⋆, ⋄) is a GRnNLS. If we take I = I_δ, define a sequence x = (x_k) as follows:

(37) x_k = 1 if k = i⁸ for some i ∈ ℕ, and x_k = 0 otherwise.

Hence, for every ε ∈ (0,1) and t > 0, we have

(38) δ({j ∈ ℕ : (1/h_j) Σ_{k∈Λ_j} ρ_{x_k,x_1,x_2,…,x_{n−1}}(t) ≤ 1 − ε or (1/h_j) Σ_{k∈Λ_j} ϱ_{x_k,x_1,x_2,…,x_{n−1}}(t) ≥ ε}) = 0.

So I_L(ρ, ϱ)_n-lim x = 0.
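A numerical illustration of this example (our own sketch; Example 29 does not fix a lacunary sequence, so we take Θ = (2^j), and we model the n-norm of a scalar entry simply by its absolute value) shows the lacunary averages of ρ approaching 1, which is the mechanism behind I_L(ρ, ϱ)_n-lim x = 0:

```python
# x_k = 1 if k = i^8 for some i, else 0 (Example 29).
def is_eighth_power(k):
    i = round(k ** (1 / 8))
    return any(m >= 0 and m ** 8 == k for m in (i - 1, i, i + 1))

x = lambda k: 1.0 if is_eighth_power(k) else 0.0
rho = lambda t, v: t / (t + abs(v))   # simplified stand-in for rho_{x,...}(t)

t = 1.0
for j in range(2, 16):
    lo, hi = 2 ** (j - 1), 2 ** j     # Lambda_j = (2^(j-1), 2^j]
    h = hi - lo
    avg = sum(rho(t, x(k)) for k in range(lo + 1, hi + 1)) / h
    print(j, round(avg, 6))
# Each block contains at most one eighth power (256, 6561, ...), so the
# averages tend to 1 and the "bad set" of indices j has density zero.
```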
Definition 30. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS. A sequence x = (x_k) in X is said to be a Cauchy sequence with respect to the generalized random n-norm (ρ, ϱ)_n^L if, for every t > 0 and ε ∈ (0,1), there exists j_0 ∈ ℕ satisfying

(39) (1/h_j) Σ_{k∈Λ_j} ρ_{x_k−x_m,x_1,x_2,…,x_{n−1}}(t) > 1 − ε, (1/h_j) Σ_{k∈Λ_j} ϱ_{x_k−x_m,x_1,x_2,…,x_{n−1}}(t) < ε for all j, m ≥ j_0.
Definition 31. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS. A sequence x = (x_k) in X is said to be an I_L-Cauchy sequence with respect to the generalized random n-norm (ρ, ϱ)_n^L if, for every t > 0 and ε ∈ (0,1), there exists j_0 ∈ ℕ satisfying

(40) {j ∈ ℕ : (1/h_j) Σ_{k∈Λ_j} ρ_{x_k−x_m,x_1,x_2,…,x_{n−1}}(t) > 1 − ε, (1/h_j) Σ_{k∈Λ_j} ϱ_{x_k−x_m,x_1,x_2,…,x_{n−1}}(t) < ε} ∈ F(I).
Theorem 32. Let I ⊂ 2^ℕ, let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS, and let x = (x_k) be a sequence in X. Then, for every ε > 0 and t > 0, the following statements are equivalent:

(1) I_L(ρ, ϱ)_n-lim x = ℓ;
(2) {j ∈ ℕ : (1/h_j) Σ_{k∈Λ_j} ρ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≤ 1 − ε} ∈ I and {j ∈ ℕ : (1/h_j) Σ_{k∈Λ_j} ϱ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≥ ε} ∈ I;
(3) {j ∈ ℕ : (1/h_j) Σ_{k∈Λ_j} ρ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≥ 1 − ε and (1/h_j) Σ_{k∈Λ_j} ϱ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≤ ε} ∈ F(I);
(4) {j ∈ ℕ : (1/h_j) Σ_{k∈Λ_j} ρ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≥ 1 − ε} ∈ F(I) and {j ∈ ℕ : (1/h_j) Σ_{k∈Λ_j} ϱ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≤ ε} ∈ F(I);
(5) I_L-lim_k ρ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) = 1 and I_L-lim_k ϱ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) = 0.

The proof is easy, so it is omitted.
Theorem 33. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS and let x = (x_k) be a sequence in X. If (ρ, ϱ)_n^L-lim x exists, then it is unique.
Proof. Suppose that (ρ, ϱ)_n^L-lim x = ℓ_1 and (ρ, ϱ)_n^L-lim x = ℓ_2 with ℓ_1 ≠ ℓ_2. Given ε ∈ (0,1), choose λ ∈ (0,1) such that (1 − λ) ⋆ (1 − λ) > 1 − ε and λ ⋄ λ < ε. Then, for each t > 0, there exists j_1 ∈ ℕ such that

(41) (1/h_j) Σ_{k∈Λ_j} ρ_{x_k−ℓ_1,x_1,x_2,…,x_{n−1}}(t/2) > 1 − λ, (1/h_j) Σ_{k∈Λ_j} ϱ_{x_k−ℓ_1,x_1,x_2,…,x_{n−1}}(t/2) < λ for all j ≥ j_1.

Also, there exists j_2 ∈ ℕ such that

(42) (1/h_j) Σ_{k∈Λ_j} ρ_{x_k−ℓ_2,x_1,x_2,…,x_{n−1}}(t/2) > 1 − λ, (1/h_j) Σ_{k∈Λ_j} ϱ_{x_k−ℓ_2,x_1,x_2,…,x_{n−1}}(t/2) < λ for all j ≥ j_2.

Now, consider j_0 = max{j_1, j_2}. Then, for j ≥ j_0, we find an s ∈ Λ_j such that

(43) ρ_{x_s−ℓ_1,x_1,x_2,…,x_{n−1}}(t/2) > (1/h_j) Σ_{k∈Λ_j} ρ_{x_k−ℓ_1,x_1,x_2,…,x_{n−1}}(t/2) ≥ 1 − λ, ρ_{x_s−ℓ_2,x_1,x_2,…,x_{n−1}}(t/2) > (1/h_j) Σ_{k∈Λ_j} ρ_{x_k−ℓ_2,x_1,x_2,…,x_{n−1}}(t/2) ≥ 1 − λ.

Then, we get

(44) ρ_{ℓ_1−ℓ_2,x_1,x_2,…,x_{n−1}}(t) ≥ ρ_{x_s−ℓ_1,x_1,x_2,…,x_{n−1}}(t/2) ⋆ ρ_{x_s−ℓ_2,x_1,x_2,…,x_{n−1}}(t/2) > (1 − λ) ⋆ (1 − λ) > 1 − ε.

Since ε > 0 is arbitrary, we have ρ_{ℓ_1−ℓ_2,x_1,x_2,…,x_{n−1}}(t) = 1 for all t > 0. By using a similar technique, it can be proved that ϱ_{ℓ_1−ℓ_2,x_1,x_2,…,x_{n−1}}(t) = 0 for all t > 0; hence, ℓ_1 = ℓ_2.
Theorem 34. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS and let x = (x_k) be a sequence in X. Then, one has

(45) (ρ, ϱ)_n^L-lim x = ℓ ⟹ I_L(ρ, ϱ)_n-lim x = ℓ.
Proof. Let (ρ, ϱ)_n^L-lim x = ℓ. Then, for all t > 0 and given ε ∈ (0,1), there exists j_0 ∈ ℕ such that

(46) (1/h_j) Σ_{k∈Λ_j} ρ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) > 1 − ε, (1/h_j) Σ_{k∈Λ_j} ϱ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) < ε for all j ≥ j_0.

Since I is an admissible ideal and

(47) G = {j ∈ ℕ : (1/h_j) Σ_{k∈Λ_j} ρ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≤ 1 − ε or (1/h_j) Σ_{k∈Λ_j} ϱ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≥ ε} ⊆ {1, 2, 3, …, j_0 − 1},

we get G ∈ I. So I_L(ρ, ϱ)_n-lim x = ℓ.
Theorem 35. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS and let x = (x_k) be a sequence in X. If I_L(ρ, ϱ)_n-lim x exists, then it is unique. The proof follows by using Theorems 33 and 34.
Theorem 36. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS and let x = (x_k) be a sequence in X. If (ρ, ϱ)_n^L-lim x = ℓ, then there exists a subsequence (x_{m_k}) of x = (x_k) such that (ρ, ϱ)_n-lim x_{m_k} = ℓ.
Proof. Let (ρ, ϱ)_n^L-lim x = ℓ. Then, for all t > 0 and given ε ∈ (0,1), there exists j_0 ∈ ℕ such that

(48) (1/h_j) Σ_{k∈Λ_j} ρ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) > 1 − ε, (1/h_j) Σ_{k∈Λ_j} ϱ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) < ε for all j ≥ j_0.

Clearly, for each j ≥ j_0, we can take an m_k ∈ Λ_j such that

(49) ρ_{x_{m_k}−ℓ,x_1,x_2,…,x_{n−1}}(t) > (1/h_j) Σ_{k∈Λ_j} ρ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) > 1 − ε, ϱ_{x_{m_k}−ℓ,x_1,x_2,…,x_{n−1}}(t) < (1/h_j) Σ_{k∈Λ_j} ϱ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) < ε.

It follows that (ρ, ϱ)_n-lim x_{m_k} = ℓ.
We state the following two results without proofs, since they can be easily verified.

Theorem 37. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS. If a sequence x = (x_k) in X is a Cauchy sequence with respect to the generalized random n-norm (ρ, ϱ)_n^L, then it is an I_L-Cauchy sequence with respect to the same norm.

Theorem 38. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS. If a sequence x = (x_k) in X is a Cauchy sequence with respect to the generalized random n-norm (ρ, ϱ)_n^L, then there is a subsequence of x = (x_k) which is an ordinary Cauchy sequence with respect to the norm (ρ, ϱ)_n.
## 5. I_L-Limit Point, I_L-Cluster Point, and I_L-Cauchy Sequence in GRnNLS
Definition 39. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS and let x = (x_k) be a sequence in X. Then one has the following.

(1) An element ℓ ∈ X is said to be an I_L-limit point of x = (x_k) if there is a set

(50) 𝕄 = {m_1 < m_2 < ⋯ < m_k < ⋯} ⊂ ℕ with 𝕄′ = {j ∈ ℕ : m_k ∈ Λ_j} ∈ F(I) such that (ρ, ϱ)_n^L-lim x_{m_k} = ℓ.

(2) An element ℓ ∈ X is said to be an I_L-cluster point of x = (x_k) if, for every t > 0 and ε ∈ (0,1), one has

(51) {j ∈ ℕ : (1/h_j) Σ_{k∈Λ_j} ρ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) > 1 − ε, (1/h_j) Σ_{k∈Λ_j} ϱ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) < ε} ∈ F(I).

By ⋀(ρ,ϱ)_n^L(x) we denote the set of all I_L-limit points and by ⋁(ρ,ϱ)_n^L(x) the set of all I_L-cluster points of x in X, respectively.
Definition 40. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS. A sequence x = (x_k) in X is said to be an I_L∗-Cauchy sequence with respect to the generalized random n-norm (ρ, ϱ)_n^L if

(i) there exists a set 𝕄 = {m_1 < m_2 < ⋯ < m_k < ⋯} ⊂ ℕ such that 𝕄′ = {j ∈ ℕ : m_k ∈ Λ_j} ∈ F(I);
(ii) the subsequence (x_{m_k}) of x = (x_k) is a Cauchy sequence with respect to the generalized random n-norm (ρ, ϱ)_n^L.
Theorem 41. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS. For each sequence x = (x_k) in X, one has

(52) ⋀(ρ,ϱ)_n^L(x) ⊂ ⋁(ρ,ϱ)_n^L(x).
Proof. Let ℓ ∈ ⋀(ρ,ϱ)_n^L(x); then there exists a set 𝕄 ⊂ ℕ with 𝕄′ ∈ F(I), where 𝕄 and 𝕄′ are as in Definition 39, satisfying (ρ, ϱ)_n^L-lim x_{m_k} = ℓ. Thus, for every t > 0 and ε ∈ (0,1), there exists j_0 ∈ ℕ such that

(53) (1/h_j) Σ_{k∈Λ_j} ρ_{x_{m_k}−ℓ,x_1,x_2,…,x_{n−1}}(t) > 1 − ε, (1/h_j) Σ_{k∈Λ_j} ϱ_{x_{m_k}−ℓ,x_1,x_2,…,x_{n−1}}(t) < ε for all j ≥ j_0.

Thus, we have

(54) G = {j ∈ ℕ : (1/h_j) Σ_{k∈Λ_j} ρ_{x_{m_k}−ℓ,x_1,x_2,…,x_{n−1}}(t) > 1 − ε, (1/h_j) Σ_{k∈Λ_j} ϱ_{x_{m_k}−ℓ,x_1,x_2,…,x_{n−1}}(t) < ε} ⊇ 𝕄′ ∖ {m_1, m_2, …, m_{j_0}}.

Since I is an admissible ideal, we have

(55) 𝕄′ ∖ {m_1, m_2, …, m_{j_0}} ∈ F(I) and so G ∈ F(I). Hence ℓ ∈ ⋁(ρ,ϱ)_n^L(x).
Theorem 42. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS. For each sequence x = (x_k) in X, the set ⋀(ρ,ϱ)_n^L(x) is a closed set in X with respect to the usual topology induced by the generalized random n-norm (ρ, ϱ)_n^L.
Proof. Let y belong to the closure of ⋀(ρ,ϱ)_n^L(x). Take t > 0 and ε ∈ (0,1). Then there exists ℓ_0 ∈ ⋀(ρ,ϱ)_n^L(x) ∩ B^L(y, ε, t). Choose δ > 0 such that B^L(ℓ_0, δ, t) ⊂ B^L(y, ε, t). We have

(56) G = {j ∈ ℕ : (1/h_j) Σ_{k∈Λ_j} ρ_{x_k−y,x_1,x_2,…,x_{n−1}}(t) > 1 − ε, (1/h_j) Σ_{k∈Λ_j} ϱ_{x_k−y,x_1,x_2,…,x_{n−1}}(t) < ε} ⊇ {j ∈ ℕ : (1/h_j) Σ_{k∈Λ_j} ρ_{x_k−ℓ_0,x_1,x_2,…,x_{n−1}}(t) > 1 − δ, (1/h_j) Σ_{k∈Λ_j} ϱ_{x_k−ℓ_0,x_1,x_2,…,x_{n−1}}(t) < δ} = H.

Thus, H ∈ F(I) and so G ∈ F(I). Hence, y ∈ ⋀(ρ,ϱ)_n^L(x).
Theorem 43. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS and let x = (x_k) be a sequence in X. Then, the following statements are equivalent:

(1) ℓ is an I_L-limit point of x;
(2) there exist two sequences y and z in X such that

(57) x = y + z, (ρ, ϱ)_n^L-lim y = ℓ, and {j ∈ ℕ : k ∈ Λ_j, z_k ≠ θ} ∈ I,

where θ is the zero element in X.
Proof. Let (1) hold; then there exist sets 𝕄 and 𝕄′ as in Definition 39 such that

(58) 𝕄′ ∉ I, (ρ, ϱ)_n^L-lim x_{m_k} = ℓ.

Define the sequences y and z as follows:

(59) y_k = x_k if k ∈ Λ_j with j ∈ 𝕄′, and y_k = ℓ otherwise; z_k = θ if k ∈ Λ_j with j ∈ 𝕄′, and z_k = x_k − ℓ otherwise.

Consider the case k ∈ Λ_j such that j ∈ ℕ ∖ 𝕄′. Then, for each ε ∈ (0,1) and t > 0, we get

(60) ρ_{y_k−ℓ,x_1,x_2,…,x_{n−1}}(t) = 1 > 1 − ε, ϱ_{y_k−ℓ,x_1,x_2,…,x_{n−1}}(t) = 0 < ε.

Thus, in this case,

(61) (1/h_j) Σ_{k∈Λ_j} ρ_{y_k−ℓ,x_1,x_2,…,x_{n−1}}(t) = 1 > 1 − ε, (1/h_j) Σ_{k∈Λ_j} ϱ_{y_k−ℓ,x_1,x_2,…,x_{n−1}}(t) = 0 < ε.

Therefore, (ρ, ϱ)_n^L-lim y = ℓ. Now, we have

(62) {j ∈ ℕ : k ∈ Λ_j, z_k ≠ θ} ⊂ ℕ ∖ 𝕄′ and so {j ∈ ℕ : k ∈ Λ_j, z_k ≠ θ} ∈ I.

Now, suppose that (2) holds. Let 𝕄′ = {j ∈ ℕ : k ∈ Λ_j, z_k = θ}. Then, obviously, 𝕄′ ∈ F(I) and so it is an infinite set. Form the set

(63) 𝕄 = {m_1 < m_2 < ⋯ < m_k < ⋯} ⊂ ℕ such that m_k ∈ Λ_j and z_{m_k} = θ.

Since x_{m_k} = y_{m_k} and (ρ, ϱ)_n^L-lim y = ℓ, we find that (ρ, ϱ)_n^L-lim x_{m_k} = ℓ. This completes the proof.
Theorem 44. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS and let x = (x_k) be a sequence in X. Let I be a nontrivial ideal. If there is an I_L(ρ, ϱ)_n-convergent sequence y = (y_k) in X such that {k ∈ ℕ : y_k ≠ x_k} ∈ I, then x is also I_L(ρ, ϱ)_n-convergent.
Proof. Suppose that {k ∈ ℕ : y_k ≠ x_k} ∈ I and I_L(ρ, ϱ)_n-lim y = ℓ. Then, for every ε ∈ (0,1) and t > 0, the set

(64) {j ∈ ℕ : (1/h_j) Σ_{k∈Λ_j} ρ_{y_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≤ 1 − ε or (1/h_j) Σ_{k∈Λ_j} ϱ_{y_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≥ ε} ∈ I.

For every 0 < ε < 1 and t > 0, we have

(65) {j ∈ ℕ : (1/h_j) Σ_{k∈Λ_j} ρ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≤ 1 − ε or (1/h_j) Σ_{k∈Λ_j} ϱ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≥ ε} ⊆ {k ∈ ℕ : y_k ≠ x_k} ∪ {j ∈ ℕ : (1/h_j) Σ_{k∈Λ_j} ρ_{y_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≤ 1 − ε or (1/h_j) Σ_{k∈Λ_j} ϱ_{y_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≥ ε}.

As both of the sets on the right-hand side are in I, we have

(66) {j ∈ ℕ : (1/h_j) Σ_{k∈Λ_j} ρ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≤ 1 − ε or (1/h_j) Σ_{k∈Λ_j} ϱ_{x_k−ℓ,x_1,x_2,…,x_{n−1}}(t) ≥ ε} ∈ I,

and the proof of the theorem follows.
The proof of the following result follows easily from the definitions.

Theorem 45. Let (X, ρ, ϱ, ⋆, ⋄) be a GRnNLS. If a sequence x = (x_k) in X is an I_L∗-Cauchy sequence with respect to the generalized random n-norm (ρ, ϱ)_n^L, then it is an I_L-Cauchy sequence as well.
---
*Source: 101782-2014-04-24.xml* | 2014 |
# Diagnostic Phase of Calcium Scoring Scan Applied as the Center of Acquisition Window of Coronary Computed Tomography Angiography Improves Image Quality in Minimal Acquisition Window Scan (Target CTA Mode) Using the Second Generation 320-Row CT
**Authors:** Eriko Maeda; Kodai Yamamoto; Shigeaki Kanno; Kenji Ino; Nobuo Tomizawa; Masaaki Akahane; Rumiko Torigoe; Kuni Ohtomo
**Journal:** The Scientific World Journal
(2016)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2016/1017851
---
## Abstract
Objective. To compare the image quality of coronary computed tomography angiography (CCTA) acquired under two conditions: 75% fixed as the acquisition window center (Group 75%) and the diagnostic phase of the calcium scoring scan (CS) as the center (Group CS). Methods. 320-row cardiac CT with a minimal acquisition window (scanned using “Target CTA” mode) was performed on 81 patients. In Group 75% (n = 40), CS was obtained and reconstructed at 75% and the center of the CCTA acquisition window was set at 75%. In Group CS (n = 41), CS was obtained at 75% and the diagnostic phase showing minimal artifacts was applied as the center of the CCTA acquisition window. Image quality was evaluated using a four-point scale (4 = excellent) and the mean scores were compared between groups. Results. The CCTA scan diagnostic phase occurred significantly earlier in Group CS (75.7 ± 3.2% vs. 73.6 ± 4.5% for Groups 75% and CS, resp., p = 0.013). The mean Group CS image quality score (3.58 ± 0.63) was also higher than that for Group 75% (3.19 ± 0.66, p < 0.0001). Conclusions. The image quality of CCTA in Target CTA mode was significantly better when the center of the acquisition window was adjusted using CS.
---
## Body
## 1. Introduction
Adult coronary computed tomography angiography (CCTA) usually begins with a noncontrast electrocardiogram-gated chest CT called a “calcium scoring (CS) scan” performed after scout scans. CS is used to determine the range of the CCTA scan and to calculate an Agatston score, the counterpart to the calcium score which is obtained using electron beam CT [1]. Because evaluation of the coronary lumen during CCTA is hampered by dense calcification of the coronary artery wall, the Agatston score can be used to select cases with diffuse coronary calcifications, who should not receive further scans due to the likelihood of limited benefit and the risks associated with contrast material and additional radiation exposure [2–6]. Second generation 320-row CT scanners with a rotation speed of 275 ms can scan the whole heart in one rotation, using a minimal acquisition window (“Target CTA”; Toshiba, Tochigi, Japan). This scan mode can be applied for evaluating cases with a heart rate lower than approximately 75 beats per minute (bpm). With Target CTA scans, the center of the acquisition window is set to any integral percentage, and X-ray exposure is limited to only the minimum duration needed to reconstruct the images. Although the acquisition window is set using only one integer value, such as 75%, the scan has a short reconstruction window and the diagnostic phase (i.e., the phase showing minimal artifacts) can be searched for within the acquisition window. As an example, use of a Target CTA scan of 75% in a patient with an RR interval of 1000 ms results in an acquisition window of 689–811 ms, with the center of the acquisition window at 750 ms and the width of the reconstruction window at 122 ms (note: the acquisition window, or the exposure duration, always exceeds the reconstruction window). These phases are searched using “PhaseNavi” cardiac-phase search software (Toshiba, Tochigi, Japan), which automatically searches for the phase that produces the lowest average SD value for all voxels in the volume. However, the results of automated phase searching do not always correspond with the diagnostic phase. Also, the most static phase needs to be visually searched using the same software if the coronary arteries contain motion artifacts at the point identified by the automated phase search. Compared with other methods, Target CTA is reported to produce low-dose scans together with noninferiority in image quality [7, 8]. The value of 75% is widely used as the center of the Target CTA acquisition window, although this value is empirical [9–13]. CS has been scanned using a 75% Target CTA mode for patients with a heart rate (HR) ≤ 75 bpm, and at 40% for those with an HR > 75 bpm. During the CS Target CTA scan, the reconstruction phase was fixed to 75%, and it was not possible for PhaseNavi software to adjust the reconstruction phase. However, a recent software upgrade (Aquilion ONE ViSION edition version 6.0; Toshiba, Tochigi, Japan) allows for the adjustment of the CS scan reconstruction window. We hypothesized that the CS diagnostic phase correlates with that of CCTA and that the image quality of CCTA would improve with adjustment of the center of the CCTA acquisition window using the CS diagnostic phase as compared to using a fixed percentage value. Therefore, the aim of this study was to determine the potential correlation between CS and CCTA scan diagnostic phases and to compare the CCTA image quality with the use of 75% (Group 75%) versus the CS diagnostic phase (Group CS) as the center of the acquisition window.
## 2. Materials and Methods
This study, which was conducted at a single research center, was approved by the local ethics committee. Because of this study’s retrospective design, the requirement for informed consent prior to study participation was waived.
### 2.1. Patients
The Target CTA scan was applied to patients with sinus rhythm and an HR ≤ 75 bpm. For patients with arrhythmias, a different acquisition program had to be applied in order to run an arrhythmia exclusion program. Thus, patients with sinus rhythm and an HR ≤ 75 bpm were included in our study. We retrospectively reviewed the records of 162 consecutive patients who underwent CCTA between October 2013 and February 2014. In December 2013, we started to adjust the center of the CCTA acquisition window for Target CTA using the CS diagnostic phase. Single-volume Target CTA mode scanning was not used for 81 patients, for the following reasons: nonsinus rhythm (n = 13); single beat scan with a long acquisition window because of heart rate fluctuation (n = 17); multiple heart beat acquisition (n = 23); wide-volume scanning performed to evaluate bypass grafts or the aorta (n = 16); ventricular evaluation prior to catheter ablation (n = 3); or an irregular protocol for the evaluation of complex cardiac anomaly (n = 9). The final study group included 81 patients (Group 75%, n = 40; Group CS, n = 41) who were scanned because of known or suspected coronary artery disease with chest pain and/or dyspnea, or abnormal electrocardiogram, echocardiogram, or treadmill results.
### 2.2. CT Data Acquisition
All patients underwent CT angiography performed using a second generation 320-detector row CT scanner, and prospective electrocardiogram-gated scans were performed in one heartbeat. The scanning parameters were as follows: detector configuration, 320 × 0.5 mm; gantry rotation time, 275 ms; and tube potential, 120 kVp. The tube current was set at 150 mA for the calcium scoring scan and from 250 mA to 760 mA for CCTA depending on patient body weight. The mean effective dose was derived from the dose length product multiplied by a conversion coefficient for the chest (κ = 0.014 mSv × mGy⁻¹ × cm⁻¹) [14]. The scan length ranged from 12 to 16 cm depending on the size of the heart. For CS, the center of the acquisition window was set at 75% throughout the period. Until December 2013, the CS reconstruction phase was not adjustable. After December 2013, the reconstruction phase became adjustable, allowing the diagnostic phase with minimal artifacts to be determined at the CT console using PhaseNavi software. The reconstructed slice thickness was 1.0 mm with a 1.0 mm increment. Images were reconstructed using a “medium soft tissue” kernel (FC04). We routinely use this low-pass kernel for cardiac CT because it reduces beam hardening artifacts originating from the vertebra and the aorta. For CCTA, the center of the acquisition window was empirically fixed at 75% of the RR interval until November 2013 (Group 75%). After December 2013, the center of the acquisition window was set at the CS diagnostic phase value (Group CS). For CCTA scans in both groups, the phase with minimum artifacts was determined at the CT console using PhaseNavi software. Half cycle reconstruction was performed for all patients, meaning that there was a full cycle of X-ray exposure but only a half cycle of data was used for reconstruction. The reconstructed slice thickness was 0.50 mm with an increment of 0.25 mm. Images were reconstructed using a “medium soft tissue” kernel (FC04) with Adaptive Iterative Dose Reduction in 3D (AIDR-3D) strong and symmetric cone beam reconstruction [15, 16]. Images were transferred to a workstation (ZIO Station System; Ziosoft, Tokyo, Japan) for processing. Patients received 22.2 mg I/kg/s of iopamidol 370 mg I/mL (Iopamiron 370; Bayer, Osaka, Japan). Contrast medium was injected for 10 sec, followed by a 50:50 mixture of contrast medium and saline for 4 sec and a 30 mL saline flush. Bolus tracking in the ascending aorta was performed using a double threshold of 100 and 260 Hounsfield Units (HU). Patients were instructed to breathe in and hold their breath after the first threshold. The scan started just after the second threshold. Nineteen patients were being treated with an oral β-blocker (e.g., bisoprolol and carvedilol) as a part of their baseline medication. An oral β-blocker (20–40 mg of metoprolol) was administered to 18 patients with an HR higher than 65 bpm. The patients were told to take the medicine 2 hours prior to CT angiography. Landiolol (Corebeta; Ono Pharmaceutical, Osaka, Japan) was administered intravenously at 0.125 mg/kg when a patient’s HR was over 75 bpm during the time between the calcium scoring scan and CCTA. Patients underwent CCTA 4–7 min after injection (n = 11). No patient had any contraindication preventing β-blocker use, and no β-blocker side effects were observed or reported. All patients received 2.5 mg sublingual isosorbide dinitrate (Nitorol; Eisai, Tokyo, Japan) before imaging.
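For reference, the dose conversion in this protocol is a one-line calculation; the sketch below (with a hypothetical DLP value, since individual DLPs are not reported here) applies the chest conversion coefficient cited above:

```python
K_CHEST = 0.014  # conversion coefficient, mSv / (mGy * cm) [ref 14]

def effective_dose_msv(dlp_mgy_cm: float) -> float:
    """Effective dose = dose length product x chest conversion coefficient."""
    return dlp_mgy_cm * K_CHEST

# A hypothetical DLP of 125 mGy*cm gives 1.75 mSv, within the range reported
# for both groups (1.87 +/- 0.75 and 1.70 +/- 0.66 mSv; see Table 2).
print(effective_dose_msv(125.0))  # 1.75
```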
### 2.3. Subjective Image Analysis
Subjective image quality was rated by Kodai Yamamoto and Eriko Maeda, two cardiovascular radiologists with 5 and 11 years of experience, respectively. Both readers were blinded to the clinical information and to the details of the CT data sets, which were provided in a randomized order. The Society of Cardiovascular Computed Tomography 18-segment classification was applied for the analysis of coronary angiography data [17]. Image quality was graded on a per-segment level, and a study was deemed diagnostic when every anatomically present segment (≥1.5 mm) could be assessed for the presence of atherosclerosis and severity of stenosis. The results were scored according to a four-point scale as previously described: 4, excellent, no artifact; 3, good, mild artifact; 2, acceptable, moderate artifact present, but images still interpretable; 1, unable to evaluate, severe artifact making interpretation impossible [18]. When scores differed between the two readers, the final score was determined by review and consensus.
### 2.4. Objective Image Analysis
Regions of interest (ROIs) were drawn on a cross-sectional image, at the proximal ascending aorta; the proximal, middle, and distal segments of the right coronary artery; the left anterior descending artery; and the left circumflex artery. The average CT number (in HU) and noise were recorded for each segment using a circular ROI. The ROI was made as large as possible while carefully avoiding inclusion of the vessel wall to prevent partial volume effects (Figure 1). An ROI was placed immediately next to the vessel contour on an axial image and the average CT number was recorded. The overall signal-to-noise ratio (SNR) was defined as the average standard deviation of the circular ROI placed at the ascending aorta. The SNR of each coronary vessel was defined as the average standard deviation of the circular ROIs placed at the proximal, middle, and distal segments of the vessel. The overall contrast-to-noise ratio (CNR) was calculated as the difference in the CT number between the ascending aortic lumen and nearby connective tissue divided by the overall image noise. For each coronary vessel, CNR was defined as the average CNR of the circular ROIs placed at the proximal, middle, and distal segments of the vessel. We expected that the ascending aorta SNR and CNR would not change between the groups, because aortic image noise is unlikely to be related to the motion of the coronary arteries. Therefore, we calculated the SNR and CNR at the ascending aorta as a control.

Figure 1. Examples of regions of interest (ROIs) drawn on a cross-sectional image, at the proximal ascending aorta and at the proximal, middle, and distal segments of the left anterior descending artery. Black circles show ROIs drawn inside the lumens of the arteries, while white circles show ROIs drawn in the nearby connective tissue to calculate the contrast-to-noise ratio.
### 2.5. Statistical Analysis
A power analysis was performed to determine the minimal cohort size required using G∗Power version 3.1.9.2 (Universität Düsseldorf, Düsseldorf, Germany). Our hypothesis was that per-segment subjective image quality would improve in Group CS. To detect a difference of 0.1 in subjective image quality score, the minimum sample size was determined to be a total of 527 segments (approximately 30 patients) at 0.90 power. Sample size calculations were based on a type-1 error (α) of 0.05 [19]. The minimal acquisition window scans (Target CTA mode) for October 2013 were reviewed (n = 20) and the reconstruction window was calculated from the console information. The reconstruction window was invariably proven to be 122 ms. To determine the percentage of patients in Group CS whose best reconstruction phase would not have been included in the scan if the fixed 75% scan had been applied, we compared the actual exposure time with a “virtual” 75% exposure (i.e., RR interval [ms] × 0.75 ± 61 ms). The correlation between the CS and CCTA scan diagnostic phases was calculated using Spearman’s correlation coefficient analysis. All statistical analyses were performed using JMP software (version 10; SAS, Cary, NC, USA). Quantitative variables were expressed as the mean ± standard deviation and group differences were tested by Student’s t-test. Categorical values were expressed as the number (percentage) and were compared using Fisher’s exact test or the chi-squared test. Statistical significance was accepted when p < 0.05.
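The “virtual” 75% exposure reduces to simple arithmetic on the RR interval. The sketch below (helper names are ours) reproduces the worked example from the Introduction, where an RR interval of 1000 ms and the fixed 122 ms reconstruction window give an acquisition window of 689–811 ms, and tests whether a given diagnostic phase would have been covered:

```python
RECON_HALF_WIDTH_MS = 61.0  # half of the fixed 122 ms reconstruction window

def virtual_window_ms(rr_ms: float, center_pct: float = 75.0):
    """Window [center - 61 ms, center + 61 ms] around center_pct of the RR."""
    center = rr_ms * center_pct / 100.0
    return center - RECON_HALF_WIDTH_MS, center + RECON_HALF_WIDTH_MS

def covers(rr_ms: float, phase_pct: float, center_pct: float = 75.0) -> bool:
    """Would a fixed-center scan have included this diagnostic phase?"""
    lo, hi = virtual_window_ms(rr_ms, center_pct)
    return lo <= rr_ms * phase_pct / 100.0 <= hi

print(virtual_window_ms(1000.0))  # (689.0, 811.0), as in the Introduction
print(covers(1000.0, 68.0))       # False: 680 ms falls outside 689-811 ms
```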
## 3. Results
There was no group difference in patient demographics or scanning parameters (Tables 1 and 2). The mean best reconstruction phase (%) for the CCTA scan was significantly earlier for Group CS, although a widely variable range was observed among the diagnostic phases (Table 3, Figure 2). For eight patients in Group CS (19.5%), the diagnostic phase occurred outside of the virtual 75% exposure (Figure 3). A significant correlation was detected between the CS and CCTA diagnostic phases (Spearman’s correlation coefficient 0.351, p = 0.02, R² = 0.113).
Table 1. Patient demographics.

| Parameter | Group 75% | Group CS | p value |
|---|---|---|---|
| Number of patients | 40 | 41 | |
| Male/female | 28/12 | 21/20 | 0.14 |
| Age (years) | 66.8 ± 11.8 | 66.6 ± 9.9 | 0.94 |
| Body weight (kg) | 64.1 ± 14.8 | 60.7 ± 11.5 | 0.26 |
| Body mass index (kg/m²) | 24.4 ± 3.8 | 24.2 ± 3.9 | 0.83 |
| Beta-blocker+ | 15 (38) | 19 (46) | 0.28 |
| Heart rate (bpm) | 58.9 ± 6.5 | 57.7 ± 7.0 | 0.44 |
| Coronary risk factor+ | | | |
| Hypertension | 21 (53) | 26 (63) | 0.18 |
| Diabetes mellitus | 10 (25) | 12 (29) | 0.43 |
| Dyslipidemia | 21 (53) | 25 (61) | 0.29 |
| Smoking | 17 (43) | 17 (41) | 0.55 |
| Family history | 3 (8) | 5 (12) | 0.37 |

+Data represent the number of patients (percentage).
Table 2. Scanning parameters.

| Parameter | Group 75% | Group CS | p value |
|---|---|---|---|
| Contrast medium (mL) | 45.5 ± 9.4 | 42.8 ± 8.0 | 0.17 |
| Injection rate (mL/sec) | 3.8 ± 0.8 | 3.6 ± 0.6 | 0.13 |
| Tube current (mA) | 397 ± 125 | 354 ± 92 | 0.08 |
| Scan length (cm) | 13.1 ± 1.3 | 12.9 ± 1.2 | 0.47 |
| Effective dose (mSv) | 1.87 ± 0.75 | 1.70 ± 0.66 | 0.28 |
Table 3. Comparison of reconstruction phases between both reconstruction methods.

| | Group 75% | Group CS | p value |
|---|---|---|---|
| CS scan (%), average | 75 (unadjustable) | 73.9 ± 3.0 | N/A |
| CS scan (%), range | N/A | 67.0–85.3 | |
| CCTA scan (%), average | 75.7 ± 3.2 | 73.6 ± 4.5 | 0.013∗ |
| CCTA scan (%), range | 70.2–81.2 | 60.8–82.0 | |

∗Statistically significant.
Figure 2. Scatter plot comparing the calcium scoring (CS) scan and coronary computed tomography angiography (CCTA) diagnostic phases for each patient. The line represents the predictive formula for the CCTA diagnostic phase: CCTA diagnostic phase = 36.3 + 0.5 × CS diagnostic phase.
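The fitted line in Figure 2 can be read as a simple predictor; a minimal sketch (illustrative only, using the regression reported in the caption) applies it to the case shown in Figure 3:

```python
def predicted_ccta_phase(cs_phase_pct: float) -> float:
    """Predictive formula from Figure 2: CCTA = 36.3 + 0.5 x CS (percent)."""
    return 36.3 + 0.5 * cs_phase_pct

# A CS diagnostic phase of 68% (the Figure 3 case) predicts about 70.3%,
# close to the observed CCTA diagnostic phase of 69.1%.
print(predicted_ccta_phase(68.0))  # 70.3
```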
Figure 3. Presurgical screening coronary computed tomography angiography (RR interval = 1114 ms) performed in Target CTA mode on a 61-year-old male with a history of aortic valve replacement. Curved multiplanar reconstruction images show the right coronary (a) and left anterior descending arteries (b) with minimal motion artifacts. The diagnostic phase was 68% for the calcium scoring scan and 69.1% for coronary computed tomography angiography. The virtual window of 75% exposure was calculated as 70.0–80.4%, which does not include the CCTA diagnostic phase.

Among the 1458 total segments (18 segments in each of the 81 patients), 137 segments were not evaluable because the segment was absent or too small (60 and 77 segments for Group 75% and Group CS, resp.). The subjective image quality scores were significantly better in Group CS, both overall and in branch-specific analyses (Table 4). Branch-specific analyses of objective image quality scores were also higher in Group CS (Table 5). Interobserver agreement on subjective image quality was “good” (κ = 0.68). When patients with poor interobserver agreement were defined as those with a difference in subjective score between the two graders of 2 or more in more than five segments, we found that just four patients qualified as having poor interobserver agreement, due to either the presence of dense calcification or multiple stents.
Table 4. Subjective image quality.

| | Group 75% | Group CS | p value |
|---|---|---|---|
| Overall | 3.20 ± 0.66 | 3.58 ± 0.63 | <0.0001∗ |
| RCA | 3.18 ± 0.65 | 3.63 ± 0.60 | <0.0001∗ |
| LMT + LAD + HL | 3.23 ± 0.66 | 3.58 ± 0.62 | <0.0001∗ |
| LCX | 3.17 ± 0.67 | 3.55 ± 0.67 | <0.0001∗ |

∗Statistically significant. RCA: right coronary artery (#1–4 and #16). LMT + LAD + HL: left main trunk, left anterior descending, and high lateral branch (#5–10 and #17). LCX: left circumflex artery (#11–15 and #18).
**Table 5.** Objective image quality.

| | Group 75% | Group CS | p value |
|---|---|---|---|
| Signal-to-noise ratio | | | |
| Overall | 21.5 ± 2.0 | 21.5 ± 2.1 | 0.98 |
| RCA | 20.6 ± 4.6 | 13.4 ± 3.1 | <0.0001∗ |
| LAD | 21.1 ± 4.3 | 14.8 ± 2.8 | <0.0001∗ |
| LCX | 19.7 ± 3.3 | 16.1 ± 3.0 | 0.0038∗ |
| Contrast-to-noise ratio | | | |
| Overall | 25.2 ± 5.7 | 23.1 ± 4.0 | 0.24 |
| RCA | 27.5 ± 5.4 | 40.7 ± 12.5 | 0.0023∗ |
| LAD | 26.9 ± 7.0 | 35.6 ± 9.9 | 0.015∗ |
| LCX | 26.2 ± 6.5 | 31.4 ± 9.4 | 0.112 |

∗Statistically significant.
## 4. Discussion
This is the first report on the use of the CS diagnostic phase as the center of the CCTA acquisition window for Target CTA mode scanning. Group CS image quality was significantly better than that for Group 75% on both subjective and objective evaluations. The CS and CCTA diagnostic phases were both earlier than the empirically derived 75%, and the diagnostic phase fell outside the 75%-centered acquisition window in 19.5% of cases. The premise of this study, a correlation between the CS and CCTA diagnostic phases, was also confirmed: the correlation coefficient between the two was 0.351 (p=0.02), indicating a weak positive correlation [20].

The greatest advantage of adjusting the center of the acquisition window using CS, instead of applying a fixed percentage, was an improvement in image quality due to the individual adjustment of the window center. The necessity of this adjustment is based on the wide individual variation in the diagnostic phase and the significant correlation between the CS and CCTA scan diagnostic phases. The major disadvantage of this method is the increased workload during the scan, as it requires several additional steps compared with using a fixed percentage: CS images need to be reconstructed with a narrower field of view, and the diagnostic phase must be identified on multiple planes using cardiac-phase search software and then reconstructed using the searched phase. This entire sequence must be completed in a timely fashion (i.e., before the CCTA scan) and must be repeated for certain "difficult" cases. Therefore, the phase search usually requires the input of a radiologist or a technologist in addition to the scanning technologist.

There is a worldwide trend to reduce radiation exposure during cardiac CT. Indeed, radiologists should make their best effort to achieve "as low as reasonably achievable (ALARA)" radiation exposure during every examination. The Target CTA scan is a product of the response to this mandate. However, the present study showed wide individual variation in the cardiac CT diagnostic phase. Radiologists should therefore tailor the center of the acquisition window during Target CTA scanning or set the acquisition window wider than the narrowest setting. For instance, Steigner et al. suggested that a 72–81% acquisition window has a good probability of including the diagnostic phase for 95% of coronary arteries [21]. If Target CTA is used without tailoring the center of the acquisition window using CS, the accompanying physician or technologist should at least look for motion artifacts on the CS scan before deciding to use Target CTA for the CCTA scan. If the coronary arteries on the CS images contain motion artifacts, setting an acquisition window wider than the Target CTA minimum (e.g., 70–80%) raises the probability of obtaining CCTA images without motion artifacts.

There are some limitations to our study. First, the heart usually straddles two volumes in CS, because CS needs to cover an area wider than the heart. Because the RR intervals differ between those volumes, the reconstruction window becomes narrow when there is a large difference between them, and when the difference is too big, the reconstruction requires an artificial adjustment of the position of one of the R waves. In this study, cases that required such adjustment were considered arrhythmic and were excluded, because a longer CCTA acquisition window was applied to such cases. Second, the center of the CS acquisition window was fixed at 75%. This means that if the diagnostic phase lay at an extreme such as 55% or 92%, it could not be captured, even when the center of the acquisition window was adjusted using CS. In fact, in some of our cases the CS diagnostic phase was located at the earliest pole of the scan, and likewise the best CCTA reconstruction phase was found at the earliest pole; in these cases an even better phase likely occurred even earlier. Third, many cases exhibited a gap between the CS and CCTA diagnostic phases, as expected from the correlation coefficient; in cases with large gaps, an even better phase may exist beyond the CCTA acquisition window. Fourth, the sample size was limited, because the average Group CS diagnostic phase and the percentage of patients with values outside of the 75% fixed acquisition window were derived from a small number of patients. This point could be addressed by repeating the investigation in a larger population. In addition, further studies should be performed to determine whether the method described herein is also effective with other CT systems, such as 256-row CT or dual-source CT.
## 5. Conclusions
This study found that the CS and CCTA diagnostic phases were significantly correlated, with average diagnostic phases of 73.9% and 73.6%, respectively, although the phases showed substantial interindividual variation. CCTA image quality in Target CTA mode was significantly better when the center of the acquisition window was adjusted using CS rather than a fixed percentage.
---
*Source: 1017851-2016-02-10.xml* | 1017851-2016-02-10_1017851-2016-02-10.md | 34,077 | Diagnostic Phase of Calcium Scoring Scan Applied as the Center of Acquisition Window of Coronary Computed Tomography Angiography Improves Image Quality in Minimal Acquisition Window Scan (Target CTA Mode) Using the Second Generation 320-Row CT | Eriko Maeda; Kodai Yamamoto; Shigeaki Kanno; Kenji Ino; Nobuo Tomizawa; Masaaki Akahane; Rumiko Torigoe; Kuni Ohtomo | The Scientific World Journal
(2016) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2016/1017851 | 1017851-2016-02-10.xml | ---
---
*Source: 1017851-2016-02-10.xml* | 2016 |
# Immunological Changes in Peripheral Blood of Ankylosing Spondylitis Patients during Anti-TNF-α Therapy and Their Correlations with Treatment Outcomes
**Authors:** Rongjuan Chen; Hongyan Qian; Xiaoqing Yuan; Shiju Chen; Yuan Liu; Bin Wang; Guixiu Shi
**Journal:** Journal of Immunology Research
(2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1017938
---
## Abstract
Tumor necrosis factor-α (TNF-α) inhibitors are the main type of biological disease-modifying antirheumatic drug and are effective in treating ankylosing spondylitis (AS) that does not respond to nonsteroidal anti-inflammatory drugs. However, the impact of TNF-α inhibitors on immune cells in patients with AS is still not clearly defined, and the impact of immune cells on treatment response is also largely elusive. This study aimed to evaluate the longitudinal changes of circulating immune cells after anti-TNF-α therapy and their associations with treatment response in AS patients. Thirty-five AS patients receiving anti-TNF-α therapy were included in this prospective observational study. The frequencies of immune cells including Th1, Th2, Th17, regulatory T cells (Treg), T follicular helper cells (Tfh), and regulatory B cells (Breg) in the peripheral blood were measured by flow cytometry at baseline and at 4 time points after therapy. Circulating immune cells were compared between responders and nonresponders. This study suggested that anti-TNF-α therapy significantly reduced circulating proinflammatory immune cells such as Th17 and Tfh but significantly increased the percentages of circulating Treg and Breg. Moreover, circulating Breg may be a promising predictor of response to anti-TNF-α therapy in AS patients.
---
## Body
## 1. Introduction
Ankylosing spondylitis (AS) is a chronic inflammatory rheumatic disease characterized by inflammatory back pain and progressive ankylosis of the spine [1]. AS can result in impaired physical function, including disability, and markedly reduced quality of life [2, 3]. Nonsteroidal anti-inflammatory drugs (NSAIDs) are the recommended first-line treatment for AS [4]. However, NSAIDs are not effective for some AS patients, especially those at later stages, and a large proportion of AS patients remain poorly controlled in clinical practice [5]. Those patients need additional treatment with conventional synthetic disease-modifying antirheumatic drugs (DMARDs) or biological DMARDs [5–7]. Tumor necrosis factor-α (TNF-α) inhibitors are the main type of biological DMARD and have well-established efficacy in treating AS, which has largely revolutionized the treatment of AS over the past two decades [8]. Nevertheless, the treatment response varies, and some patients face a high risk of infections such as tuberculosis [9, 10]. Given the heterogeneity in both pathogenesis and treatment outcomes, improving personalized therapy strategies for AS patients is an urgent need [11, 12]. To improve treatment outcomes, minimize infection risk, and reduce costs, it is critical for clinicians to identify responders to specific biological DMARDs and make adequate therapeutic decisions.

The roles of T cell subsets in the pathogenesis of AS have been reported in many studies [13–15], and Th17 cells play a critical pathogenic role in the development of AS [13, 16]. Apart from Th17, other T cell subsets such as Th1 [14, 17] and T follicular helper cells (Tfh), which are correlated with B cell subtypes [18–20], are also involved in the pathogenesis of AS. In addition, several studies confirm that B cells participate in the pathogenesis of AS, for example, through increased regulatory B cells (Breg) in the peripheral blood of AS patients [21–24]. Nevertheless, the impact of anti-TNF-α therapy on these immune cells in AS patients is still not clearly defined, and the impact of immune cells on treatment response is also largely elusive. To evaluate the longitudinal changes of circulating immune cells after anti-TNF-α therapy and their associations with treatment response in AS patients, we performed a prospective observational study of AS patients receiving anti-TNF-α therapy.
## 2. Methods
### 2.1. Study Design and Patients
Active AS patients aged 20-65 years were recruited at the Department of Rheumatology of The First Affiliated Hospital of Xiamen University. The patients were recruited prospectively and followed for up to 6 months after beginning anti-TNF-α therapy. Inclusion criteria were as follows: (1) the patient met the 1984 modified New York classification criteria for AS; (2) no treatment history of biological DMARDs such as anti-TNF, anti-IL-17, or anti-IL-6 agents; (3) a Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) score of no less than 1; (4) the clinical characteristics and laboratory data analyzed in this study were available; (5) the patient received a standard course of an anti-TNF-α inhibitor; and (6) no obvious infection such as tuberculosis. Exclusion criteria were as follows: (1) prior treatment with biologics such as anti-TNF or anti-IL-6 drugs, (2) a history of spinal or joint surgery, (3) other serious diseases such as cancer or cardiovascular disease, (4) serious adverse events leading to treatment discontinuation, and (5) the clinical characteristics and laboratory data analyzed in the present study were not recorded. A total of 35 AS patients meeting both the inclusion and exclusion criteria were finally included between September 2018 and January 2019. The study was approved by the ethics committee of our hospital, and written informed consent was obtained from the included patients.
### 2.2. Outcome Assessment and Data Collection
The primary endpoint was a BASDAI improvement of no less than 50% at 6 months. Disease activity was routinely monitored at 5 treatment stages: baseline, 1 month, 2 months, 3 months, and 6 months. Patients with a BASDAI improvement of at least 50% after 6 months of treatment were defined as responders, while those who failed to achieve this improvement were defined as nonresponders. Other clinical and laboratory parameters, such as disease duration, erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP), were recorded prospectively at the follow-up visits.
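A minimal sketch of the responder rule above, assuming per-patient BASDAI scores at baseline and 6 months; the function name and the example scores are hypothetical.

```python
# Sketch of the BASDAI 50 responder definition (primary endpoint).
# The scores below are illustrative placeholders, not study data.
def is_responder(basdai_baseline: float, basdai_6m: float) -> bool:
    """Responder = improvement of at least 50% in BASDAI at 6 months."""
    return (basdai_baseline - basdai_6m) / basdai_baseline >= 0.5

patients = {"AS01": (4.8, 1.9), "AS02": (4.1, 2.6)}
for pid, (b0, b6) in patients.items():
    print(pid, "responder" if is_responder(b0, b6) else "nonresponder")
```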
### 2.3. Sample Collection and PBMC Isolation
Peripheral venous blood was collected from each patient at baseline (before treatment) and at 4 follow-up stages after the initiation of anti-TNF treatment (1 month, 2 months, 3 months, and 6 months). Serum and plasma were collected for the measurement of liver and renal function parameters. 5 ml of peripheral venous blood was used for PBMC isolation by Ficoll-Paque density gradient centrifugation. The isolated PBMCs were stored at −80°C until analysis.
### 2.4. Flow Cytometry Phenotype
The frequencies of immune cells including Th1, Th2, Th17, regulatory T cells (Treg), Tfh, and Breg in the peripheral blood were measured by flow cytometry. Briefly, PBMCs were isolated and incubated with PMA (10 ng/ml, eBioscience) and BFA (10 μg/ml, eBioscience) for 4 h, then harvested and washed twice for 30 min. Cells were then stained with anti-CD4 and anti-CD25 for 30 min at 4°C. For intracellular staining, antibodies against IFN-γ, IL-4, and IL-17A were used to identify Th1, Th2, and Th17, respectively. Intracellular FoxP3 was also stained, and CD4+FoxP3+ cells were defined as Treg. CD19+CD24HighCD38High cells were defined as Breg, and CD4+PD1+CXCR5+ cells were defined as Tfh. The following anti-human antibodies for surface or intracellular staining were used: PE-CY7-anti-CD4, PE-CY5.5-anti-CD25, FITC-anti-IL-17A, PE-anti-Foxp3, FITC-anti-IFN-γ, PE-anti-IL-4, FITC-anti-CD19, PE-anti-CD24, PE-CY7-anti-CD38, PE-anti-PD1, and FITC-anti-CXCR5 (all eBioscience).
### 2.5. Statistical Analysis
Continuous variables were presented as mean ± standard deviation (SD) or median with quartiles (Q25–Q75). Differences between responders and nonresponders were determined using Student's t-test or the Mann-Whitney U test. Differences between time points were assessed with the paired t-test. The ability of baseline immune cell frequencies to predict treatment response was assessed by receiver operating characteristic (ROC) analysis, and the area under the ROC curve (AUC) was calculated. Statistical analyses were performed with STATA (Version 12.0, StataCorp, Texas, USA). Two-sided P values less than 0.05 were considered statistically significant.
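The analyses were run in STATA; the sketch below reproduces the two between-group tests with SciPy equivalents on placeholder data (the means and SDs loosely follow the baseline Breg values reported in Section 3.3), purely as an illustration of the workflow.

```python
# Illustrative SciPy equivalents of the between-group comparisons described
# above; the simulated values are placeholders, not patient data.
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(0)
responders = rng.normal(3.62, 1.70, 21)      # e.g., baseline Breg % (responders)
nonresponders = rng.normal(4.97, 2.05, 14)   # e.g., baseline Breg % (nonresponders)

t_stat, p_t = ttest_ind(responders, nonresponders)     # Student's t-test
u_stat, p_u = mannwhitneyu(responders, nonresponders)  # Mann-Whitney U test
print(f"t-test p = {p_t:.3f}; Mann-Whitney p = {p_u:.3f}")
```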
## 2.1. Study Design and Patients
Active AS patients aged 20-65 years were recruited in the department of rheumatology in The First Affiliated Hospital of Xiamen University. The patients were recruited prospectively and followed up to 6 months after beginning anti-TNF-α therapy. Inclusion criteria were as follows: (1) patients met the 1984 modified New York classification criteria for AS; (2) without treatment history of biological DMARDs such as anti-TNF agents, anti-IL-17 agents, and anti-IL-6 agents; (3) with a Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) score of no less than 1; (4) data of clinical characteristics and laboratory testing analyzed in this study were available; (5) receiving a standard treatment of anti-TNF-α inhibitors; and (6) without obvious infections such as tuberculosis. Exclusion criteria were as follows: (1) AS patients had been treated with biologics such as anti-TNF drugs or anti-IL-6 drugs, (2) patients with a history of spinal or joint surgery, (3) patients with other serious diseases such as cancer or cardiovascular diseases, (4) patients had serious adverse events and discontinued treatment, and (5) data of clinical characteristics and laboratory testing analyzed in the present study were not recorded. A total of 35 AS patients meeting both the inclusion and exclusion criteria were finally included between September 2018 and January 2019. The study was approved by the ethics committee of our hospital, and written informed consent was obtained from included patients.
## 2.2. Outcome Assessment and Data Collection
The primary endpoint was to achieve an improvement of no less than 50% in patients at 6 months according to BASDAI. Patients received routine monitoring of disease activity at 5 treatment stages including baseline, 1 month, 2 months, 3 months, and 6 months. Patients with a BASDAI 50% improvement after 6-month treatment were defined as responders, while those failed to gain a BASDAI 50% improvement were defined as nonresponders. Other clinical and laboratory parameters such as disease duration, erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP) were recorded prospectively at the follow-up visit.
## 2.3. Sample Collection and PBMC Isolation
Peripheral venous blood was collected from each patient at baseline (before treatment) and 4 follow-up stages after the initiation of anti-TNF treatment (1 month, 2 months, 3 months, and 6 months). Serum and plasma were collected for the measurement of liver and renal function parameters. 5 ml peripheral venous blood was used for PBMC isolation with Ficoll-Paque density gradient centrifugation. The isolated PBMCs were stored at −80°C until analysis.
## 2.4. Flow Cytometry Phenotype
The frequencies of immune cells including Th1, Th2, Th17, regulatory T cell (Treg), Tfh, and Breg in the peripheral blood were measured by flow cytometry. Briefly, PBMCs were isolated and incubated with PMA (10 ng/ml, eBioscience) and BFA (10μg/ml, eBioscience) for 4 h then harvested and washed twice for 30 min. Then, cells were stained with anti-CD4 and anti-CD25 for 30 min at 4°C. During the intracellular staining, antibodies against IFN-γ, IL-4, and IL-17A were according to stain Th1, Th2, and Th17, respectively. Intracellular FoxP3 was also stained, and CD4+FoxP3+ was used to determine Treg. CD19+CD24HighCD38High cells were determined as Breg. CD4+PD1+CXCR5+ cells were determined as Tfh. The following anti-human antibodies for surface staining or intracellular staining were used: PE-CY7-anti-CD4, PE-CY5.5-anti-CD25, FITC-anti-IL-17A, PE-anti-Foxp3, FITC-anti-IFN-γ, PE-anti-IL-4, FITC-anti-CD19, PE-anti-CD24, PE-CY7-anti-CD38, PE-anti-PD1, and FITC-anti-CXCR5 (all eBioscience).
## 2.5. Statistical Analysis
Continuous variables were presented asmean±standarddeviationSDormedianwithquartilesQ25−Q75. Difference between responders and nonresponders was determined using Student’s t-test or Mann-Whitney U test. Difference for data at different time points was assessed with paired t-test. The roles of immune cells at baseline in predicting treatment response were assessed by receiver operating characteristic (ROC) analysis, and the area under the ROC curve (AUC) was calculated. Statistical analyses were performed with STATA (Version 12.0, StataCorps, Texas, USA). Two-sided P values less than 0.05 were considered statistically significant.
## 3. Results
### 3.1. Clinical Characteristics of AS Patients
Table 1 summarizes the clinical and laboratory characteristics of the AS patients. Among the 35 AS patients, 30 (85.7%) were male. The mean age was 33.1 ± 8.8 years, and the mean disease duration was 8.7 ± 5.1 years. At baseline, the mean ASDAS-CRP and BASDAI were 2.8 ± 0.8 and 4.4 ± 1.0, respectively. After 6 months of anti-TNF-α therapy, both ESR and CRP were significantly reduced (P<0.05; Table 1). The mean ASDAS-CRP significantly declined to 1.4 (P<0.001) and the mean BASDAI to 1.9 (P<0.001) at 6 months. Based on BASDAI, the response rate at 6 months after anti-TNF-α therapy was 60.0% (21/35).

**Table 1.** Clinical characteristics of the total 35 patients at baseline and after follow-up.

| Characteristic | At baseline | 6 months | P value |
|---|---|---|---|
| ESR (median [Q25–Q75]) | 21 (9–34) | 4 (2–11) | <0.001 |
| CRP (median [Q25–Q75]) | 6.2 (1.9–21.3) | 1.6 (0.5–3.9) | 0.002 |
| ASDAS-CRP | 2.8 ± 0.8 | 1.4 ± 0.8 | <0.001 |
| BASDAI | 4.4 ± 1.0 | 1.9 ± 1.3 | <0.001 |

AS: ankylosing spondylitis; data are shown as mean ± SD or median (Q25–Q75).
### 3.2. Changes of Circulating Immune Cells after Anti-TNF-α Therapy
Th1, Th17, and Tfh are common proinflammatory immune cells. After anti-TNF-α therapy, both Th17 and Tfh decreased gradually, and there was a modest but not significant reduction in Th1 (Figure 1). Anti-TNF-α therapy significantly reduced the percentage of circulating Th17 as early as 1 month after treatment (P<0.005), and the effect was maintained throughout the treatment course (Figure 1). The percentage of circulating Tfh was significantly reduced from 3 months after treatment (P<0.005) and remained significantly lower at 6 months (P<0.005). The mean percentage of circulating Th17 significantly decreased from 0.75 to 0.38 after 6 months of anti-TNF-α therapy (P<0.001; Table 2).

**Figure 1.** Changes of circulating immune cells after anti-TNF-α therapy in AS patients. The percentages of immune cells during follow-up were compared with those at baseline. ∗P<0.05; ∗∗P<0.005.

**Table 2.** Changes of immune cells among the total 35 patients during follow-up.

| Immune cells | 0 months | 6 months | P value |
|---|---|---|---|
| Treg | 5.62 ± 2.19 | 8.06 ± 1.98 | <0.001 |
| Th17 | 0.75 ± 0.37 | 0.38 ± 0.18 | <0.001 |
| Th17/Treg | 0.15 ± 0.10 | 0.05 ± 0.03 | <0.001 |
| Tfh | 2.92 ± 0.80 | 2.07 ± 0.62 | <0.001 |
| Th1 | 0.30 (0.13–1.27) | 0.18 (0.03–0.25) | 0.003 |
| Th2 | 0.38 (0.09–0.87) | 0.29 (0.14–0.89) | 0.71 |
| Th1/Th2 | 1.55 (0.44–5.33) | 0.28 (0.10–1.38) | <0.001 |
| Breg | 4.16 ± 1.94 | 6.52 ± 2.89 | <0.001 |
| CD3 | 73.23 ± 7.98 | 67.45 ± 10.15 | 0.010 |
| CD4 | 38.04 ± 5.31 | 32.25 ± 7.64 | <0.001 |
| CD8 | 28.49 ± 8.44 | 26.22 ± 8.62 | 0.27 |
| CD4/CD8 | 1.47 ± 0.51 | 1.38 ± 0.57 | 0.48 |

Data are shown as mean ± SD (standard deviation) or median (Q25–Q75).

Th2, Treg, and Breg are key immunoregulatory immune cells. After anti-TNF-α therapy, both Treg and Breg increased gradually, whereas the frequency of Th2 did not change significantly (Figure 1). The percentages of both Treg and Breg increased significantly from 1 month after treatment (P<0.005). The mean percentage of circulating Treg significantly increased from 5.62 to 8.06 after 6 months of anti-TNF-α therapy (P<0.001), and the mean percentage of circulating Breg significantly increased from 4.16 to 6.52 (P<0.001; Table 2).
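The longitudinal comparisons in Table 2 are paired (the same 35 patients at 0 and 6 months). Below is a minimal sketch of that paired t-test on simulated data, with means and SDs loosely matching the Treg row of Table 2; the values are placeholders, not the study measurements.

```python
# Sketch of the paired t-test behind the Table 2 comparisons; simulated
# placeholder values loosely matching the Treg row (5.62 -> 8.06).
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
treg_0m = rng.normal(5.62, 2.19, 35)            # Treg % at baseline
treg_6m = treg_0m + rng.normal(2.44, 1.50, 35)  # correlated 6-month values
t_stat, p = ttest_rel(treg_0m, treg_6m)
print(f"Treg {treg_0m.mean():.2f} -> {treg_6m.mean():.2f}, paired p = {p:.3g}")
```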
### 3.3. Correlations of Circulating Immune Cells with Response to Anti-TNF-α Therapy in AS Patients
Baseline disease characteristics such as age, disease duration, and ASDAS-CRP were comparable between responders and nonresponders (Table3). Compared with nonresponders, responders had lower levels of ESR (P=0.035) and CRP (P=0.018) but had higher BASDAI (P=0.042) (Table 3). Compared with those responders, nonresponders had a higher percentage of circulating Breg both at baseline and during follow-up (P<0.05) (Figure 2 and Tables 3 and 4). There was no obvious difference in the baseline percentages of other immune cells such as Th17, Treg, and Tfh between nonresponders and responders (Table 3), and similar findings were also found at 6 months after anti-TNF-α therapy (Table 3).Table 3
Differences in baseline clinical characteristics and immune cells between responders and nonresponders.
ItemsResponders (N=21)Nonresponders (N=14)P valueGender (male, %)18 (85.7%)12 (85.7%)1.00Age (year,mean±SD)32.86±7.5333.43±10.730.85Disease duration (year,mean±SD)8.10±5.219.71±5.070.37ESR (median[Q25-Q75])13 (6-30)29 (14-49)0.035CRP (median[Q25-Q75])2.70 (1.19-15.72)8.50 (6.65-33.63)0.018ASDAS-CRP2.73±0.722.98±0.840.35BASDAI4.68±0.873.99±1.050.042Treg5.73±2.445.45±1.830.712Th170.76±0.380.74±0.360.880Th17/Treg0.16±0.100.15±0.110.948Tfh2.89±0.812.95±0.810.816Th10.18 (0.13-0.64)0.81 (0.15-2.62)0.200Th20.43 (0.05-0.96)0.28 (0.11-0.74)0.749Th1/Th20.52 (0.30-4.26)2.30 (0.71-9.81)0.178Breg3.62±1.704.97±2.050.041CD373.62±7.8772.64±8.400.729CD438.02±5.4138.08±5.360.976CD828.26±8.0728.83±9.270.849CD4/CD81.48±0.541.45±0.460.878Data were shown asmean±SDstandarddeviationormedianQ25−Q75.Figure 2
Changes of circulating immune cells after anti-TNF-α therapy in AS patients stratified by treatment response. Red triangles denote responders, and black circles denote nonresponders. Differences between responders and nonresponders were compared at each time point. ∗P<0.05; ∗∗P<0.005.Table 4
Differences in immune cells at 6 months between responders and nonresponders.
| Immune cells | Responders (N=21) | Nonresponders (N=14) | P value |
| --- | --- | --- | --- |
| Treg | 7.79±2.22 | 8.47±1.52 | 0.323 |
| Th17 | 0.36±0.16 | 0.41±0.21 | 0.356 |
| Th17/Treg | 0.05±0.03 | 0.05±0.02 | 0.838 |
| Tfh | 2.08±0.65 | 2.06±0.59 | 0.917 |
| Th1 | 0.14 (0.03-0.24) | 0.20 (0.05-0.30) | 0.204 |
| Th2 | 0.29 (0.12-1.29) | 0.25 (0.17-0.77) | 0.590 |
| Th1/Th2 | 0.25 (0.09-1.19) | 1.18 (1.10-1.50) | 0.449 |
| Breg | 5.68±2.79 | 7.78±2.64 | 0.033 |
| CD3 | 68.02±9.50 | 66.60±11.37 | 0.690 |
| CD4 | 32.71±7.03 | 31.55±8.71 | 0.666 |
| CD8 | 25.76±7.62 | 26.92±10.20 | 0.703 |
| CD4/CD8 | 1.40±0.56 | 1.35±0.60 | 0.799 |

Data were shown as mean±SD (standard deviation) or median (Q25−Q75).

ROC analysis suggested that Breg was the best circulating cell in predicting response to anti-TNF-α therapy in AS patients (AUC=0.70, 95% CI 0.52-0.88). Other immune cells had limited roles in predicting response to anti-TNF-α therapy (Figure 3).Figure 3
Assessment of the roles of circulating immune cells at baseline in predicting response to anti-TNF-α therapy in AS patients through ROC analysis (AUC: area under the ROC curve).
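In outline, the ROC analysis behind Figure 3 can be sketched as follows. This is a minimal illustration using scikit-learn, with hypothetical per-patient values rather than the study's data; since nonresponders had the higher baseline Breg, Breg is scored against nonresponse here.

```python
# Minimal sketch of an ROC/AUC analysis of baseline Breg versus treatment
# response (as in Figure 3). Values are hypothetical, not the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

breg_baseline = np.array([2.1, 2.8, 3.0, 3.5, 4.0, 4.4, 5.2, 5.9, 6.3, 7.1])  # % Breg
nonresponder = np.array([0, 0, 0, 1, 0, 1, 0, 1, 1, 1])  # 1 = nonresponder

# AUC quantifies how well baseline Breg separates nonresponders from responders.
auc = roc_auc_score(nonresponder, breg_baseline)
fpr, tpr, thresholds = roc_curve(nonresponder, breg_baseline)
print(f"AUC = {auc:.2f}")
```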
## 4. Discussion
The impact of TNF-α inhibitors on immune cells in AS patients is still not clearly defined. Besides, the impact of immune cells on treatment response to TNF-α inhibitors is also largely elusive. This study was thus designed to prospectively evaluate the longitudinal changes of circulating immune cells after anti-TNF-α therapy and their associations with treatment response in AS patients. To our knowledge, this is the first prospective study investigating the impact of immune cells on treatment response to TNF-α inhibitors. We found that both Th17 and Tfh were reduced gradually by anti-TNF-α therapy, while Treg and Breg were increased gradually. Moreover, there were some immunological differences between treatment responders and nonresponders: nonresponders had a higher percentage of circulating Breg both at baseline and during follow-up, suggesting Breg as a possible predictor of response to anti-TNF-α therapy in AS patients.

AS is a heterogeneous disease, and it has become clear that AS patients have variable responses to anti-TNF-α therapy [25–27]. BASDAI at baseline has an important impact on assessing the response to therapy in AS [28]. Our work showed that, compared with nonresponders, responders had higher BASDAI scores and a significant response to anti-TNF-α therapy. This study revealed that about 60% of patients had at least 50% improvement in BASDAI at 6 months after anti-TNF-α therapy, while the others had a poor response. Identification of predictors of treatment outcomes in AS patients is critical for clinicians to make adequate therapeutic decisions and provide personalized therapy for AS patients [25–27, 29–31]. Currently, there is still a lack of definite predictors of response to anti-TNF-α therapy in AS patients. A recent systematic review and meta-analysis revealed that several clinical factors such as young age, male sex, and baseline BASDAI were predictors of better response to anti-TNF-α therapy in AS patients [28]. Serological markers such as baseline CRP and HLA-B27 were also identified as predictors of response to anti-TNF-α therapy in AS patients [28]. Nevertheless, personalized therapy for AS patients is still difficult owing to the limited evidence from clinical studies and the lack of effective predictors [27, 32]. In this study, we assessed the roles of peripheral immunological profiles such as Th17, Treg, and Breg in predicting response to anti-TNF-α therapy in AS patients, all of whom were HLA-B27 positive. We found that Breg was a possible predictor of response to anti-TNF-α therapy in AS patients, but the other immune cells such as Th17, Tfh, and Treg were not candidate predictors of response to anti-TNF-α therapy. These findings may be helpful to identify predictors of response to anti-TNF-α therapy and improve personalized therapy for AS patients from the perspective of immunological profiles in peripheral blood.

A major finding in our study is the potential role of Breg as a predictor of response to anti-TNF-α therapy in AS patients. Though anti-TNF-α therapy could increase the percentage of circulating Breg in AS patients, treatment nonresponders had a higher percentage of circulating Breg both at baseline and during follow-up, suggesting Breg as a possible predictor of response to anti-TNF-α therapy (Figure 2 and Tables 3 and 4). Several studies have assessed the changes of B cells with a regulatory phenotype in AS patients [21, 33, 34]. Cantaert et al. first reported that spondylarthritis patients had increased circulating B cells with a regulatory phenotype (CD19+CD5+) [21]. A study by Bautista-Caro et al. also reported that AS patients had increased circulating CD19+CD24hiCD38hi B cells with regulatory capacity, and that anti-TNF-α therapy could significantly reduce the number of these B cell subsets [34]. However, another study reported similar frequencies of CD24+CD38+ B cells between AS patients and controls, although those cells from AS patients produced less IL-10 and thus had functional defects [33]. To our knowledge, apart from those 3 studies, no other study on the roles of Breg in AS has been published. Our study revealed that Breg was possibly related to response to anti-TNF-α therapy in AS patients, and that patients with high frequencies of Breg may be predisposed to a poor response to anti-TNF-α therapy, which provides new insights into the roles of Breg in AS. Currently, the molecular mechanism underlying the roles of Breg in the pathogenesis of AS is still unclear and needs to be elucidated in future studies.

While there are emerging data providing evidence for the involvement of T cell subsets in the pathogenesis of AS, few studies have evaluated the longitudinal changes of circulating immune cells after anti-TNF-α therapy in detail [35–37]. Additionally, our knowledge about their roles in predicting response to anti-TNF-α therapy in AS patients is still limited. The findings from our study confirmed the reduction in the frequencies of circulating lymphocyte subsets after anti-TNF-α therapy in AS patients. This study suggested that anti-TNF-α therapy could significantly and selectively reduce circulating proinflammatory immune cells such as Th17 and Tfh, but significantly increased the percentage of circulating Treg. While most AS patients had gradual reductions in the percentages of CD4+ subsets such as Th17 and Tfh, some patients had increased percentages of circulating Th17 or Tfh after anti-TNF-α therapy, indicating variability in treatment response among AS patients. In addition, none of those T cell subsets were obviously related to response to anti-TNF-α therapy in AS patients (Figure 2 and Tables 3 and 4), suggesting that T cell subsets may have limited roles in predicting response to anti-TNF-α therapy in AS patients.

Our study suggested that anti-TNF-α therapy began to significantly increase the percentage of circulating Treg at 1 month after treatment (P<0.005), and its mean percentage significantly increased from 5.62 to 8.06 after 6 months of anti-TNF-α therapy (P<0.001, Table 2). It is uncertain whether the increase of Treg after anti-TNF therapy is responsible in part for the benefit of anti-TNF therapy in treating AS. A recent study suggested that expanding Treg through low-dose IL-2 was effective in treating AS [38]. The effect of anti-TNF-α therapy in treating AS may thus be at least partially mediated by its role in increasing Treg cells, which needs to be further studied.

This study used a prospective design and thus could provide a better assessment of the immunological changes in peripheral blood during anti-TNF-α therapy than studies using retrospectively collected data. However, the findings in our study should be interpreted with caution because the sample size was not large. Besides, the treatment duration in this study was 6 months, so neither the long-term efficacy of anti-TNF-α therapy nor its long-term impact on immune cells could be evaluated. Further studies with larger numbers of AS patients and long-term follow-up are recommended to provide more evidence.

In summary, this study suggested that anti-TNF-α therapy could significantly reduce circulating proinflammatory immune cells such as Th17 and Tfh, but significantly increased the percentages of circulating Treg and Breg in AS patients. Moreover, circulating Breg may be a promising predictor of response to anti-TNF-α therapy in AS patients. Further prospective cohort studies with larger numbers of AS patients and long-term follow-up are warranted, and the molecular mechanism underlying the roles of Breg in the pathogenesis of AS needs to be elucidated.
---
*Source: 1017938-2021-10-15.xml* | 1017938-2021-10-15_1017938-2021-10-15.md | 33,387 | Immunological Changes in Peripheral Blood of Ankylosing Spondylitis Patients during Anti-TNF-α Therapy and Their Correlations with Treatment Outcomes | Rongjuan Chen; Hongyan Qian; Xiaoqing Yuan; Shiju Chen; Yuan Liu; Bin Wang; Guixiu Shi | Journal of Immunology Research
(2021) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2021/1017938 | 1017938-2021-10-15.xml | ---
---
*Source: 1017938-2021-10-15.xml* | 2021 |
# Photocatalytic Oxygenation by Water-Soluble Metalloporphyrins as a Pathway to Functionalized Polycycles
**Authors:** Ivana Šagud; Irena Škorić
**Journal:** International Journal of Photoenergy
(2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1017957
---
## Abstract
Photocatalytic processes are present in natural biochemical pathways as well as in synthetic organic ones. This minireview will cover the field of photocatalysis that uses both free-base and, especially, metallated porphyrins as catalysts. While free-base porphyrins are valuable sensitizers for generating singlet oxygen, metalloporphyrins are even more versatile photocatalysts because of their coordination ability, enabling a wider range of oxidation reactions. They can be applied in autooxidation reactions, hydroxylations, or direct oxygen transfer producing epoxides. This review will mainly focus on how manganese and some iron porphyrins can be utilized for the functionalization of compounds that have a polycyclic skeleton in their structure. These kinds of compounds are notoriously taxing to obtain and difficult to further functionalize by conventional organic synthetic methods. We have focused on photocatalytic oxygenation reactions under mild conditions with the use of water-soluble porphyrins, as this has been proven to be a good tool for these transformations. In the photocatalytic reactions of some polycyclic heteroaromatic compounds, new polycyclic epoxides, enediones, ketones, alcohols, and/or hydroperoxides are obtained, depending on the catalyst applied. The application of anionic and cationic Mn(III) porphyrins under different reaction parameters results in different reaction pathways, generating a vast number of photocatalytic products. Recently, Co and Ni complexes have also been photophysically investigated and confirmed as potential photocatalysts for the functionalization of organic substrates.
---
## Body
## 1. Introduction
Photocatalytic processes have been demonstrated to be numerous in both natural and artificial surroundings, such as photosynthesis, which is the basis of the food chain on Earth [1], as well as the oxidative degradation of manifold damaging organic pollutants [2] and surfactants [3]. These processes are also used in photodynamic therapy (PDT) and the oxidation of organic compounds. Living organisms can also profit from the application of these processes, where different sensitizers such as porphyrins can, by excitation, lead to the in situ production of singlet oxygen and/or superoxide radical anion as oxidative agents in the tissue of malignant tumors [4]. Singlet oxygen can be employed for preparative purposes in synthetic organic chemistry, such as the synthesis of oxygenated derivatives of organic compounds. When a nonmetallated porphyrin is the photoactive species in the reaction, the longer-lived triplet is the key state in the photoinduced reactions. The porphyrin acts as a sensitizer and produces singlet oxygen via triplet quenching by the dissolved ground-state oxygen molecules. While free-base porphyrins are useful sensitizers for the production of singlet oxygen [5–9], metalloporphyrins are much more versatile photocatalysts due to their coordination ability, promoting a wider range of oxidation reactions, as was first demonstrated by Hennig et al. [10–12] (Figure 1).Figure 1
Structures of metalloporphyrins investigated by Hennig et al. [11].Metalloporphyrins can be applied in autooxidation, hydroxylation, or direct oxygen transfer giving epoxides [10, 11]. Cationic Mn(III) porphyrins have been shown to be effective catalysts for the oxygenation of α-pinene (Scheme 1).Scheme 1
Product distribution for photocatalytic oxygenation of α-pinene (1) [11, 17].The selective epoxidation giving compound 2 was observed in aqueous systems at a relatively low substrate/catalyst ratio (S/C = 500), whereas in aprotic organic solvents, such as benzene or toluene, allylic hydroxylation products (3–5) and keto products (6, 7) were formed [12]. Using various metalloporphyrins in acetonitrile, photocatalytic epoxidation of cyclooctene was also achieved [13]. Photocatalytic oxygenation of cycloalkenes [12–15] and other unsaturated heteroaromatics [5] was carried out by the application of both metallated and free-base porphyrins [5, 12–15], and it was confirmed that Fe- and Mn-porphyrin complexes give the most effective results as photocatalysts. Similarities and differences in the photocatalytic oxygenation pathways may shed light on the mechanisms of the diverse oxygenation processes, giving an indication for a suitable choice of catalyst and efficient conversion with high selectivity. The ligand charge, affecting its Lewis basicity, may modulate the catalytic activity of the complex through the metal center.

By photocatalytic oxygenation of various alkenes, with dioxygen and (5,10,15,20-tetraarylporphyrinato)iron(III) complexes, allylic oxygenation products and/or epoxides are obtained. The composition of the product mixture is influenced by the nature of the substrate and by the concentrations, but the axial ligands also play a role. Alkenes that have a strained double bond preferentially give epoxides, and allylic oxygenation is observed when unstrained alkenes are used. The proposed reaction mechanisms [16] give the oxoiron(IV) porphyrinate ((P)FeIV=O) as the catalytically active species. The selectivity of this species is related to the oxygenation of α-pinene with microsomal cytochromes P-450 and P-420 obtained from the yeast strain Torulopsis apicola [16]. Oxygenation products observed in both cases give evidence for the occurrence of an oxoiron(IV) heme species in microsomal cytochrome P-450-mediated reactions. The enantio-, regio-, and chemoselectivities of the photooxygenation with the iron(III) porphyrins and molecular oxygen are explained by the abstraction of the allylic hydrogen atom followed by catalyzed autoxidation and direct oxygen-transfer reactions [16]. Oxoiron(IV) porphyrinate exhibits a broad spectrum of oxygenation reaction pathways, as does the microsomal cytochrome P-450. It can be presumed that (P)FeIV=O would be an attractive candidate for an alternative and/or a competing analogous iron heme complex in cytochrome P-450-mediated oxygenation reactions.

Besides the water solubility of the metalloporphyrins, their photostability is of great importance, and in this respect manganese porphyrins are much more stable than the analogous iron complexes [11]. Taking this enhanced photostability of the porphyrin complexes into consideration, the manganese(III) porphyrinates were precisely the ones used in the more recent investigations.

According to previous studies on the oxygenation of cycloalkenes, the mechanism of the photooxygenation reaction is quite complicated, involving at least 3–4 elementary steps [5, 6, 9]. In some of the earlier studies [12, 17–30], as well as in more recent ones [13, 15], it is explained that when Mn(III) porphyrins act as photocatalysts, the (P)MnIV=O and (P)MnV=O intermediates play a key role among the in situ-produced reactive species in the oxygenation of cycloalkenes (Scheme 2).Scheme 2
Photocatalytic oxygenation of alkenes in the presence of metal porphyrinates [11] (A—autooxidation; B—oxygen rebound mechanism; and C—direct oxygen transfer).The production of (P)MnV=O in the primary photoreactions was detected in acetonitrile. (P)MnIV=O was produced by photoinduced homolysis of the O-Cl or O-N bond of axially coordinated chlorate or nitrate, respectively, in acetonitrile [13, 15], or via a ligand-to-metal charge-transfer process (also a photoinduced homolysis but that of the metal-ligand bond) with chloride or hydroxide axial ligands in aqueous systems [11, 12]. In the latter case, the Mn(II) species was formed in the primary photochemical step (equation 1) followed by the coordination of oxygen (equation 2).
(1) (P)MnIIIOH + hν → (P)MnII + ·OH

(2) 2 (P)MnII + O2 → 2 (P)MnIV=O

Equation (2) is an overall reaction comprising several steps. When experiments are run in water-acetone solvent mixtures, hydroxide or water is axially coordinated to the Mn(III) center. Hydroxyl radicals formed in the primary photochemical step most probably react with the organic solvent. The Mn(IV) complexes readily disproportionate and give highly reactive manganese(V)-oxo species (equation 3) [13, 15].
(3) 2 (P)MnIV=O + H+ ⇌ (P)MnV=O + (P)MnIIIOH

Disproportionation is much faster than synproportionation in this equilibrium system. A polar solvent promotes the disproportionation process, and in this case the process proceeds with a nearly diffusion-controlled rate constant [31]. The rate constants for epoxidation of olefins are several orders of magnitude higher for Mn(V)-oxo porphyrins than for the corresponding Mn(IV) species. It is considered that the (P)MnV=O species is the principal oxidant in the photocatalytic oxygenations in the investigated systems.
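Applying the law of mass action to equation (3) makes the role of protons explicit. This is a minimal formulation, assuming ideal behavior (concentrations in place of activities):

$$K = \frac{[(\mathrm{P})\mathrm{Mn^{V}{=}O}]\,[(\mathrm{P})\mathrm{Mn^{III}OH}]}{[(\mathrm{P})\mathrm{Mn^{IV}{=}O}]^{2}\,[\mathrm{H^{+}}]}$$

Written this way, a higher proton concentration (lower pH) shifts the equilibrium toward the strongly oxidizing (P)MnV=O species, which is consistent with the pH effects on product distribution discussed below.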
Structures of the cationic porphyrin (Ar1) and the anionic porphyrin (Ar2).

In the first paper published by these authors, the benzobicyclo(3.2.1)octadiene system 8 was investigated, comparing the oxygenation pathways of this furobicyclic skeleton in a thermal reaction with mCPBA and in photocatalytic processes mediated by nonmetallated and Mn(III) porphyrins. In the thermal reaction of 8 and 9 using mCPBA (Scheme 3), the enedione 11 is obtained via rearrangement of the intermediate epoxide 10 produced primarily from 8.

Scheme 3
Proposed reaction pathways for the thermal reactions of 8 and 9a-c.

There are two possible epoxidation sites on the furan derivative 8, but the authors, in agreement with the literature, propose that the initial epoxidation occurs at the substituted cyclohexyl side of the furan ring, as shown in the left-hand sequence in Scheme 3. This is reasonable on the basis that epoxidation should occur at the double bond bearing the more electron-donating functional group. Further oxygenation to 12 does not take place. Compounds 9a-c were also subjected to additional thermal transformations using mCPBA, producing all-trans-14a-c (formed via intermediates 13a-c). Further oxygenation to the product 15 was not observed.

Light-initiated oxygenation of 8 was carried out using various porphyrins as photocatalysts (Scheme 4).

Scheme 4
Proposed reaction pathways for the photocatalytic oxygenation of 8 (pH = 7, air saturation).

When a free-base porphyrin is the photoactive species, the longer-lived triplet state plays the key role in the photoinduced reactions. This catalyst operates as a sensitizer, producing singlet oxygen via triplet quenching. The product is the hydroxybutenolide 20 (Scheme 4), which is not detected in any other oxygenation process; this demonstrates that this is the only case in which singlet oxygen is the oxidative agent. As previous studies on the oxygenation of cycloalkenes have shown, the mechanism of this reaction is quite complicated [5, 6, 9]. When the anionic Mn(III)TSPP3− was used, the epoxy derivative 16 and the furan ring-opened products 11 and 17 were observed (Scheme 4). When the Mn(III)TMPyP5+ porphyrin is used, the hydroxy 21 and hydroperoxy 22 derivatives are the major products. The change in the sign of the ligand charge thus alters the product distribution. The lower Lewis basicity of the porphyrin ligand promotes oxygenation at C=C bonds, and this was verified by flash photolysis experiments with Mn(III) porphyrins bearing substituents of different electron demands [15, 31]. When the authors modified the experimental conditions (pH or oxygen concentration), the ratio of the products varied, but no further novel species were formed. Raising the pH to 10 increased the amount of the epoxide 16. The authors explain that higher pH suppresses further oxidation of this derivative, which can be linked to the role protons play in the disproportionation producing the Mn(V) species. Bubbling oxygen instead of air significantly increased the amounts of compounds 16 and 17. Studies [12, 13, 15] have pointed out that when Mn(III) porphyrins act as photocatalysts, (P)Mn(IV)=O and (P)Mn(V)=O intermediates play the key role in the oxygenation of cycloalkenes. The production of (P)Mn(V)=O was observed in acetonitrile as a consequence of heterolytic cleavage of the O-Cl bond in the axially coordinated perchlorate counterion [15, 31]. (P)Mn(IV)=O was produced by photoinduced homolysis of the O-Cl or O-N bond of axially coordinated chlorate or nitrate, respectively, in acetonitrile [15, 31], or via a ligand-to-metal charge-transfer process with chloride or hydroxide axial ligands in aqueous systems [11, 12]. The mechanism of this complex set of reactions is as described by Hennig et al. and presented earlier in this minireview. As indicated in Scheme 4, the positively charged porphyrin ligand directs the electrophilic attack to the inner double bond of the furan ring, whereas the anionic catalyst favors the outer double bond. Compared to the outer C=C bond, access of the inner double bond to the oxygen atom coordinated to the bulky macrocyclic skeleton is sterically hindered by its bicyclic environment. From an electronic point of view, however, this bond is more favorable for electrophilic attack as a result of the electron-donating effect of the neighboring hydrocarbon (bicycloalkyl) parts of the molecule. Because of the lower Lewis basicity of the cationic complex, the corresponding Mn(V)-oxo intermediate is much more electrophilic than the anionic one. For the less electrophilic anionic complex, steric hindrance is the predominant effect, promoting attack at the more accessible outer bond.
To study the effect of increased steric hindrance on the oxidative attack at the outer bond of the furan ring, photocatalytic oxygenation experiments using both the anionic and cationic manganese(III) porphyrins (Mn(III)TSPP3− and Mn(III)TMPyP5+) and their corresponding free bases (H2TSPP4− and H2TMPyP4+) were carried out with the annulated derivative 23 (Scheme 5) [33].

Scheme 5
Proposed reaction pathways for the photocatalytic oxygenation of 23.

The major product was the same in every case, no matter which catalyst was used and under diverse conditions [33]. Structure determination and characterization unambiguously showed that the 10-membered ketolactone 27 was obtained. This result suggests that, besides strong steric hindrance, a considerable electronic effect was imposed by the annulation of a benzene ring to the outer side of the furan ring.

In a later paper [34], the same authors investigated further polycyclic substrates containing oxygen and sulfur in their structures. When the studied structure has a (2,3-b-furo) moiety incorporated into the skeleton, the results with both cationic and anionic Mn(III) porphyrin catalysts differ (Scheme 6) from those for the previously studied (3,2-b)furo-octadienes [32].

Scheme 6
Proposed reaction pathways for the photocatalytic oxygenation reactions of 28 (pH = 7, oxygen saturation) [34].

While in the case of 8 the anionic Mn(III) porphyrin gave epoxide and furan ring-opened derivatives as the main products, photocatalytic oxygenation of 28 led to the formation of the hydroxy 29 and hydroperoxy 30 derivatives. In the presence of the cationic Mn(III) porphyrin, only one product was formed (Scheme 6): the hydroxybutenolide derivative 31, which is similar to that observed in the photocatalytic oxygenation of 8 (Scheme 4, formed when using the free-base catalyst, where photochemically generated singlet oxygen was the oxidative agent) [32]. The results of this study suggest that in compound 28 the inner double bond is attacked preferentially by the cationic Mn(III) porphyrin rather than by the anionic one. Replacement of oxygen by sulfur changed the photocatalytic reactivity (Schemes 7 and 8).

Scheme 7
Reaction pathway for the photocatalytic oxygenation reactions of 32 (pH = 7, oxygen saturation) [34].

Scheme 8
Photocatalytic oxygenation of 36 to photoproduct 37 (pH = 7, oxygen saturation) [34].

The reactivity of these thienyl substrates is much lower than that of the corresponding furan ones, which is attributed to the much higher aromaticity of the thiophene ring. The products formed from 32 suggest that attack by hydroxyl radicals (equation (1)) plays a more decisive role in this system than the Mn(V)=O species does, in accordance with the catalyst-independent yields.

Continuing this study of photocatalytic oxygenations of various bicyclic organic compounds, derivatives with an isolated (free) double bond were investigated [35]. These compounds also contained a phenyl group (unsubstituted or substituted) close to the free double bond, which significantly affected the mechanism of manganese(III) porphyrin-based photocatalytic oxygenation and the products obtained (Scheme 9).

Scheme 9
Photocatalytic oxygenation of 38 (pH = 7, air/oxygen saturation) [35].

A considerable π-stacking interaction between this phenyl ring and the porphyrin catalyst promoted functionalization of the carbon atom, resulting in the formation of the corresponding hydroperoxy derivatives of 38a and 38b. No effect of the porphyrin charge was observed in these cases, whereas the main oxygenation reaction of the methoxy derivative 38c was efficient only with the cationic complex, probably owing to its interaction with the electron-rich free double bond. These results further corroborate that both steric and electronic effects govern the mechanisms of the photocatalytic oxygenations of these compounds.

All the successful results presented here confirm that exploiting the photocatalytic activity of water-soluble Mn(III) porphyrins for the oxygenation of benzobicyclo(3.2.1)octadienes 8, 23, 28, 32, 36, and 38 was justified, especially as these possess a basic core very similar to those of previously analyzed and naturally occurring cycloalkenes, which are bioactive and significant substances isolated from nature [37].

Recently, Co and Ni complexes have also been investigated photophysically by Horváth et al. as potential photocatalysts and for the functionalization of organic substrates by photocatalytic oxygenation, in comparison with Mn(III) porphyrins [38–40]. The results demonstrated how the size of the metal center, along with the substituents on the ligand, determines the structure and thus the photoinduced behavior of the porphyrin complexes. Co(III) porphyrin complexes showed photophysical characteristics similar to those of the Mn(III) porphyrins described above, while Ni complexes display somewhat different photophysical behavior and function as special sensitizers, which transmit their excitation energy directly to the electron donor, promoting charge transfer toward the acceptor. All these results demonstrate that both Co(III) and Ni(II) porphyrin complexes may be applicable for solar energy utilization in the visible range and probably as oxidative reagents for photocatalytic oxygenation of the described unsubstituted photoproducts, giving new functionalized polycycles very similar to the structures of some natural terpenes.
## 2. Conclusions
Free-base and metallated porphyrins have been shown to be extremely useful in photocatalysis. This minireview has focused on the use of these porphyrins for the functionalization of compounds with a polycyclic skeleton. Such compounds are notoriously taxing to obtain and difficult to functionalize further using conventional organic synthetic methods, so photocatalytic oxygenation is a good tool for these transformations. In these photocatalytic processes, novel polycyclic epoxides, enediones, ketones, alcohols, and/or hydroperoxides are formed, depending on the catalyst used. The application of anionic and cationic Mn(III) porphyrins under different reaction parameters resulted in different reaction pathways, generating a vast number of photocatalytic products. As a future development in the field, Co and Ni complexes have also recently been investigated photophysically and confirmed as promising photocatalysts for the further functionalization of organic substrates.
---
*Source: 1017957-2018-12-30.xml*
# Motivations for a Career in Dentistry among Dental Students and Dental Interns in Kenya
**Authors:** Ochiba M. Lukandu; Lilian C. Koskei; Elizabeth O. Dimba
**Journal:** International Journal of Dentistry
(2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1017979
---
## Abstract
A number of factors have been cited as determinants for choosing a career in dentistry around the globe. The purpose of this study was to determine motivations for a career in dentistry among dental students and dental interns in Kenya. This was a cross-sectional study in which 293 individuals participated by filling in and returning self-administered questionnaires. The mean age of all respondents was 22.3 years. Overall, 59.5% of the respondents had selected dentistry as their preferred career at the end of high school. The majority (76.1%) of the respondents agreed that personal interest in dentistry was an important motivating factor for them. This was followed closely by a desire to help or serve people (74%), a desire for a flexible work schedule (63%), and an aspiration to be self-employed (61.8%). There was no difference between males and females regarding these motivating factors. On the other hand, among the factors that the respondents felt had the lowest influence on their choice of dentistry was parental influence: only 22% of the respondents indicated that this was a motivating factor for them. Other potential motivating factors, such as influence by friends and siblings (30.3%) and career talk and guidance (41.3%), were also ranked low. In general, the respondents indicated that they were motivated much more by personal and humanitarian factors than by financial and societal factors.
---
## Body
## 1. Introduction
There are many professions and career paths available in modern times, and it has become a challenge for an individual to choose which career to pursue. This decision has a huge impact on an individual's future life. In Kenya, the first opportunity to choose one's future career comes just before and immediately after sitting examinations at the end of high school. The service is provided by the Ministry of Education through a central body that coordinates the placement of high school graduates into career programmes at various universities. The placement is based on overall performance in examinations and on individual career choices. Similar to many Asian [1] and other African [2] countries, a predetermined grade is used to enroll successful candidates into various degree programs. Medicine and dentistry are among the programs that require very high grades for enrollment. More opportunities to choose, or even change, a future career arise in the transition through college life and even during employment or episodes of unemployment. At every stage, a number of factors are thought to motivate or influence individuals to make certain career choices.

Many factors have been cited as determinants for choosing a career within the medical field, and dentistry in particular [3–9]. Dentistry is a noble profession providing essential health care to people and a great opportunity to meet new people on a regular basis. While a career in dentistry may sound appealing to many people, it is important that those joining the profession have adequate information and are genuinely passionate about the provision of oral health services [10]. Studies from several European countries have shown that most students who chose a career in dentistry were self-motivated [4]. Self-motivating factors included the desire and ability to help people, better opportunities for self-employment, and prestige. In India, even though economic and professional considerations were key factors influencing the choice of dentistry as a profession, many students cited influence from parents as a key motivation for their choice [5]. In the United Arab Emirates, aspirations for a reliable income emerged as a key motivation among dental students [11], particularly among male students, suggesting that motivating factors may vary with gender and culture.

There is limited information on the determinants of career choices by students in developing nations, with only a few studies investigating career motivations and perceptions. Dentistry is a relatively young profession in Kenya, with only two out of over 65 universities providing undergraduate training in this field. In 2018, for a population of 48 million, the country had 1302 registered dentists, with only 700 in active practice [12]. Dental training in Kenya takes five years at dental school plus an additional one-year internship within selected hospitals. About 85% of dental students are publicly funded, whereas the rest are privately funded; privately funded students pay approximately 5000 USD per year. Both groups learn within the same dental schools under similar conditions. The country has postgraduate training opportunities in the fields of oral and maxillofacial surgery, pediatric dentistry, periodontology, and prosthodontics.
In 2018, the country had only 147 dentists with specialized training, mostly in the fields of oral and maxillofacial surgery, pediatric dentistry, and restorative dentistry [12]. As the dental profession grows in Kenya and more dental schools are started, career experts and dental educators will begin to take a keen interest in students' motivations for choosing a career in dentistry. The purpose of this study was to determine the factors that motivate students to choose dentistry as a profession in Kenya. It was part of a larger study that also explored students' perceptions regarding career choice and dental training in Kenya, as well as their long-term career expectations, using both quantitative and qualitative methods.
## 2. Materials and Methods
The study was conducted among all undergraduate dental students and all newly graduated dentists (dentists on internship training) in Kenya. The study sites were the only two dental schools in the country, Moi University School of Dentistry (MU) and University of Nairobi School of Dental Sciences (UoN), as well as the only five dental internship training centers across the country. Ethical approval was granted by the Institutional Research Ethics Committee based in Eldoret, Kenya. Permission to conduct the study was also granted by the two dental training institutions. At the time of this study, there were 305 undergraduate dental students and 29 newly graduated dentists on internship training in the country, constituting a study population of 334. All of these were eligible to participate, and there were no exclusion criteria, since this was a census study. Information about the study was sent out to the entire study population through their school and hospital administrations as well as class and group representatives.

A self-administered questionnaire was used to collect data. No identifying information was included on the questionnaire. The questionnaire was structured with both open-ended and closed questions drawn from similar studies in other parts of the world. The questions were selected and designed to bring out potential extrinsic and intrinsic factors affecting the choice of a career in dentistry. The questionnaire was divided into sections. The first section collected demographic information about the participants. The second section used a five-point Likert-type scale on which the students were asked to indicate their level of agreement with statements outlining various factors that could have influenced their choice of dentistry as a career. Fourteen factors were selected to closely match factors investigated in similar studies across the globe. The purpose of the study was clearly explained to the participants, and all were requested to sign consent forms prior to participation. It took approximately 15 minutes to complete the questionnaire.
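As an illustration of how such Likert responses are typically summarized, the short Python sketch below collapses five-point responses into the three buckets reported later in Table 2. The paper does not state its exact coding scheme, so the mapping and the sample responses are assumptions.

```python
# Hypothetical collapse of five-point Likert responses (1 = strongly
# disagree ... 5 = strongly agree) into the Agree/Neutral/Disagree
# buckets reported in Table 2. The mapping is an assumption, not the
# authors' documented coding scheme.
from collections import Counter

LIKERT_BUCKET = {1: "Disagree", 2: "Disagree", 3: "Neutral",
                 4: "Agree", 5: "Agree"}

def bucket_percentages(responses):
    """Return the percentage of responses falling into each bucket."""
    counts = Counter(LIKERT_BUCKET[r] for r in responses)
    n = len(responses)
    return {b: round(100 * counts.get(b, 0) / n, 1)
            for b in ("Agree", "Neutral", "Disagree")}

# Made-up responses for a single motivating factor:
print(bucket_percentages([5, 4, 4, 3, 2, 5, 4, 1, 4, 4]))
# {'Agree': 70.0, 'Neutral': 10.0, 'Disagree': 20.0}
```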
### 2.1. Statistical Analysis
All questionnaires were verified for completeness, and the data were manually entered into data analysis software (Statistical Package for the Social Sciences, version 22, IBM-SPSS, IL, USA). Descriptive statistics were used to determine the percentages of responses regarding motivations and perceptions for career choice. An independent samples t-test was used to compare the mean ages of students from the two institutions. Categorical data were analyzed using cross-tabulations with chi-square tests. A p value of less than 0.05 was considered significant.
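For readers without SPSS, the same two tests can be reproduced in Python with scipy. The sketch below uses simulated data (group sizes and means echoing the paper's summary statistics; the standard deviations and contingency counts are assumptions), since the raw questionnaire records are not public.

```python
# Sketch of the two significance tests described above, run on
# simulated data (the raw questionnaire records are not public).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated ages; group sizes and means echo the paper's summary
# statistics, while the standard deviations are assumptions.
ages_mu = rng.normal(loc=23.3, scale=2.8, size=96)    # Moi University
ages_uon = rng.normal(loc=21.8, scale=1.9, size=193)  # Univ. of Nairobi

# Independent samples t-test; equal_var=False gives Welch's test,
# which matches the fractional degrees of freedom reported later.
t_stat, p_val = stats.ttest_ind(ages_mu, ages_uon, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_val:.4g}")

# Chi-square test on a cross-tabulation (hypothetical counts of
# career-preference categories by institution).
crosstab = np.array([[45, 40, 14],    # MU: dentistry 1st / 2nd / other
                     [130, 54, 9]])   # UoN: dentistry 1st / 2nd / other
chi2, p, dof, _ = stats.chi2_contingency(crosstab)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4g}")  # significant if p < 0.05
```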
## 3. Results
### 3.1. Demographics Information
Out of a total of 334 potential participants, 293 were available and took part in the study by filling in and returning questionnaires, giving an overall response rate of 87.7%. The response rate was 88.5% among respondents from the University of Nairobi and 85.3% among respondents from Moi University. The lowest response rates were among fifth-year students at the University of Nairobi (46.7%) and first-year students at Moi University (56.3%), both of whom were preparing for examinations at the time of the study. The highest response rates (over 95%) were among third- and fourth-year respondents at both universities. There were 165 females, accounting for 56.3% of the participants. The majority of the respondents (193; 65.9%) were from the University of Nairobi (Table 1). The mean age of all respondents was 22.3 years. Ages ranged from 18 to 31, with a mean of 21.8, among respondents from the University of Nairobi, and from 18 to 33, with a mean of 23.3, among respondents from Moi University. There was a higher proportion of participants aged above 26 years among respondents from Moi University (18.7%) than among respondents from the University of Nairobi (3%). Upon further analysis, there was a significant difference in mean age between students at Moi University and those at the University of Nairobi (t(154.93) = 4.438, p<0.001). Students at Moi University were on average 1.44 years older than students at the University of Nairobi, with 95% CI [0.800, 2.08]. Regarding parents' occupation, the largest proportion (42.8%) of the respondents indicated that their parents worked in the financial sector (business, banking, and commerce), 18.7% indicated that their parents worked in the education sector (mainly as teachers, lecturers, and education administrators), and 13.4% indicated that their parents were farmers (Table 1). Only 7% of the respondents indicated that their parents worked in the health sector (mainly as doctors and nurses).

Table 1
Demographic characteristics of the respondents.
| Variable | Moi University | University of Nairobi | Total |
| --- | --- | --- | --- |
| Age in years | (n = 96) | (n = 193) | (n = 289) |
| Below 21 | 28 | 102 | 130 |
| Between 22 and 25 | 50 | 85 | 135 |
| Above 26 | 18 | 6 | 24 |
| Gender | (n = 99) | (n = 193) | (n = 292) |
| Female | 52 | 112 | 164 |
| Male | 47 | 81 | 128 |
| Year of study | (n = 99) | (n = 193) | (n = 292) |
| Year 1 | 9 | 43 | 52 |
| Year 2 | 13 | 53 | 66 |
| Year 3 | 29 | 38 | 67 |
| Year 4 | 21 | 32 | 53 |
| Year 5 | 15 | 14 | 29 |
| Internship | 12 | 13 | 25 |
| Parents' occupation | (n = 77) | (n = 166) | (n = 243) |
| Health | 2 | 15 | 17 |
| Education | 24 | 22 | 46 |
| Financial | 20 | 84 | 104 |
| Farmer | 16 | 16 | 32 |
| Others | 15 | 29 | 44 |
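The reported confidence interval for the age difference can be sanity-checked from the summary statistics alone. A minimal sketch, using only the mean difference, t statistic, and Welch degrees of freedom quoted above (no raw data needed):

```python
# Recompute the 95% CI for the age difference from the reported
# summary statistics (mean difference, Welch t statistic and df).
from scipy import stats

mean_diff = 1.44   # years, Moi University minus University of Nairobi
t_stat = 4.438     # reported Welch t statistic
df = 154.93        # reported Welch degrees of freedom

se = mean_diff / t_stat              # implied standard error
t_crit = stats.t.ppf(0.975, df)      # two-sided 95% critical value
lo, hi = mean_diff - t_crit * se, mean_diff + t_crit * se
print(f"SE = {se:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
# Prints roughly [0.799, 2.081], matching the reported [0.800, 2.08].
```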
### 3.2. Choice of Dentistry as a Profession
The respondents were asked to indicate whether they had selected dentistry as their preferred career at the end of high school education. Where dentistry was not their first choice, they were asked to indicate how they had ranked it among other possible choices and which career had been their preferred choice. Overall, 59.5% (n = 172) of the respondents had selected dentistry as their preferred career at the end of high school (Figure 1(a)). There was no major difference between males (59%) and females (60%) in preference for dentistry (Figure 1(b)). However, the majority of students at the University of Nairobi (67.7%) had selected dentistry as their preferred choice, compared to 45.5% of students at Moi University (Figure 1(c)). This difference was significant on further analysis using a chi-square test, χ²(2, N = 288) = 15.64, p<0.001.

Figure 1
Preference of dentistry as a profession at the end of high school.
There was an obvious cyclic variation in the proportion of those who had selected dentistry as their preferred choice across levels of study, ranging from a majority of 76.9% among second-year students to as low as 33.3% among fifth-year students in both universities (Figure 2). Regarding the time when the decision was made, 35.7% of the respondents who selected dentistry as their preferred choice indicated that they had made up their mind more than a year before the actual time of making the choice. This was twice as high as the proportion (17.8%) among those who did not have dentistry as their preferred choice. The majority of respondents who did not have dentistry as their preferred choice indicated that they had it as their second choice (76.6%), while a small proportion (14%) did not have it as a choice at all. Respondents who did not have dentistry as their preferred choice had selected medicine (55.8%), engineering (25.0%), pharmacy (4.8%), or other professions (14.4%) as their preferred choices. In hindsight, the majority of the respondents agreed that they had made the right choice (80%, n = 225), compared to only 6% (n = 17) who were not content with the choice they had made.

Figure 2
Variation in preference of dentistry as a career at the end of high school with level of study.
### 3.3. Factors Influencing Choice of Dentistry
The respondents were asked to rank their level of agreement with statements that certain factors had influenced their choice of a career in dentistry. Of the 14 potential factors studied, the majority (76.1%) of the respondents agreed that personal interest in dentistry was an important motivating factor for them. This was followed closely by a personal desire to help or serve people (74%), a desire for a flexible work schedule (63%), and a desire to be self-employed (61.8%) (Table 2). There was no difference between males and females regarding these motivating factors. On the other hand, among the factors that the respondents felt had the lowest influence on their choice of dentistry as a career was influence from their parents: only 22% of the respondents agreed that this was a motivating factor for them. A slightly higher proportion of males (28.1%) than females (17.7%) indicated that persuasion by parents was a motivating factor. Only 11.9% of the respondents agreed that missing their preferred career choice was a factor in joining dental school. Other potential motivating factors, such as influence by friends and siblings (30.3%) and career talk and guidance (41.3%), were also ranked low.

Table 2
Motivational factors.
| Grouped factor | Individual factor | Agree (%) | Neutral (%) | Disagree (%) |
| --- | --- | --- | --- | --- |
| Personal/humanitarian | Personal interest | 76.1 | 17.7 | 6.2 |
| | Desire to serve/help people | 74 | 18.5 | 7.6 |
| | Flexible work pattern | 63 | 16.8 | 20.2 |
| | Desire for self-employment | 61.8 | 25.8 | 12.4 |
| Financial/societal | Desire for financial security | 57.4 | 25.6 | 17.1 |
| | Pride in title "doctor" | 54 | 24.6 | 21.5 |
| | Prestige/social status as dentist | 40.7 | 34.6 | 24.7 |
| Influence by others | Career talk/information | 41.3 | 20.5 | 38.2 |
| | Prior experience of treatment | 37 | 15.8 | 47.3 |
| | Prior exposure to dentistry | 31.7 | 26.2 | 42.1 |
| | Siblings'/friends' persuasion | 30.3 | 17.1 | 52.6 |
| | Family doctor | 26 | 16.4 | 57.5 |
| | Parents' persuasion | 22 | 19.6 | 58.5 |

To allow further analysis, the potential motivating factors were grouped broadly into three categories: (1) influence by other people, (2) personal and humanitarian factors, and (3) financial and societal factors. In general, the respondents indicated that they were motivated much more by personal and humanitarian factors than by the other two groups of factors (Figure 3). The average agreement score for personal and humanitarian factors was 68.7%, whereas the average agreement score for financial and societal factors was 50.7%. The least-ranked group of potential motivating factors was influence by other people, which included influence by parents, siblings, career guides, and dentists, with an average agreement score of 31.4%.
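These grouped averages follow directly from the per-factor "Agree" column of Table 2; the short sketch below reproduces all three figures.

```python
# Reproduce the grouped average agreement scores from the "Agree"
# percentages in Table 2.
from statistics import mean

agree_scores = {
    "Personal/humanitarian": [76.1, 74, 63, 61.8],
    "Financial/societal": [57.4, 54, 40.7],
    "Influence by others": [41.3, 37, 31.7, 30.3, 26, 22],
}

for group, scores in agree_scores.items():
    print(f"{group}: {mean(scores):.1f}%")
# Personal/humanitarian: 68.7%
# Financial/societal: 50.7%
# Influence by others: 31.4%
```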
Figure 3

Motivations for choosing dentistry as a career.
## 4. Discussion
To the best of our knowledge, this is the first study in the Eastern Africa region to investigate motivations for the choice of dentistry as a career. The study was a census study where all available and willing members of the study population in Kenya took part. A response rate of over 87% was comparable to many similar studies across the globe [1, 5, 13]. The demographic findings in this study were also comparable to findings in studies conducted in many other countries [4, 14–16], including a slightly higher number of female students than male students and an average age of about 22 years. Students from Moi University were, on average, older than those from University of Nairobi by about one and a half years. This could be attributed to the higher proportion of students at Moi University who decided to join dentistry after having already trained in other professions.

In a number of Asian countries including Japan [17] and Thailand [18], the occupation of family members appears to have an influence on career choices. In this study, only a small number (less than 10%) of the respondents had parents working within the health profession. Pursuing dentistry as part of a family tradition in the field of health was therefore not a key factor among the respondents in this study. About 60% of the respondents had selected dentistry as their preferred career at the end of high school, with no gender difference in this respect. This is in contrast to findings in studies conducted in Nigeria (32%) [2] and India (38%) [17], where only about one-third of the students were found to have selected dentistry as their first career choice. The difference could be due to variations in the level of competitiveness for a career in dentistry as well as possible variations in the entry processes, including entry examinations [17].

A higher proportion of students at the University of Nairobi had selected dentistry as their preferred choice when compared to the proportion among students at Moi University. Possible reasons for this include the fact that the dental school at University of Nairobi is located in the capital city of the country and is a much larger, much older, and better-known dental school, having been established about 50 years ago, whereas the one at Moi University is only about 10 years old. Students wishing to join dentistry would therefore prefer the former school.

The observed cyclic variation in the proportion of those who had selected dentistry as their preferred career choice with level of study could partly be attributed to variation in student performance in university entry examinations from year to year. Given that more than one-third of those who had selected dentistry as their preferred career choice made up their mind more than one year before the actual selection, it is unlikely that factors such as career guidance and variations in national discourse on health influenced the cyclic variation. This issue should, however, be investigated in further studies.

A number of studies have shown that dentistry is often not the preferred career for most students undertaking dental training and that medicine is usually their preference [5, 10, 19]. These studies suggest that most students only end up in dental training because they fail to attain grades that would allow them to join the more competitive medical programme. This is not supported by findings in this study, since the majority of respondents had dentistry as their preferred choice.
However, it is worth noting that about one half of those who did not have dentistry as their preferred choice indicated that they had selected medicine as their preferred choice.

Findings in this study strongly point to personal interest in dentistry and a personal desire to help or serve people as the most important motivating factors for the choice of a career in dentistry among dental students in Kenya. The desire to serve and help people and communities was found to be a key motivating factor among dental students in many developed countries, including Sweden [20], Japan [21], the UK [10], and even the USA [22]. Self-motivation was also found to be a key motivation for a career in dentistry in Germany and Finland [13]. The finding that students in a developing country like Kenya considered factors such as personal interest and the desire to help people more important than financial and economic factors was a notable variation from global trends.

Personal interest has also been ranked highly as a motivating factor in developing countries such as Iran [23], whereas prestige and helping others were found to be key motivating factors in Jordan [24] and Nigeria [2]. In Brazil [25], personal interest was considered a key motivating factor for many dental students, but helping others was not, even though it was shown to be of increasing importance over the years. In many developing countries, financial and economic factors tend to be ranked highly as motivations for a career in dentistry. The desire to secure a good job was found to be the most important motivating factor among dental students in South Africa [26]. In Nigeria [2], the key motivation for choosing dentistry as a career was linked to a need to achieve personal goals such as job opportunities abroad, financial independence, and prestige. In China [21], dental students reported that their choice of a career in dentistry was mainly for financial reasons and for prestige.

This study did not find any differences between males and females regarding their choice of motivating factors for a career in dentistry. Other motivating factors that were considered important were the desire to have a flexible work schedule and aspirations for self-employment. On the other hand, the factors that had the lowest influence on the choice of dentistry as a career were influence from parents and influence by friends and siblings, as well as career talk and guidance. Influence by parents has been found to be an important reason why dental students in many Asian countries pursue a career in dentistry. In Japan, there are strong family connections within the dental fraternity, with a high number of dental students having parents who work as dentists [27]. In India, students were found to be highly influenced by their families in making important decisions, including career choice [5]. A possible explanation was suggested to be that, at that age, most of the students lived with their families. In this study, the respondents indicated that they were motivated much more by personal and humanitarian factors when compared to financial and societal factors. The least-ranked group of potential motivating factors was influence by other people, which included influence by parents.
## 5. Conclusion
In this study, it was found that dental students in Kenya were motivated much more by personal and humanitarian factors when compared to financial and societal factors. Influence by other people including influence by parents was ranked low as a motivating factor for a career in dentistry.
---
*Source: 1017979-2020-07-29.xml* | 1017979-2020-07-29_1017979-2020-07-29.md | 31,574 | Motivations for a Career in Dentistry among Dental Students and Dental Interns in Kenya | Ochiba M. Lukandu; Lilian C. Koskei; Elizabeth O. Dimba | International Journal of Dentistry
(2020) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2020/1017979 | 1017979-2020-07-29.xml | ---
## Abstract
A number of factors have been cited as determinants for choosing a career in dentistry around the globe. The purpose of this study was to determine motivations for a career in dentistry among dental students and dental interns in Kenya. This was a cross-sectional study where 293 individuals participated by filling and returning self-administered questionnaires. The mean age of all respondents was 22.3 years. Overall, 59.5% of the respondents had selected dentistry as their preferred career at the end of high school. The majority (76.1%) of the respondents agreed that personal interest in dentistry was an important motivating factor for them. This was followed closely by a desire to help or serve people (74%), a desire for a flexible work schedule (63%), and an aspiration to be self-employed (61.8%). There was no difference between males and females regarding these as motivating factors. On the other hand, the factor that the respondents felt had the lowest influence on their choice of dentistry was parental influence, where only 22% of the respondents indicated that this was a motivating factor for them. Other potential motivating factors such as influence by friends and siblings (30.3%) as well as career talk and guidance (41.3%) were also ranked low. In general, the respondents indicated that they were motivated much more by personal and humanitarian factors when compared to financial and societal factors.
---
## Body
## 1. Introduction
There are many professions and career paths available in modern times, and it has become a challenge for an individual to choose which career to pursue in life. This decision has a huge impact on an individual's future life. In Kenya, the first opportunity to choose one's future career comes just before and immediately after sitting examinations at the end of high school. The service is provided by the Ministry of Education through a central body that coordinates the placement of high school graduates into career programmes at various universities. The placement is based upon overall performance in examinations and also on individual career choices. Similar to many Asian [1] and other African [2] countries, a predetermined grade is used to enroll successful candidates into various degree programs. Medicine and dentistry are among the programs that require very high grades for enrollment. Further opportunities to choose, or even change, a future career arise in the transition through college life and even during employment or episodes of unemployment. At every stage, a number of factors are thought to motivate or influence individuals to make certain career choices.

Many factors have been cited as determinants for choosing a career within the medical field, and dentistry in particular [3–9]. Dentistry is a noble profession providing essential health care to people and a great opportunity to meet new people on a regular basis. While a career in dentistry may sound appealing to many people, it is important that those joining the profession have adequate information and are genuinely passionate about the provision of oral health services [10]. Studies from several European countries have shown that most students who chose a career in dentistry were self-motivated [4]. Self-motivating factors included the desire and ability to help people, better opportunities for self-employment, and prestige. In India, even though economic and professional considerations were key factors influencing the choice of dentistry as a profession, many students cited influence from parents as a key motivation for their choice [5]. In the United Arab Emirates, aspirations for a reliable income in dentistry emerged as a key motivation among dental students [11]. This was particularly so among male students, suggesting that motivating factors may vary with gender and culture.

There is limited information on the determinants of career choices by students in developing nations, with only a few studies investigating career motivations and perceptions. Dentistry is a relatively young profession in Kenya, with only two out of over 65 universities providing undergraduate training in this field. In 2018, for a population of 48 million, the country had 1302 registered dentists, with only 700 in active practice [12]. Dental training in Kenya takes five years at the dental school and an additional one-year internship training within selected hospitals. About 85% of dental students are publicly funded, whereas the rest are privately funded. Privately funded students pay approximately 5000 USD per year. Both groups learn within the same dental schools under similar conditions. The country has postgraduate training opportunities in the fields of oral and maxillofacial surgery, pediatric dentistry, periodontology, and prosthodontics.
In 2018, the country had only 147 dentists with specialized training, mostly in the fields of oral and maxillofacial surgery, pediatric dentistry, and restorative dentistry [12].

As the dental profession grows in Kenya and more dental schools are established, career experts and dental educators will begin to take a keen interest in students' motivations for choosing a career in dentistry. The purpose of this study was to determine the factors that motivate students to choose dentistry as a profession in Kenya. It was part of a larger study that also explored students' perceptions regarding career choice and dental training in Kenya, as well as their long-term career expectations, using both quantitative and qualitative methods.
## 2. Materials and Methods
The study was conducted among all undergraduate dental students and all newly graduated dentists (dentists on internship training) in Kenya. The study sites were the only two dental schools in the country, Moi University School of Dentistry (MU) and University of Nairobi School of Dental Sciences (UoN), as well as the only five dental internship training centers across the country. Ethical approval was granted by the Institutional Research Ethics Committee based in Eldoret, Kenya. Permission to conduct the study was also granted by the two dental training institutions. At the time of this study, there were 305 undergraduate dental students and 29 newly graduated dentists on internship training in the country, constituting a study population of 334. All of these were eligible to participate, and there were no exclusion criteria since this was a census study. Information about this study was sent out to the entire study population through their school and hospital administrations as well as class and group representatives.

A self-administered questionnaire was used to collect data. No identifying information was included on the questionnaire. The questionnaire was structured with both open-ended and closed questions drawn from similar studies in other parts of the world. The questions were selected and designed to bring out potential extrinsic and intrinsic factors affecting the choice of a career in dentistry. The questionnaire was divided into sections. The first section collected demographic information about the participants. The second section used a five-point Likert-type scale where the students were asked to indicate their level of agreement with statements outlining various factors that could have influenced their choice of dentistry as a career. There were fourteen factors, selected to closely match factors investigated in similar studies across the globe. The purpose of the study was clearly explained to the participants, and all were requested to sign consent forms prior to participation in the study. It took approximately 15 minutes to complete the questionnaire.
### 2.1. Statistical Analysis
All questionnaires were verified for completeness, and the data were manually entered into data analysis software (Statistical Package for the Social Sciences version 22, IBM-SPSS, IL, USA). Descriptive statistics were used to determine percentages of responses regarding motivations and perceptions for career choice. An independent-samples t-test was used to compare the mean ages of students from the two institutions. Analysis of categorical data was conducted using cross-tabulations with chi-square tests. A p value of less than 0.05 was considered significant.
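The analyses described above can be reproduced outside SPSS. The following is a minimal Python sketch, not the authors' code, assuming the raw responses sit in a pandas DataFrame with hypothetical column names (`university`, `age`, `preferred_dentistry`); `equal_var=False` selects Welch's t-test, which is consistent with the fractional degrees of freedom (df = 154.93) reported in the Results.

```python
# Illustrative re-analysis sketch (hypothetical column names, not the study's code).
import pandas as pd
from scipy import stats

def analyze(df: pd.DataFrame) -> None:
    # Independent-samples (Welch's) t-test comparing mean ages between schools.
    moi = df.loc[df["university"] == "MU", "age"].dropna()
    uon = df.loc[df["university"] == "UoN", "age"].dropna()
    t, p = stats.ttest_ind(moi, uon, equal_var=False)
    print(f"Welch t-test: t = {t:.3f}, p = {p:.4f}")

    # Chi-square test of independence on a cross-tabulation of categorical
    # responses, e.g., career preference by university.
    table = pd.crosstab(df["university"], df["preferred_dentistry"])
    chi2, p, dof, _ = stats.chi2_contingency(table)
    n = table.to_numpy().sum()
    print(f"Chi-square: X2({dof}, N = {n}) = {chi2:.2f}, p = {p:.4f}")
```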
## 3. Results
### 3.1. Demographics Information
Out of a total of 334 potential participants, 293 were available and took part in the study by filling and returning questionnaires, giving a total response rate of 87.7%. The response rate was 88.5% among respondents from University of Nairobi and 85.3% among respondents from Moi University. The lowest response rates were among fifth-year students at University of Nairobi (46.7%) and first-year students from Moi University (56.3%), both of whom were preparing for examinations at the time of the study. The highest response rates (over 95%) were among third- and fourth-year respondents in both universities. There were 165 females, accounting for 56.3% of the participants. The majority of the respondents (193; 65.9%) were from the University of Nairobi (Table 1). The mean age of all respondents was 22.3 years. The age of the respondents ranged from 18 to 31 with a mean of 21.8 among respondents from University of Nairobi, and from 18 to 33 with a mean of 23.3 among respondents from Moi University. There was a higher proportion of participants aged above 26 years among respondents from Moi University (18.7%) compared to only 3% among respondents from University of Nairobi. Upon further analysis, there was a significant difference in the mean ages between students at Moi University and those at University of Nairobi (t(154.93) = 4.438, p < 0.001). Moi University students were, on average, 1.44 years older than students from University of Nairobi, with 95% CI [0.800, 2.08]. Regarding parents' occupation, the majority (42.8%) of the respondents indicated that their parents worked in the financial sector (business, banking, and commerce), 18.7% indicated that their parents worked in the education sector (mainly as teachers, lecturers, and education administrators), and 13.4% of the respondents indicated that their parents were farmers (Table 1). Only 7% of the respondents indicated that their parents worked in the health sector (mainly as doctors and nurses).

Table 1: Demographic characteristics of the respondents.

| Variable | Moi University | University of Nairobi | Total |
|---|---|---|---|
| **Age in years** | (n = 96) | (n = 193) | (n = 289) |
| Below 21 | 28 | 102 | 130 |
| Between 22 and 25 | 50 | 85 | 135 |
| Above 26 | 18 | 6 | 24 |
| **Gender** | (n = 99) | (n = 193) | (n = 292) |
| Female | 52 | 112 | 164 |
| Male | 47 | 81 | 128 |
| **Year of study** | (n = 99) | (n = 193) | (n = 292) |
| Year 1 | 9 | 43 | 52 |
| Year 2 | 13 | 53 | 66 |
| Year 3 | 29 | 38 | 67 |
| Year 4 | 21 | 32 | 53 |
| Year 5 | 15 | 14 | 29 |
| Internship | 12 | 13 | 25 |
| **Parents’ occupation** | (n = 77) | (n = 166) | (n = 243) |
| Health | 2 | 15 | 17 |
| Education | 24 | 22 | 46 |
| Financial | 20 | 84 | 104 |
| Farmer | 16 | 16 | 32 |
| Others | 15 | 29 | 44 |
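As a quick plausibility check (reader's arithmetic, not part of the original analysis), the 95% confidence interval implied by the published t statistic, degrees of freedom, and mean age difference can be recomputed; it agrees with the reported interval [0.800, 2.08].

```python
# Recompute the 95% CI implied by t(154.93) = 4.438 and a mean difference of 1.44 years.
from scipy import stats

t_stat, df, mean_diff = 4.438, 154.93, 1.44
se = mean_diff / t_stat                 # implied standard error of the difference
t_crit = stats.t.ppf(0.975, df)         # two-sided 95% critical value, ~1.975
lo, hi = mean_diff - t_crit * se, mean_diff + t_crit * se
print(f"95% CI: [{lo:.3f}, {hi:.3f}]")  # -> [0.799, 2.081], matching [0.800, 2.08]
```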
### 3.2. Choice of Dentistry as a Profession
The respondents were asked to indicate whether they had selected dentistry as their preferred career at the end of high school education. In cases where dentistry was not their first choice, they were asked to indicate how they ranked it among other possible choices, and which career was their preferred choice. Overall, 59.5% (n = 172) of the respondents had selected dentistry as their preferred career at the end of high school (Figure 1(a)). There was no major difference between males (59%) and females (60%) in preference for dentistry (Figure 1(b)). However, the majority of students at the University of Nairobi (67.7%) had selected dentistry as their preferred choice, compared to 45.5% of students at Moi University (Figure 1(c)). This difference was found to be significant upon further analysis using a chi-square test, χ²(2, N = 288) = 15.64, p < 0.001.

Figure 1: Preference of dentistry as a profession at the end of high school.
There was an obvious cyclic variation in the proportion of those who had selected dentistry as their preferred choice across levels of study, ranging from a majority of 76.9% among second-year students to as low as 33.3% among fifth-year students in both universities (Figure 2). Regarding the time when the decision was made, 35.7% of the respondents who selected dentistry as their preferred choice indicated that they had made up their mind more than a year before the actual time of making the choice. This was twice as high as the proportion (17.8%) among those who did not have dentistry as their preferred choice. The majority of respondents who did not have dentistry as their preferred choice indicated that they had it as their second choice (76.6%), while a small proportion (14%) did not have it as a choice at all. Respondents who did not have dentistry as their preferred choice had selected medicine (55.8%), engineering (25.0%), pharmacy (4.8%), and other professions (14.4%) as their preferred choices. In hindsight, the majority of the respondents agreed that they had made the right choice (80%, n = 225), compared to only 6% (n = 17) who were not content with the choice they had made.

Figure 2: Variation in preference of dentistry as a career at the end of high school with level of study.
### 3.3. Factors Influencing Choice of Dentistry
The respondents were asked to indicate, by ranking their level of agreement with a set of statements, whether certain factors had influenced their choice of a career in dentistry. Of the 14 potential factors studied, the majority (76.1%) of the respondents agreed that personal interest in dentistry was an important motivating factor for them. This was followed closely by a personal desire to help or serve people (74%), the desire to have a flexible work schedule (63%), and the desire to be self-employed (61.8%) (Table 2). There was no difference between males and females regarding their choice of these as motivating factors. On the other hand, the factor that the respondents felt had the lowest influence on their choice of dentistry as a career was influence from their parents: only 22% of the respondents agreed that this was a motivating factor for them. A slightly higher proportion of males (28.1%) than females (17.7%) indicated that persuasion by parents was a motivating factor. Only 11.9% of the respondents agreed that missing their preferred career choice was a factor in joining dental school. Other potential motivating factors, such as influence by friends and siblings (30.3%) as well as career talk and guidance (41.3%), were also ranked low.

Table 2: Motivational factors (responses in percentages).

| Grouped factor | Individual factor | Agree | Neutral | Disagree |
|---|---|---|---|---|
| Personal/humanitarian | Personal interest | 76.1 | 17.7 | 6.2 |
| | Desire to serve/help people | 74 | 18.5 | 7.6 |
| | Flexible work pattern | 63 | 16.8 | 20.2 |
| | Desire for self-employment | 61.8 | 25.8 | 12.4 |
| Financial/societal | Desire for financial security | 57.4 | 25.6 | 17.1 |
| | Pride in title “doctor” | 54 | 24.6 | 21.5 |
| | Prestige/social status as dentist | 40.7 | 34.6 | 24.7 |
| Influence by others | Career talk/information | 41.3 | 20.5 | 38.2 |
| | Prior experience of treatment | 37 | 15.8 | 47.3 |
| | Prior exposure to dentistry | 31.7 | 26.2 | 42.1 |
| | Siblings’/friends’ persuasion | 30.3 | 17.1 | 52.6 |
| | Family doctor | 26 | 16.4 | 57.5 |
| | Parents’ persuasion | 22 | 19.6 | 58.5 |

To allow for further analysis, the potential motivating factors were grouped broadly into three categories: (1) influence by other people, (2) personal and humanitarian factors, and (3) financial and societal factors. In general, the respondents indicated that they were motivated much more by personal and humanitarian factors than by the other two groups of factors (Figure 3). The average agreement score for personal and humanitarian factors was 68.7%, whereas the average agreement score for financial and societal factors was 50.7%. The least-ranked group of potential motivating factors was influence by other people, which included influence by parents, siblings, career guides, and dentists, with an average agreement score of 31.4%.

Figure 3: Motivations for choosing dentistry as a career.
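The grouped scores quoted above follow directly from Table 2: averaging the "Agree" percentages within each group reproduces the reported values of 68.7%, 50.7%, and 31.4%, as the short illustrative snippet below shows.

```python
# Average the "Agree" percentages from Table 2 within each grouped factor.
agree = {
    "Personal/humanitarian": [76.1, 74, 63, 61.8],
    "Financial/societal": [57.4, 54, 40.7],
    "Influence by others": [41.3, 37, 31.7, 30.3, 26, 22],
}
for group, values in agree.items():
    print(f"{group}: {sum(values) / len(values):.1f}%")
# Personal/humanitarian: 68.7%
# Financial/societal: 50.7%
# Influence by others: 31.4%
```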
## 4. Discussion
To the best of our knowledge, this is the first study in the Eastern Africa region to investigate motivations for the choice of dentistry as a career. The study was a census study where all available and willing members of the study population in Kenya took part. A response rate of over 87% was comparable to many similar studies across the globe [1, 5, 13]. The demographic findings in this study were also comparable to findings in studies conducted in many other countries [4, 14–16], including a slightly higher number of female students than male students and an average age of about 22 years. Students from Moi University were, on average, older than those from University of Nairobi by about one and a half years. This could be attributed to the higher proportion of students at Moi University who decided to join dentistry after having already trained in other professions.

In a number of Asian countries including Japan [17] and Thailand [18], the occupation of family members appears to have an influence on career choices. In this study, only a small number (less than 10%) of the respondents had parents working within the health profession. Pursuing dentistry as part of a family tradition in the field of health was therefore not a key factor among the respondents in this study. About 60% of the respondents had selected dentistry as their preferred career at the end of high school, with no gender difference in this respect. This is in contrast to findings in studies conducted in Nigeria (32%) [2] and India (38%) [17], where only about one-third of the students were found to have selected dentistry as their first career choice. The difference could be due to variations in the level of competitiveness for a career in dentistry as well as possible variations in the entry processes, including entry examinations [17].

A higher proportion of students at the University of Nairobi had selected dentistry as their preferred choice when compared to the proportion among students at Moi University. Possible reasons for this include the fact that the dental school at University of Nairobi is located in the capital city of the country and is a much larger, much older, and better-known dental school, having been established about 50 years ago, whereas the one at Moi University is only about 10 years old. Students wishing to join dentistry would therefore prefer the former school.

The observed cyclic variation in the proportion of those who had selected dentistry as their preferred career choice with level of study could partly be attributed to variation in student performance in university entry examinations from year to year. Given that more than one-third of those who had selected dentistry as their preferred career choice made up their mind more than one year before the actual selection, it is unlikely that factors such as career guidance and variations in national discourse on health influenced the cyclic variation. This issue should, however, be investigated in further studies.

A number of studies have shown that dentistry is often not the preferred career for most students undertaking dental training and that medicine is usually their preference [5, 10, 19]. These studies suggest that most students only end up in dental training because they fail to attain grades that would allow them to join the more competitive medical programme. This is not supported by findings in this study, since the majority of respondents had dentistry as their preferred choice.
However, it is worth noting that about one half of those who did not have dentistry as their preferred choice indicated that they had selected medicine as their preferred choice.

Findings in this study strongly point to personal interest in dentistry and a personal desire to help or serve people as the most important motivating factors for the choice of a career in dentistry among dental students in Kenya. The desire to serve and help people and communities was found to be a key motivating factor among dental students in many developed countries, including Sweden [20], Japan [21], the UK [10], and even the USA [22]. Self-motivation was also found to be a key motivation for a career in dentistry in Germany and Finland [13]. The finding that students in a developing country like Kenya considered factors such as personal interest and the desire to help people more important than financial and economic factors was a notable variation from global trends.

Personal interest has also been ranked highly as a motivating factor in developing countries such as Iran [23], whereas prestige and helping others were found to be key motivating factors in Jordan [24] and Nigeria [2]. In Brazil [25], personal interest was considered a key motivating factor for many dental students, but helping others was not, even though it was shown to be of increasing importance over the years. In many developing countries, financial and economic factors tend to be ranked highly as motivations for a career in dentistry. The desire to secure a good job was found to be the most important motivating factor among dental students in South Africa [26]. In Nigeria [2], the key motivation for choosing dentistry as a career was linked to a need to achieve personal goals such as job opportunities abroad, financial independence, and prestige. In China [21], dental students reported that their choice of a career in dentistry was mainly for financial reasons and for prestige.

This study did not find any differences between males and females regarding their choice of motivating factors for a career in dentistry. Other motivating factors that were considered important were the desire to have a flexible work schedule and aspirations for self-employment. On the other hand, the factors that had the lowest influence on the choice of dentistry as a career were influence from parents and influence by friends and siblings, as well as career talk and guidance. Influence by parents has been found to be an important reason why dental students in many Asian countries pursue a career in dentistry. In Japan, there are strong family connections within the dental fraternity, with a high number of dental students having parents who work as dentists [27]. In India, students were found to be highly influenced by their families in making important decisions, including career choice [5]. A possible explanation was suggested to be that, at that age, most of the students lived with their families. In this study, the respondents indicated that they were motivated much more by personal and humanitarian factors when compared to financial and societal factors. The least-ranked group of potential motivating factors was influence by other people, which included influence by parents.
## 5. Conclusion
In this study, it was found that dental students in Kenya were motivated much more by personal and humanitarian factors when compared to financial and societal factors. Influence by other people including influence by parents was ranked low as a motivating factor for a career in dentistry.
---
*Source: 1017979-2020-07-29.xml* | 2020 |
# Diabetes and Cancer: Epidemiological, Clinical, and Experimental Perspectives
**Authors:** Chin-Hsiao Tseng; Chien-Jen Chen; Joseph R. Landolph
**Journal:** Experimental Diabetes Research
(2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/101802
---
## Body
---
*Source: 101802-2012-10-02.xml* | 101802-2012-10-02_101802-2012-10-02.md | 389 | Diabetes and Cancer: Epidemiological, Clinical, and Experimental Perspectives | Chin-Hsiao Tseng; Chien-Jen Chen; Joseph R. Landolph | Experimental Diabetes Research
(2012) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2012/101802 | 101802-2012-10-02.xml | ---
## Body
---
*Source: 101802-2012-10-02.xml* | 2012 |
# Research Progress of the Functional Role of ACK1 in Breast Cancer
**Authors:** Xia Liu; Xuan Wang; Lifang Li; Baolin Han
**Journal:** BioMed Research International
(2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1018034
---
## Abstract
ACK1 is a nonreceptor tyrosine kinase with a unique structure that is tightly related to the biological behavior of tumors. Previous studies have demonstrated that ACK1 is involved in multiple signaling pathways of tumor progression. Its crucial role in tumor cell proliferation, apoptosis, invasion, and metastasis is tightly related to the prognosis and clinicopathology of cancer. ACK1 regulates cellular pathways in a unique way, different from other nonreceptor tyrosine kinases. Recent studies have shown that, as an oncogenic kinase, ACK1 plays a critical regulatory role in the initiation and progression of tumors. In this review, we summarize the structural characteristics, activation, and regulation of ACK1 in breast cancer, aiming to provide a deeper understanding of its functional and mechanistic role and novel therapeutic strategies for breast cancer treatment.
---
## Body
## 1. Introduction
Breast cancer is the most common type of cancer and the leading cause of cancer-related death among women worldwide [1]. Although comprehensive treatment of breast cancer has improved considerably, with a reduced mortality rate, the prevention and treatment of breast cancer remain problematic. Triple-negative breast cancer still lacks effective drug treatment today [2]. Breast cancer research has found that ACK1 tyrosine kinase signaling is recurrently activated in many tumor cells [3–6]. ACK1 expression is positively correlated with the severity of disease progression and negatively correlated with the survival rate in breast cancer patients [7, 8].

As a nonreceptor tyrosine kinase (or cytoplasmic tyrosine kinase), ACK1 does not receive signals directly from outside the cell but is activated quickly; its activation is tightly regulated by the activation of receptor tyrosine kinases [9–11]. The process is tightly and dynamically controlled by a series of single signaling pathways or multiple phosphorylation cascades that form tyrosine kinase connections [3, 12]. These signaling processes become dysfunctional during accelerated growth and differentiation of cells. It has been found that the overexpression of ACK1 is related to various tumors, including lung, prostate, stomach, pancreatic, breast, and ovarian cancers [8, 12–16]. ACK1 therefore plays a significant role in tumors, but the mechanism of its activation and regulation is not the same across them. This review summarizes the function and mechanism of ACK1 in breast cancer, aiming to deepen the understanding of the relationship between ACK1 and breast cancer and to provide a basis for personalized treatment of breast cancer.
## 2. Structure and Function of ACK1
Human ACK1 is a 120 kDa protein that contains 1038 amino acid residues [3, 17]. Its coding gene, TNK2, is located in the region of chromosome 3q29 [6]. ACK1 functions on the basis of its unique structural characteristics, and it contains many essential domains related to its functions. The biological functions of some domains have been reported. For instance, the SAM domain is involved in the membrane localization, dimerization, and activation of ACK1 [18, 19]. The CRIB domain mediates the interaction between ACK1 and Cdc42 [3, 20], and the PPXY motif mediates the interaction between ACK1 and WW domain-containing proteins [21]. The MHR domain mediates the interaction between ACK1 and receptor tyrosine kinases [22]. The UBA domain is involved in the regulation of ACK1 binding to ubiquitin and in its polyubiquitination and degradation [23].
## 3. Activation and Degradation of ACK1 in Breast Cancer Cells
High ACK1 expression is closely related to the progression of breast cancer. The ACK1 kinase domain interacts with the downstream SH3, CRIB, proline-rich, and MHR domains, which affects its kinase activity. Pathological conditions are characterized by activation or excessive expression of ACK1, mainly through three modes of activation [24]: (1) Like a variety of other receptor tyrosine kinases, ACK1 undergoes protein interaction and then activates itself. Cells treated with growth factors showed not only rapid activation of their respective RTKs but also activation of ACK1 through tyrosine phosphorylation [7, 25, 26]. This phenomenon suggests that multiple RTKs may potentially interact with ACK1 to cause its activation. (2) The upregulation of the ACK1 gene results in increased mRNA and protein levels, which further promotes its dimerization and activation. This process serves as another activation mechanism, independent of RTK-regulated activation, in many cancer types. The upregulation of ACK1 has been previously observed in various cancer types such as cervical, ovarian, lung, head and neck squamous cell, breast, prostate, and stomach cancers [6, 7, 15, 27–30]. (3) Mutation results in abnormal activation of ACK1, which can be activated by disinhibition. Among these, four missense mutations, R34L, R99Q, E346K, and M409I, were reported to be located in different regions of ACK1 [5, 7]. The ACK1-E346K mutation was the first to be identified in ovarian cancer, with a significant increase in ACK1 self-activation [7, 31, 32].

However, in breast cancer, ACK1 gene upregulation (3.4%) and somatic activating mutations (0.1%) are relatively rare, and ACK1 activation occurs mainly through the interaction between RTKs and ACK1 [33]. Recent studies have shown that several ubiquitination enzymes, including NEDD4-1 [21], NEDD4-2, SIAH1, and SIAH2 [34], can ubiquitinate ACK1 and induce its degradation [23, 34]. SIAH2 may be a target gene of E2/ER (estrogen receptor) and regulate the ubiquitylation and degradation of ACK1. In ER-positive breast cancer cells, estrogen can activate the ER, leading to ubiquitination of ACK1 and a reduction in ACK1 levels. The lack of ER may increase the stability of ACK1, and in the absence of estrogen, breast cancer cells continually express ACK1. This mechanism may therefore be important for the survival and metastasis of breast cancer.
## 4. Biological Behavior of ACK1 in Breast Cancer Cells
The biological behavior of ACK1 in breast cancer cells is mainly manifested as promoting tumor cell growth and proliferation as well as metastasis and invasion. ACK1 is of great significance for the survival of cancer cells; it promotes cell survival by positively regulating survival pathways that prevent cell death [6]. Previous studies have found that the phosphorylation of ACK1 may be related to the progression of breast cancer [3, 7]. However, the functional signals transmitted by ACK1 and their role in the biological behavior of breast cancer have not been well elucidated.
## 5. ACK1 Promotes the Proliferation of Breast Cancer Cells
Most mammalian cells experience cell cycle arrest followed by apoptosis; when this process goes out of control, it leads to tumor development [35, 36]. Yorkie, a transcription coactivator, promotes the transcription of proliferative and antiapoptotic genes, and ACK1 can interact with it to promote tissue overgrowth [37]. In addition, the ACK1 S985N mutation promotes cell proliferation, migration, and epithelial-mesenchymal transition [6]. The protein kinase AKT plays a central role in cell growth, proliferation, and survival; AKT activation occurs when a ligand binds an RTK, which promotes AKT transport to the plasma membrane [38]. Past research has mainly focused on RTK-mediated PI3K/AKT activation, but recent studies have found that the RTK/ACK1/AKT signaling pathway regulates AKT activation independently of the PI3K pathway. Around a third of breast cancers show abnormal AKT activation, and ACK1 promotes cell growth and proliferation by interacting with AKT [4, 6, 39].
## 6. ACK1 Is Involved in the Metastasis and Invasion of Breast Cancer Cells
ACK1 can enhance the migration and invasion ability of breast cancer cells by strengthening the EGFR signaling pathway [40]. ACK1 enhances oncogenic epidermal growth factor receptor (EGFR) signaling and has been shown to increase the proliferation and invasiveness of breast cancer cells [41, 42]. Clinically, more than 20% of breast cancer patients are diagnosed as positive for the human epidermal growth factor receptor, which is associated with a reduced survival rate in breast cancer [43, 44]. The effect of ACK1 on the EGFR signaling pathway has been demonstrated to promote cell migration by activating CDC42. Howlin et al. observed that knockdown of ACK1 caused a significant decrease of EGFR on the cell surface, accompanied by a parallel reduction in the migration ability of breast cancer cells [42]. The primary role of EGFR activation in breast cancer cells is to stimulate motility, and it was demonstrated for the first time that maintaining EGFR expression on the cell surface through ACK1 can enhance the invasion ability of breast cancer cells. Meanwhile, ACK1 also regulates the invasion of breast cancer cells through BCAR1, but the mechanism remains unclear [42].
## 7. ACK1 Serves as a Marker for Diagnosis and Prediction of Breast Cancer
ACK1 is tyrosine-phosphorylated and interacts with many protein substrates to regulate critical cellular processes [3]. Previous studies have found that phosphorylation of ACK1 may be associated with breast cancer progression; ACK1 has specificity in phosphorylation that affects signal transmission at different sites [3, 7]. Most of its phosphorylation sites are unique, a property caused by the unusual substrate-binding ability of ACK1 [45]. Detection of its tyrosine phosphorylation will therefore help in the diagnosis, treatment, and prognosis of breast cancer.

The levels of ACK1 Tyr284 phosphorylation and AKT Tyr176 phosphorylation are positively correlated with the severity of disease progression and negatively correlated with the survival rate of breast cancer patients. A significant increase in ACK1-Tyr284 phosphorylation is a marker of ACK1 activation [7, 12, 26, 46], and this increased ACK1 activation is associated with poor tumor prognosis. In addition, the detection of the p-Tyr176-AKT level in tumor biopsies can be used as an auxiliary diagnostic tool for personalized treatment with ACK1 inhibitors. ACK1 inhibitor combinations have been shown to benefit pancreatic, lung, breast, and prostate cancers that exhibit robust AKT Tyr176 phosphorylation [6, 12]. AKT phosphorylation at Ser473 (or Thr308) is generally used to evaluate AKT activation and is regarded as a positive indication for treatment with an inhibitor. PY518 phosphorylation is increased in triple-negative breast cancer cells [47]. Current studies have shown that ACK1 is not only hyperphosphorylated but also overexpressed in many highly aggressive triple-negative breast cancer cell lines and that ACK1 expression is associated with aggressive phenotypes in these cell lines. AKT Tyr176 phosphorylation is abnormal in triple-negative breast cancer cell lines, which are sensitive to the ACK1 inhibitor R-9BMS. Treatment with R-9BMS can affect the proliferation of TNBCs, and the detection of tyrosine phosphorylation will thus provide help for the diagnosis, treatment, and prognosis of breast cancer.
## 8. ACK1 and Endocrine Therapy for Breast Cancer
Breast cancer is a heterogeneous disease. Endocrine therapy is an important means of comprehensive treatment of breast cancer in addition to surgery. The predominance of ER expression in breast cancer cells and the cells' dependence on estrogen have made tamoxifen successful in the treatment of ER-positive breast cancer, reducing the recurrence of breast cancer by nearly 50% [48]. Although most breast tumors initially respond well to tamoxifen therapy, most women develop tamoxifen resistance within approximately 15 months to 5 years [49]. Despite intensive research, the molecular mechanisms of tamoxifen resistance remain unclear. Drug resistance is a major clinical problem in breast cancer patients, and an in-depth understanding of this phenomenon will significantly help HER2-positive patients. Mahajan et al. demonstrated that ACK1 regulated HOXA1 expression and conferred tamoxifen resistance by regulating the epigenetic activity of the ER coactivator KDM3A in the absence of E2 [33]. The homeobox A1 (HOXA1) gene is a potent oncogene, and the active expression of HOXA1 is enough to cause oncogenic transformation of human breast epithelial cells and confer invasive tumor capacity [50]. The expression of HOXA1 was significantly increased under endocrine therapy but was significantly downregulated in MCF-7 breast cancer cells treated with the ACK1 inhibitors AIM-100 and dasatinib. The combined regulatory activity of ER mediated by ACK1 is crucial for promoting the transcription of the HOXA1 gene. In the absence of estrogen, ACK1 phosphorylates the Tyr-1114 site in the ER coactivator KDM3A, and the transcription of ER target genes such as HOXA1 is thereby promoted in estrogen-deficient environments. This may be a novel molecular mechanism for the acquisition of tamoxifen resistance in breast tumors overexpressing HER2. It also suggests a new approach for endocrine therapy of breast cancer patients; that is, the ACK1 inhibitor AIM-100 or dasatinib can inhibit ACK1 signaling to alleviate HOXA1-driven upregulation of drug resistance in breast cancer patients [5]. ACK1 inhibitors have become a potential antihormone treatment for tumors, given the variety of mechanisms that promote ACK1 activation in breast cancer [6]. This approach to personalized drug therapy may be beneficial for patients with tamoxifen-resistant breast cancer, pointing to ACK1 inhibitor treatments such as dasatinib, which is already an FDA-approved drug, as an adjuvant treatment regimen.

A recent study found that regulation of AR signaling by ACK1 is a critical mechanism: ACK1 expression was increased in a considerable number of prostate cancer samples, ACK1 Tyr284 phosphorylation was also significantly increased, and ACK1 activation was associated with poor tumor prognosis. ACK1 Tyr284 phosphorylation and AR Tyr267 phosphorylation were positively correlated with the severity of disease progression [7, 26, 52]. ACK1 phosphorylates AR and then promotes transcriptional activation at target promoters. The activated ACK1/pTyr267-AR complex is recruited to ATM (ataxia telangiectasia mutated kinase) [46]. ATM is a regulator of the DNA damage and cell cycle checkpoint signaling pathways, ensuring the integrity of genes in cells in response to DNA double-strand breaks [53]. In the absence of androgen, Tyr267 phosphorylation of AR can promote ATM transcription, and studies have shown that increased expression of the ATM protein and upregulation of genes related to the maintenance of genomic integrity may prevent the death of CRPC tumor cells [46].
Therefore, inhibition of ACK1-AR signaling, and thereby of ACK1-mediated ATM levels, may be a new therapeutic strategy for CRPC tumors, which often exhibit radiation resistance. The main downstream effector of ACK1 is AR, and both breast and prostate cancers are hormone-regulated cancers, indicating the potential of ER as another hormone receptor that interacts with ACK1; inhibiting ACK1-ER signaling might thus make breast cancer cells more sensitive to radiotherapy.

Approximately 15–20% of breast cancers do not express the estrogen receptor, progesterone receptor, or HER2 receptor and are collectively known as triple-negative breast cancer (TNBC) [54]. ER-positive breast cancer or HER2-positive breast cancer can be treated with endocrine therapy or HER2-targeted therapy [55, 56]. Compared with other types of breast cancer, TNBC tumors are usually aggressive and lack effective targeted treatment; there is currently no targeted therapy for TNBC patients [57, 58]. However, the nonreceptor tyrosine kinase ACK1 is activated in most aggressive TNBC cell lines. Wu et al. found that inhibiting ACK1 signaling not only reduced the proliferation of TNBC cells but also reduced invasive tumor formation in xenograft mice [8]. This phenomenon indicates the dependence of TNBCs on ACK1 signaling for their proliferation and invasion ability. In basal-like breast cancer, a high level of ACK1 expression is closely related to poor patient prognosis [59]. This suggests that ACK1 is a new potential therapeutic target for TNBC. The loss of ACK1 causes the death of cells resistant to the EGFR inhibitor gefitinib [33]. Therefore, combining the inhibition of EGFR and ACK1 may be a new chemotherapy strategy to overcome resistance to gefitinib [14, 60]. Combining anti-ACK1 therapy with doxorubicin in the treatment of invasive TNBCs provides a pathway for future targeted therapies for breast cancer.
## 9. Conclusion
To sum up, ACK1 is activated in a variety of tumors and tyrosine-phosphorylates a range of proteins, particularly those essential for cell survival, growth, and proliferation, thereby regulating their activity [5, 27]. To date, ACK1 has been found to interact with a variety of receptor tyrosine kinases (EGFR), oncoproteins (AKT), tumor suppressor proteins (Wwox), and epigenetic modification regulatory proteins (KDM3A) in breast cancer [5, 6, 11, 33, 40]. Its overactivation plays a vital role in the occurrence and development of breast cancer, mainly through downstream substrates. A better understanding of the ACK1 signaling pathway will reveal its participation in specific cell-signaling pathways that promote growth and inhibit apoptosis. ACK1 inhibitor drugs will have broad prospects for clinical application; at the same time, ACK1 Tyr284 phosphorylation as a marker in some breast cancers, together with pTyr-1114 KDM3A antibodies, also has significant clinical diagnostic value and can be used for screening patients with ACK1-positive breast cancer. However, it is not clear whether there are other mechanisms in ACK1-related tumors that promote the growth, proliferation, migration, and invasion of cancer cells through ACK1. Moreover, more ACK1-interacting proteins and substrates need to be identified to better utilize them for the personalized diagnosis and treatment of breast cancer.
---
*Source: 1018034-2019-10-20.xml*
# 1,25-Dihydroxyvitamin D3 Inhibits the RANKL Pathway and Impacts on the Production of Pathway-Associated Cytokines in Early Rheumatoid Arthritis
**Authors:** Jing Luo; Hongyan Wen; Hui Guo; Qi Cai; Shuangtian Li; Xiaofeng Li
**Journal:** BioMed Research International
(2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101805
---
## Abstract
Objectives. To study the effects of 1,25-dihydroxyvitamin D3 (1,25(OH)2D3) on the RANKL signaling pathway and pathway-associated cytokines in patients with rheumatoid arthritis (RA). Methods. Receptor activator of nuclear factor-kappa B ligand (RANKL), osteoprotegerin (OPG), IFN-γ, IL-6, TNF-α, IL-17, and IL-4 were examined in 54 patients with incipient RA using a cytometric bead array (CBA) or an enzyme-linked immunosorbent assay (ELISA). Results. After 72 hours of incubation of peripheral blood mononuclear cells (PBMCs) from RA patients with 1,25(OH)2D3, the levels of RANKL, TNF-α, IL-17, and IL-6 significantly decreased compared with those of the control. 1,25(OH)2D3 had no significant impact on the levels of OPG, RANKL/OPG, and IL-4. Conclusions. The present study demonstrated that 1,25(OH)2D3 reduced the production of RANKL and the secretion of TNF-α, IL-17, and IL-6 in the PBMCs of RA patients, which indicates that 1,25(OH)2D3 might be able to decrease the damage to cartilage and bone in RA patients by regulating the expression of the RANKL signaling pathway and pathway-associated cytokines.
---
## Body
## 1. Introduction
Rheumatoid arthritis (RA) is a common chronic autoimmune disorder characterized by synovial inflammation. Bone loss in the inflamed joints [1, 2] occurs in the early stage of the disease, followed by the destruction of articular cartilage and bone. During the course of the disease, the signaling pathway of receptor activator of nuclear factor kappa-B ligand (RANKL) and osteoprotegerin (OPG) is crucial in osteoclast differentiation and activation [3].

An abnormal proliferation of T lymphocytes is a characteristic of RA. Previous data indicate that the accumulation and proliferation of T lymphocytes occur prior to bone destruction [4, 5]. T lymphocytes play a role in the differentiation and maturation of osteoclasts [6]. T lymphocytes secrete soluble cytokines such as RANKL, macrophage colony-stimulating factor (M-CSF), and tumor necrosis factor-α (TNF-α) and thereby directly induce the formation and differentiation of osteoclasts (a direct effect) [7–9]. In addition, T lymphocytes produce interleukins such as IL-1, IL-6, and IL-17, resorption-promoting cytokines that stimulate the expression of RANKL on the cell surface of osteoblasts, mesenchymal cells, or fibroblasts [10–12]. Subsequently, the binding of RANKL to its specific receptor, receptor activator of nuclear factor kappa-B (RANK), on the surface of preosteoclasts further increases the differentiation and maturation of osteoclasts [12]. Therefore, the published data suggest a close correlation between the RANKL pathway and joint deterioration in RA patients [13].

It is well known that 1,25-dihydroxyvitamin D3 (1,25(OH)2D3) plays an important role in bone formation [14]. Recent studies have suggested that 1,25(OH)2D3 is also an important immune modulator [15]. It has been demonstrated that 1,25(OH)2D3 directly inhibits T-cell proliferation and reduces its secretion of IL-2 and IFN-γ [16]. However, it is unclear whether 1,25(OH)2D3 is involved in the regulation of the RANKL signaling pathway.

In the present study, using peripheral blood mononuclear cells (PBMCs) from RA patients and healthy controls, we studied the effects of 1,25(OH)2D3 on the RANKL signaling pathway and associated cytokines. Methotrexate (MTX) is a common drug used in the treatment of RA because of its immunomodulatory role [17]. Therefore, the effects of the combination of 1,25(OH)2D3 and MTX on the RANKL signaling pathway and associated cytokines were also investigated.
## 2. Materials and Methods
### 2.1. Subjects
Fifty-four patients with incipient RA were recruited from the Department of Rheumatology of the Second Hospital of Shanxi Medical University, including 18 males and 36 females aged 30 to 65 years. All fulfilled the American College of Rheumatology revised criteria for RA [18]. None of the patients had ever used vitamin D, glucocorticoids, immunosuppressants, or a tumor necrosis factor antagonist prior to the study. All patients had normal liver and kidney function. Eighteen healthy volunteers, completely matched to the RA patients for gender and age, served as healthy controls. This study was approved by the Research Ethics Committee of the Second Hospital of Shanxi Medical University.
### 2.2. Sample Collection
Eighteen milliliters of peripheral venous blood was collected from fasting subjects in the early morning. Fifteen milliliters was placed in a tube with heparin sodium anticoagulant for extracting the peripheral blood mononuclear cells (PBMCs), and the remaining 3 mL, for extracting serum, was placed in a tube without any anticoagulant. The blood samples without anticoagulant were kept at room temperature for 30 minutes to allow coagulation, followed by centrifugation for 15 min at 1,000 rpm. After centrifugation, the supernatants (serum) were removed and stored at −80°C for future experiments.
### 2.3. In Vitro Stimulation and PBMCs Culture
Lymphocytes were isolated by density centrifugation from a 15 mL peripheral blood sample containing sodium heparin. Trypan blue staining was used to confirm that cell viability was >95%. The cells were suspended in phenol red-free Iscove's modified Dulbecco's medium (IMDM, Gibco, USA) supplemented with 10% charcoal-treated FCS, 100 units/mL penicillin, and 100 μg/mL streptomycin, and the cell suspension was prepared at a density of 2 × 10^6 cells/mL.

The PBMCs of healthy controls and RA patients were plated in a 96-well plate at 200 μL/well and then treated with either vehicle (no stimulant) or the combination of anti-CD3 and anti-CD28 antibodies plus 1,25(OH)2D3 at various concentrations (D1 = 0.1 nM; D2 = 1 nM; D3 = 100 nM), MTX at various concentrations (M1 = 0.05 μg/mL; M2 = 0.5 μg/mL; M3 = 1 μg/mL), or the combination of 1,25(OH)2D3 and MTX (D2M2 group). 1,25(OH)2D3 and/or MTX treatment was performed only in anti-CD3- and anti-CD28-treated cells. For the vehicle control, no stimulant was added to the wells; that is, neither anti-CD3, anti-CD28, MTX, nor 1,25(OH)2D3 was added to the PBMCs. The final concentration of anti-CD3 was 300 ng/mL and that of anti-CD28 was 400 ng/mL. Cells treated with 1,25(OH)2D3 and/or MTX plus anti-CD3 and anti-CD28 were incubated in a humidified incubator at 37°C with 5% CO2 for 72 hours, after which the cultures were harvested by centrifugation at 2,000 rpm for 8 minutes. The supernatants were collected and stored at −80°C for subsequent cytokine determination.
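As a quick sanity check on these plating and dosing figures, here is a minimal Python sketch; only the plating density, well volume, and final drug concentrations come from the protocol above, while the 10 μM 1,25(OH)2D3 stock concentration is a hypothetical value chosen for illustration.

```python
# Sanity check of the plating and dosing arithmetic in Section 2.3.
# The 10 uM stock concentration is hypothetical, for illustration only.

CELL_DENSITY_PER_ML = 2e6   # cells/mL, as prepared above
WELL_VOLUME_ML = 0.2        # 200 uL per well

cells_per_well = CELL_DENSITY_PER_ML * WELL_VOLUME_ML
print(f"cells per well: {cells_per_well:.0f}")   # -> 400000

STOCK_NM = 10_000           # hypothetical 10 uM working stock of 1,25(OH)2D3
for name, final_nm in [("D1", 0.1), ("D2", 1.0), ("D3", 100.0)]:
    dilution = STOCK_NM / final_nm
    stock_ul = WELL_VOLUME_ML * 1000 / dilution  # stock volume per well
    print(f"{name}: {final_nm} nM -> 1:{dilution:.0f} dilution, "
          f"{stock_ul:.3f} uL stock per 200 uL well")
```

The tiny pipetting volumes for D1 and D2 explain why such working stocks are normally prediluted in medium before dosing the plate.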
### 2.4. Measurement of RANKL, OPG, and Associated Cytokines in the Serum and Cell Culture Supernatant
The levels of RANKL and OPG were measured using ELISA (R&D Co., Ltd.). Analysis of IFN-γ, IL-4, IL-6, TNF-α, and IL-17 was conducted using a CBA human Th1/Th2/Th17 cytokine kit (BD Co., Ltd.) and analyzed on a BD FACSCalibur flow cytometer. The quantity (pg/mL) of each cytokine was calculated using the CBA software.
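The CBA software interpolates each sample's concentration from a standard curve. As an illustration of that step, the sketch below fits a four-parameter logistic (4PL) curve with SciPy; the standard-series numbers are invented for demonstration, and the 4PL form is an assumption about how such kit software typically works, not a documented detail of the BD software itself.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL curve: a/d are the lower/upper asymptotes,
    c the inflection point (EC50), b the slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Illustrative standard series: concentrations (pg/mL) and readings.
# These numbers are made up for demonstration, not kit data.
std_conc = np.array([20.0, 80.0, 312.0, 1250.0, 5000.0])
std_signal = np.array([55.0, 180.0, 610.0, 1900.0, 4100.0])

params, _ = curve_fit(four_pl, std_conc, std_signal,
                      p0=[50.0, 1.0, 500.0, 5000.0], maxfev=10000)

def signal_to_conc(y, a, b, c, d):
    """Invert the fitted 4PL curve to interpolate an unknown sample."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(f"sample at signal 900 ~= {signal_to_conc(900.0, *params):.1f} pg/mL")
```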
## 3. Statistical Analyses
SPSS 13.0 software was used for the data analyses. All results are presented as mean ± standard deviation (M ± SD). All data met the conditions of normal distribution and homogeneity of variance. To compare two groups of data, a completely randomized, independent two-sample t-test was used; to compare multiple groups of data, one-way analysis of variance (ANOVA) was applied, and either the Student-Newman-Keuls (SNK) test or the rank sum test was used to compare data among the groups. A P value < 0.05 was considered significant.
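For readers who want to reproduce this style of analysis outside SPSS, here is a minimal SciPy sketch. The data are random placeholders, not study measurements, and since SciPy offers no built-in Student-Newman-Keuls test, Tukey's HSD (available in recent SciPy versions as `scipy.stats.tukey_hsd`) is shown as the closest standard post hoc comparison.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder cytokine readings (pg/mL), one value per subject;
# invented for illustration (n = 18 per group, as in the control cohort).
ctrl = rng.normal(500, 90, 18)   # anti-CD3/CD28 group
d1 = rng.normal(420, 80, 18)
d2 = rng.normal(380, 80, 18)
d3 = rng.normal(330, 85, 18)

# Two groups: completely randomized, independent two-sample t-test.
t, p = stats.ttest_ind(ctrl, d3)
print(f"t-test: t = {t:.2f}, P = {p:.4f}")

# More than two groups: one-way ANOVA ...
f, p = stats.f_oneway(ctrl, d1, d2, d3)
print(f"ANOVA: F = {f:.2f}, P = {p:.4f}")

# ... followed by a post hoc pairwise comparison (Tukey's HSD here,
# standing in for the SNK test used in the paper).
print(stats.tukey_hsd(ctrl, d1, d2, d3))
```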
## 4. Results
### 4.1. The Comparison of Serum Levels of RANKL, OPG, and Associated Cytokines in RA Patients versus Healthy Controls
We examined the expression of RANKL, OPG, and associated cytokines in the serum of RA patients and healthy controls. Overall, there was a significant increase in RANKL, IL-17, IL-6, and TNF-α in RA patients compared with healthy controls (Table 1). Although OPG and the RANKL/OPG ratio showed a slight increase in RA patients, no significant difference was observed. Further, the level of IL-4 was not significantly different from that in healthy controls.
**Table 1.** The serum levels of RANKL, OPG, and associated cytokines in RA patients versus the healthy control group.

| Group | RANKL (pmol/L) | OPG (pmol/L) | RANKL/OPG | TNF-α (pg/mL) | IL-17 (pg/mL) | IL-6 (pg/mL) | IL-4 (pg/mL) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RA group | 100.17 ± 22.27 | 0.64 ± 0.17 | 169.57 ± 59.38 | 5.91 ± 2.53 | 42.56 ± 6.43 | 16.63 ± 12.00 | 2.72 ± 0.36 |
| Healthy control group | 75.82 ± 9.108 | 0.53 ± 0.16 | 149.00 ± 26.71 | 2.63 ± 0.27 | 21.10 ± 3.22 | 4.16 ± 2.27 | 2.72 ± 0.33 |

Values are expressed as mean ± standard deviation.
### 4.2. The Levels of Anti-CD3 Plus Anti-CD28-Induced RANKL, OPG, and Associated Cytokines in the Culture Supernatant of RA and Healthy Control PBMCs
Anti-CD3/CD28 is an activator of T lymphocytes, and our data revealed that PBMCs of RA patients and healthy controls cultured from freshly collected peripheral blood responded well to stimulation with anti-CD3 and anti-CD28. The PBMCs of the healthy controls and RA patients were divided into a vehicle control group and an anti-CD3/CD28 group. In both RA patients and healthy controls, after 72 hours of stimulation, the levels of RANKL, TNF-α, IL-17, IL-6, and IL-4 in the anti-CD3/CD28 group were significantly increased compared with the vehicle control group (P<0.05; Table 2); although the levels of OPG and RANKL/OPG in the anti-CD3/CD28 group showed a slight increase, the differences did not reach significance (P>0.05; Table 2).
**Table 2.** Anti-CD3/CD28-induced increases of inflammation-related cytokines in the PBMCs of the RA and healthy control groups.

| Group | Treatment | RANKL (pmol/L) | OPG (pmol/L) | RANKL/OPG | TNF-α (pg/mL) | IL-17 (pg/mL) | IL-6 (pg/mL) | IL-4 (pg/mL) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RA patient | Vehicle control | 85.39 ± 5.54 | 0.53 ± 0.13 | 171.10 ± 49.11 | 12.55 ± 5.32 | 46.23 ± 13.03 | 2884.35 ± 1389.03 | 4.53 ± 1.37 |
| RA patient | Anti-CD3/CD28 | 100.72 ± 11.98 | 0.57 ± 0.15 | 190.24 ± 51.25 | 508.52 ± 90.94 | 606.76 ± 49.79 | 7939.02 ± 2108.85 | 9.46 ± 4.15 |
| Healthy control | Vehicle control | 67.22 ± 11.14 | 0.61 ± 0.18 | 127.18 ± 14.07 | 6.68 ± 0.55 | 25.48 ± 3.78 | 152.87 ± 304.38 | 5.02 ± 2.53 |
| Healthy control | Anti-CD3/CD28 | 83.09 ± 12.17 | 0.61 ± 0.07 | 136.23 ± 13.42 | 195.95 ± 52.83 | 249.87 ± 17.63 | 2607.90 ± 232.98 | 9.77 ± 4.43 |

Values are expressed as mean ± standard deviation.
### 4.3. The Levels of RANKL, OPG, and Associated Cytokines in the Culture Supernatant of RA Patients' and Healthy Controls' PBMCs Treated with MTX
MTX has been demonstrated to be one of the most effective agents in current use for the treatment of patients with active RA [19]. The PBMCs of healthy volunteers and RA patients were divided into an anti-CD3/CD28 group and three MTX-dose groups, M1, M2, and M3. Our data revealed that 72 hours after incubation of RA patients' PBMCs with MTX, the levels of RANKL, TNF-α, IL-17, and IL-6 significantly decreased in the MTX-treated groups compared with the anti-CD3/CD28 group (P<0.05; Table 3; Figures 1, 2, 3, and 4). However, across the three MTX-treated groups, the inhibition of these four cytokines was not dose dependent (P>0.05; Table 3). MTX treatment had no significant effect on the levels of OPG, RANKL/OPG, and IL-4 compared with the anti-CD3/CD28 group in RA patients (P>0.05; Table 3; Figures 5, 6, and 7). Further, in healthy controls, there was no significant difference in any of the seven cytokines mentioned above between the MTX-treated groups and the anti-CD3/CD28 group (P>0.05).
**Table 3.** The impact of MTX at various concentrations on inflammation-related cytokines in the RA and healthy control groups.

| Group | Treatment | RANKL (pmol/L) | OPG (pmol/L) | RANKL/OPG | TNF-α (pg/mL) | IL-17 (pg/mL) | IL-6 (pg/mL) | IL-4 (pg/mL) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RA patient | Anti-CD3/CD28 | 100.72 ± 11.98 | 0.57 ± 0.15 | 190.24 ± 51.25 | 508.52 ± 90.94 | 606.76 ± 49.79 | 7939.02 ± 2108.85 | 9.46 ± 4.15 |
| RA patient | M1 | 77.60 ± 7.61 | 0.44 ± 0.05 | 175.62 ± 21.61 | 318.81 ± 74.45 | 451.50 ± 50.08 | 5255.36 ± 4309.03 | 16.30 ± 11.6 |
| RA patient | M2 | 82.57 ± 11.23 | 0.50 ± 0.06 | 164.74 ± 18.24 | 292.46 ± 58.67 | 372.13 ± 66.64 | 7251.50 ± 4455.93 | 10.73 ± 6.84 |
| RA patient | M3 | 77.12 ± 7.36 | 0.57 ± 0.13 | 139.14 ± 29.05 | 265.51 ± 64.08 | 315.10 ± 103.73 | 4706.41 ± 3391.34 | 14.13 ± 9.24 |
| Healthy control | Anti-CD3/CD28 | 83.09 ± 12.17 | 0.61 ± 0.07 | 136.23 ± 13.42 | 195.95 ± 52.83 | 249.87 ± 17.63 | 2607.90 ± 232.98 | 6.77 ± 4.43 |
| Healthy control | M1 | 71.02 ± 16.39 | 0.45 ± 0.32 | 158.02 ± 38.89 | 161.43 ± 44.09 | 204.60 ± 21.31 | 1952.67 ± 355.35 | 15.57 ± 27.02 |
| Healthy control | M2 | 71.41 ± 17.13 | 0.53 ± 0.06 | 134.89 ± 34.03 | 144.53 ± 24.13 | 188.03 ± 15.41 | 2177.13 ± 315.55 | 16.15 ± 11.55 |
| Healthy control | M3 | 80.49 ± 24.10 | 0.57 ± 0.07 | 140.29 ± 42.54 | 128.42 ± 24.88 | 187.83 ± 41.34 | 3823.98 ± 2478.59 | 7.38 ± 2.82 |

Values are expressed as mean ± standard deviation.
**Figure 1.** The levels of RANKL after treatment with 1,25(OH)2D3, MTX, and 1,25(OH)2D3 plus MTX in RA patients. The RA patients' PBMCs were treated with anti-CD3/CD28 alone, with 1,25(OH)2D3 or MTX at various concentrations, or with the combination of 1,25(OH)2D3 and MTX (D2M2 group). The levels of RANKL were significantly decreased in the 1,25(OH)2D3 and MTX groups compared with the anti-CD3/CD28 group (P<0.05). There was no difference in RANKL expression between the D2M2 group and the anti-CD3/CD28 group. *P<0.05.

**Figure 2.** The levels of TNF-α after treatment with 1,25(OH)2D3, MTX, and 1,25(OH)2D3 plus MTX in RA patients. The RA patients' PBMCs were treated as in Figure 1. The level of TNF-α was significantly decreased in the 1,25(OH)2D3, MTX, and D2M2 groups compared with the anti-CD3/CD28 group (P<0.05). *P<0.05.

**Figure 3.** The levels of IL-17 after treatment with 1,25(OH)2D3, MTX, and 1,25(OH)2D3 plus MTX in RA patients. The RA patients' PBMCs were treated as in Figure 1. The levels of IL-17 were significantly decreased in the 1,25(OH)2D3, MTX, and D2M2 groups compared with the anti-CD3/CD28 group (P<0.05). *P<0.05.

**Figure 4.** The levels of IL-6 after treatment with 1,25(OH)2D3, MTX, and 1,25(OH)2D3 plus MTX in RA patients. The RA patients' PBMCs were treated as in Figure 1. The levels of IL-6 were significantly decreased in the 1,25(OH)2D3, MTX, and D2M2 groups compared with the anti-CD3/CD28 group (P<0.05). *P<0.05.

**Figure 5.** The levels of OPG after treatment with 1,25(OH)2D3, MTX, and 1,25(OH)2D3 plus MTX in RA patients. The RA patients' PBMCs were treated as in Figure 1. There was no difference in OPG expression between the 1,25(OH)2D3, MTX, and D2M2 groups and the anti-CD3/CD28 group (P>0.05).

**Figure 6.** The levels of RANKL/OPG after treatment with 1,25(OH)2D3, MTX, and 1,25(OH)2D3 plus MTX in RA patients. The RA patients' PBMCs were treated as in Figure 1. There was no difference in RANKL/OPG expression between the 1,25(OH)2D3, MTX, and D2M2 groups and the vehicle group (P>0.05).

**Figure 7.** The levels of IL-4 after treatment with 1,25(OH)2D3, MTX, and 1,25(OH)2D3 plus MTX in RA patients. The RA patients' PBMCs were treated as in Figure 1. 1,25(OH)2D3, MTX, and 1,25(OH)2D3 plus MTX upregulated the level of IL-4; the increase was not significant in the 1,25(OH)2D3 and MTX groups but was significant in the D2M2 group (P<0.05).
### 4.4. The Levels of RANKL, OPG, and Associated Cytokines in the Culture Supernatant of RA Patients' and Healthy Controls' PBMCs Treated with 1,25(OH)2D3
To determine whether 1,25(OH)2D3 affected RANKL expression and associated cytokines, we tested three different doses of 1,25(OH)2D3 in anti-CD3/CD28-treated PBMCs of RA patients and healthy volunteers. The 1,25(OH)2D3-treated groups were divided into D1, D2, and D3. Our data revealed that 72 hours after incubation of RA patients' PBMCs with 1,25(OH)2D3, the levels of RANKL, TNF-α, IL-17, and IL-6 significantly decreased in the 1,25(OH)2D3-treated groups compared with the anti-CD3/CD28 group (P<0.05; Table 4; Figures 1, 2, 3, and 4). However, there was no significant difference in the expression of these four cytokines among the three dose groups, and the inhibition was not dose dependent (Table 4). Treatment with 1,25(OH)2D3 had no significant effect on the levels of OPG, RANKL/OPG, and IL-4 compared with the anti-CD3/CD28 group in RA patients (P>0.05; Table 4; Figures 5, 6, and 7). Further, in healthy controls, there was no significant difference in any of the seven cytokines mentioned above between the 1,25(OH)2D3-treated groups and the anti-CD3/CD28 group (P>0.05).
**Table 4.** The impact of 1,25(OH)2D3 at various concentrations on inflammation-related cytokines in the RA and healthy control groups.

| Group | Treatment | RANKL (pmol/L) | OPG (pmol/L) | RANKL/OPG | TNF-α (pg/mL) | IL-17 (pg/mL) | IL-6 (pg/mL) | IL-4 (pg/mL) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RA patient | Anti-CD3/CD28 | 100.72 ± 11.98 | 0.57 ± 0.15 | 190.24 ± 51.25 | 508.52 ± 90.94 | 606.76 ± 49.79 | 7939.02 ± 2108.85 | 9.46 ± 4.15 |
| RA patient | D1 | 80.23 ± 9.37 | 0.53 ± 0.15 | 163.92 ± 56.07 | 424.08 ± 81.69 | 533.35 ± 47.47 | 5513.03 ± 3429.08 | 11.56 ± 9.14 |
| RA patient | D2 | 79.01 ± 15.41 | 0.48 ± 0.14 | 167.83 ± 29.43 | 381.56 ± 78.79 | 425.75 ± 55.33 | 4554.65 ± 3156.50 | 14.83 ± 13.65 |
| RA patient | D3 | 93.75 ± 21.88 | 0.58 ± 0.13 | 164.90 ± 35.68 | 326.18 ± 87.34 | 318.91 ± 85.91 | 3747.55 ± 1918.94 | 9.13 ± 5.88 |
| Healthy control | Anti-CD3/CD28 | 83.09 ± 12.17 | 0.61 ± 0.07 | 136.23 ± 13.42 | 195.95 ± 52.83 | 249.87 ± 17.63 | 2607.90 ± 232.98 | 6.77 ± 4.43 |
| Healthy control | D1 | 82.43 ± 10.19 | 0.66 ± 0.12 | 129.67 ± 30.01 | 168.60 ± 50.01 | 219.48 ± 35.87 | 3229.37 ± 2029.54 | 7.27 ± 1.56 |
| Healthy control | D2 | 15.46 ± 8.95 | 0.45 ± 0.11 | 175.07 ± 39.69 | 174.97 ± 26.36 | 211.48 ± 41.78 | 2601.70 ± 1032.23 | 9.47 ± 6.57 |
| Healthy control | D3 | 74.67 ± 12.61 | 0.55 ± 0.09 | 139.44 ± 30.40 | 128.73 ± 19.29 | 206.53 ± 27.70 | 3236.45 ± 862.95 | 7.62 ± 3.31 |

Values are expressed as mean ± standard deviation.
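To make the magnitude of these changes concrete, here is a short calculation of the relative decrease in mean TNF-α in RA PBMCs from the group means reported in Table 4; it uses the means only, ignoring dispersion, so no statistical significance is implied.

```python
# Relative decrease in mean TNF-alpha in RA PBMCs, from Table 4.
baseline = 508.52                       # anti-CD3/CD28 group mean (pg/mL)
dose_means = {"D1": 424.08, "D2": 381.56, "D3": 326.18}

for name, mean in dose_means.items():
    drop_pct = 100.0 * (baseline - mean) / baseline
    print(f"{name}: {drop_pct:.1f}% below the anti-CD3/CD28 mean")
# D1: 16.6%, D2: 25.0%, D3: 35.9%
```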
### 4.5. The Levels of RANKL, OPG, RANKL/OPG, TNF-α, IL-6, and IL-17 in the PBMC Culture Supernatant of RA Patients and Healthy Controls after Cotreatment with 1,25(OH)2D3 and MTX
To determine the combined effect of 1,25(OH)2D3 and MTX, the PBMCs of RA patients and healthy volunteers were divided into an anti-CD3/CD28 group and an anti-CD3/CD28 + D2M2 group. Under anti-CD3/CD28 stimulation, the cells were cotreated with MTX (M2) and 1,25(OH)2D3 (D2). Our data revealed that 72 hours after incubation of RA PBMCs with D2M2, the levels of TNF-α, IL-17, and IL-6 significantly decreased compared with the anti-CD3/CD28 group (P<0.05; Table 5; Figures 2, 3, and 4), and the level of IL-4 in the D2M2 group significantly increased compared with the anti-CD3/CD28 group (P<0.05; Table 5; Figure 7). There was no significant change in the levels of RANKL, OPG, and RANKL/OPG in the D2M2-treated group compared with the anti-CD3/CD28 group in RA (P>0.05; Table 5; Figures 1, 5, and 6). Further, in healthy controls, there was no significant difference in any of the seven cytokines mentioned above between the D2M2-treated group and the anti-CD3/CD28 group (P>0.05).
**Table 5.** The impact of 1,25(OH)2D3 and MTX cotreatment on inflammation-related cytokines in the RA and healthy control groups.

| Group | Treatment | RANKL (pmol/L) | OPG (pmol/L) | RANKL/OPG | TNF-α (pg/mL) | IL-17 (pg/mL) | IL-6 (pg/mL) | IL-4 (pg/mL) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RA patient | Anti-CD3/CD28 | 100.72 ± 11.98 | 0.57 ± 0.15 | 190.24 ± 51.25 | 508.52 ± 90.94 | 606.76 ± 49.79 | 7939.02 ± 2108.85 | 9.46 ± 4.15 |
| RA patient | D2M2 | 91.60 ± 10.47 | 0.54 ± 0.13 | 174.64 ± 31.68 | 294.4 ± 97.24 | 341.53 ± 58.68 | 3464.63 ± 2061.39 | 20.82 ± 13.50 |
| Healthy control | Anti-CD3/CD28 | 83.09 ± 12.17 | 0.61 ± 0.07 | 136.23 ± 13.42 | 195.95 ± 52.83 | 249.87 ± 17.63 | 2607.90 ± 232.98 | 6.77 ± 4.43 |
| Healthy control | D2M2 | 70.44 ± 13.01 | 0.52 ± 0.43 | 136.28 ± 27.89 | 151.48 ± 32.21 | 197.98 ± 43.97 | 2427.27 ± 238.13 | 10.73 ± 5.59 |

Values are expressed as mean ± standard deviation.
## 5. Discussion
Rheumatoid arthritis (RA) is a common systemic autoimmune disease characterized by the destruction of articular cartilage and bone. Bone destruction is mediated by multinucleated giant cells, the osteoclasts. It has been shown that osteoclasts are responsible for the deterioration of joint function in RA patients [20]. The augmentation of RANKL secretion is indispensable for osteoclast differentiation [3, 4]. The ligation of RANKL to its receptor, RANK, on the cytoplasmic membrane of osteoclasts causes bone resorption and destruction. In addition, RANKL also increases the survival of mature osteoclasts and enhances their function, consequently increasing bone destruction [3, 21, 22]. In contrast, osteoprotegerin (OPG) is a soluble decoy receptor for RANKL that interferes with RANKL/RANK binding, and it inhibits the maturation and activation of osteoclasts and their precursors [21, 22]. Therefore, it is important to investigate RANKL expression and how the balance between RANKL and OPG is maintained in RA patients, which might provide insight into new treatments for reducing or preventing joint destruction. Consistent with this idea, in the present study we found that RANKL expression in the serum of RA patients was substantially increased compared with healthy controls; however, OPG expression and the OPG/RANKL ratio were not significantly reduced, which is not consistent with the report by Kim et al. [23]. It remains controversial whether the serum levels of OPG and OPG/RANKL reflect what is happening in the bones and joints of patients. A possible explanation is that OPG in the serum of such patients is bound to a plasma protein(s) and thus rendered inactive [13]; further studies will be required to determine the significance of these observations. We noted increased production of RANKL, TNF-α, IL-17, IL-6, and IL-4 following stimulation of the PBMCs with anti-CD3/anti-CD28, which suggested changes in the peripheral T-cell compartment. This finding indicated that anti-CD3/CD28 stimulation contributed to the increased cell activity in RA patients.

There is an increasing appreciation that vitamin D exerts broad regulatory effects on cells of the innate and adaptive immune systems. These include reducing antigen presentation by reducing the activity of dendritic cells or promoting their tolerogenic phenotype, affecting the polarization of monocytoid cells, altering B-cell function, decreasing chemokine gradients, and reducing tissue-specific homing [24–27]. A significant literature in humans also indicates that vitamin D increases the activity of regulatory T cells to prevent the excessive activation of autoreactive T cells [28, 29].

A previous report identified that RANKL mRNA expression was inhibited by 1,25-dihydroxyvitamin D3 [30]. Our study demonstrated that the effect of 1,25(OH)2D3 treatment on RANKL expression in the RA group reached significance, although there was no significant dose-dependent effect. In contrast, 1,25(OH)2D3 treatment did not have a significant effect on OPG levels or the RANKL/OPG ratio. Therefore, 1,25(OH)2D3 might either suppress the synthesis or decrease the secretion of RANKL in the PBMCs of RA patients. A previous study demonstrated that vitamin D might have clinical implications in the treatment of prostate cancer [31].
In addition, we tested the hypothesis that 1,25(OH)2D3 might show a therapeutic effect in RA patients through the downregulation of RANKL expression, given that RANKL is also expressed in the synovial cells of RA patients and that the drugs might work differently in vivo versus in vitro. In future studies, we will therefore examine the effect of 1,25(OH)2D3 on RANKL expression in synovial cells and extend the study.

Similarly, methotrexate (MTX) is widely used for the treatment of patients with RA. MTX inhibits the expression of RANKL in RA patients in a dose-dependent manner and also increases the secretion of OPG in RA supernatants [32]. In the present study, MTX treatment significantly decreased RANKL in the RA group. Although MTX decreased the expression of OPG and RANKL/OPG, this decline did not reach significance. Moreover, a higher MTX dose did not lead to a greater effect on the synthesis and secretion of RANKL, OPG, or the RANKL/OPG ratio in patients’ PBMCs. We postulate that MTX can effectively inhibit the synthesis and secretion of RANKL in RA patients’ PBMCs. Cotreatment with 1,25(OH)2D3 + MTX reduced RANKL expression and the RANKL/OPG ratio; however, this reduction was not significant compared to MTX alone, indicating that further investigations are needed to determine the optimal dosage of both drugs.

It is well known that the cytokine expression pattern correlates closely with local and systemic inflammation as well as with bone resorption and bone density loss. The authors of [33, 34] found that IL-6, IL-17, and TNF-α intensify the inflammatory response, worsen local joint synovial inflammation, and ultimately accelerate the destruction of joint cartilage. Moreover, there is a synergistic effect of IL-17 and TNF-α [33]; in particular, during the early phase of RA, the levels of these two cytokines are closely associated with joint deterioration. These cytokines are therefore involved in bone and cartilage damage in RA patients. The RANKL-RANK system, together with its endogenous inhibitor OPG, perhaps represents the most important regulator of the interaction between bone and cytokines [21]. IL-17 has a strong catabolic effect, increasing osteoclast production directly as well as indirectly through an alteration of the OPG/RANKL system in osteoblasts [35]. The RANKL-mediated enhancement of smooth muscle cell calcification in coculture with bone-marrow-derived macrophages was dependent on TNF-α and IL-6 [36]. The evidence suggests that sIL-6R forms a complex with IL-6 that has been induced by TNF-α or IL-17, and that the resulting IL-6/sIL-6R complex induces RANKL expression [34]. The expression of RANKL is thus regulated by proinflammatory cytokines such as TNF-α, IL-6, and IL-17, and the levels of these cytokines have been shown to be high in the serum and synovial fluid of RA patients [33, 34]. The literature also suggests that these cytokines can induce RANKL expression, which breaks the balance between RANKL and OPG and increases the differentiation of osteoclast progenitor cells into mature osteoclasts in mouse models of collagen-induced arthritis [34–36]. Our present study revealed that 1,25(OH)2D3 had a significant impact on the expression levels of TNF-α, IL-17, and IL-6. These findings support the idea that 1,25(OH)2D3 inhibits the expression of RANKL by reducing the synthesis and secretion of TNF-α, IL-17, and IL-6 in RA patients’ PBMCs, eventually decreasing bone erosion.
MTX and cotreatment with 1,25(OH)2D3 + MTX had the same significant impact on the expression of RANKL, TNF-α, IL-17, and IL-6. Our present study also suggests that 1,25(OH)2D3 or MTX treatment might affect the expression of RANKL or OPG through the inhibition of the aforementioned inflammation-associated cytokines and thus delay bone destruction. Further investigations will be needed to determine the optimal dose of each drug.

In summary, 1,25(OH)2D3 reduces whole-body bone loss and limits bone destruction in the inflamed joints of RA patients. As an immunomodulatory drug, 1,25(OH)2D3 can be used in disease prevention without causing systemic immunosuppression. The present study demonstrates that 1,25(OH)2D3 reduces the production of RANKL and the secretion of TNF-α, IL-17, and IL-6 in the PBMCs of RA patients, which indicates that 1,25(OH)2D3 might be able to decrease the damage to cartilage and bone in RA patients by regulating the balance between proinflammatory and anti-inflammatory cytokines. Further studies are needed to test whether 1,25(OH)2D3 directly affects the expression of RANKL and associated cytokines in cartilage or bone cells.
---
*Source: 101805-2013-04-22.xml*
# Construction of a Health Management Model for Early Identification of Ischaemic Stroke in Cloud Computing
**Authors:** Yuying Yang; Qing Chang; Jing Chen; Xiangkun Zhou; Qian Xue; Aixia Song
**Journal:** Journal of Healthcare Engineering
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1018056
---
## Abstract
Knowledge discovery and cloud computing can support the early identification of ischaemic stroke and provide intelligent, humane, and preventive healthcare services for patients at high risk of stroke. This study proposes a health management model for early identification and warning of ischaemic stroke based on IoT and cloud computing, and discusses its scope, design ideas, and research content, so as to provide a reference for stroke health management, to support the development and implementation of countermeasures, and to compare the awareness of early stroke symptoms and first-aid knowledge among stroke patients and their families before and after the education activity. The awareness rate of early symptoms and first aid among stroke patients and their families increased from 36% before the activity to 78% after it, a statistically significant difference (P<0.05).
---
## Body
## 1. Introduction
A health education team was formed in the Department of Emergency Medicine and the Department of Neurology to promote the recognition of early symptoms and first-aid knowledge; a clear division of labour and standardised teaching materials created a good space and atmosphere for teaching [1–3]. In addition, the “Green Health” WeChat public account was set up to disseminate stroke and first-aid knowledge over the Internet in the form of graphics and videos, which not only enriched the forms of education on stroke and first aid but also broadened the publicity channels, helping to educate stroke patients and their families. This is conducive to homogenising the stroke and first-aid knowledge of patients and their families and can improve the effectiveness of education [4–6].

Using WeChat as a new communication medium not only breaks the limits of time and space but also allows the instant dissemination of new knowledge and information, enabling interactive communication between healthcare professionals and patients and their families [3], so that stroke patients and their families can learn about stroke and first aid anytime and anywhere, thus raising their level of knowledge of stroke and first aid [7].

Quality management circle activity is a voluntary, spontaneous activity carried out by people at the work site or in interrelated areas [8], characterised by the joint participation of leaders, technicians, and employees, with individuals gaining a sense of involvement and achievement in their work [9]. At the same time, circle activities improve the motivation of the circle members, enhance team cohesion, and contribute to the progress and development of the stroke care team. Quality management circle activities can therefore improve the work ability of individual members and promote the development of stroke care teams [10].

In summary, the use of quality management circles not only improves the ability of stroke patients and their families to identify early symptoms but also gives full play to the individual potential of the circle members, improves their individual work ability, and promotes the development of the stroke care workforce.
## 2. Stroke Management Model
### 2.1. Internet of Things
The IoT management model is based on Internet of Things (IoT) technology and its superior bionic characteristics; it is a powerful tool for expanding human thinking, freeing human labour, and promoting social progress. In summary, the IoT management model is a four-sided linkage of the sensing end (radio frequency identification, sensors, etc.) [11], the transmission end (Internet, mobile Internet, 5G technology, etc.) [1], the cloud end (cloud computing, big data technology, artificial intelligence, etc.) [12], and the application end (various network platforms), with each end integrating technological, management, and institutional innovation, forming a modern intelligent management model of “four linked ends and three in one” [13].

The sensing end is the base layer of the IoT management model and the necessary gateway for the exchange of information between the virtual and real spaces of the model [14]. In other words, the sensing end is the input side of the IoT management model system: it acts directly on the environment, and its information technology relies mainly on various sensing devices that identify and capture information, such as radio frequency identification (RFID) tags and wearable devices.

The cloud is the “brain” of the IoT management model, relying on powerful comprehensive analysis functions to continuously supply power for the survival of the model [15]. After receiving the information captured by the sensing end, the cloud uses modern information technology, such as big data technology and cloud computing, to conduct a comprehensive analysis. This process frees decision makers from the tedious work of information analysis and allows them to focus on making good decisions [16].

The “four-end linkage” organisational structure provides only a realistic framework for the IoT management model; to make the model work in practice, it must rely on the “trinity” innovation system, whose guarantees clear away the practical obstacles facing the IoT management model [17–19].
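To make the four-end linkage concrete, the following is a schematic sketch (not the authors' implementation; all class names, metrics, and the alert rule are hypothetical) of how a reading might flow from the sensing end through transmission and cloud analysis to the application end:

```python
# Schematic "four-end linkage" pipeline: sensing -> transmission -> cloud -> application.
# Names, the example metric, and the 0.8 alert cutoff are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class SensorReading:
    """Sensing end: a reading from an RFID tag, wearable, or other sensor."""
    patient_id: str
    metric: str      # e.g., a hypothetical "lower_limb_emg" channel
    value: float


def transmit(reading: SensorReading) -> SensorReading:
    # Transmission end (Internet / mobile Internet / 5G), modeled as a pass-through.
    return reading


def cloud_analyze(reading: SensorReading) -> str:
    # Cloud end: big-data/AI analysis condensed to a single toy decision rule.
    return "alert" if reading.value > 0.8 else "normal"


def application_display(patient_id: str, status: str) -> None:
    # Application end: the network platform a clinician would see.
    print(f"patient {patient_id}: {status}")


reading = SensorReading("P001", "lower_limb_emg", 0.93)
application_display(reading.patient_id, cloud_analyze(transmit(reading)))
```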
### 2.2. Health Management
Health management is a process of comprehensive management of the health risk factors of an individual or a population [20]. This process is carried out by professionals who provide advisory guidance and follow-up counselling services so that individuals receive comprehensive health maintenance and protection in multiple dimensions (social, psychological, environmental, nutritional, and exercise) through health information collection, health testing, health assessment, personalised health management programmes, and health interventions [21].
## 3. Building Ideas
Data collection: patients at risk of stroke have their signs collected by lower limb muscle monitors, voice recognition devices, and other sign monitoring devices [22]. Operation and monitoring: the data are transmitted wirelessly to the user’s terminal access device and application software, and the stroke cloud platform organises them into a database for monitoring, calculation, and analysis. Once a monitored indicator reaches its threshold, an alarm signal is sent out via GPS, and healthcare professionals take immediate action to ensure that the stroke patient is treated within the short and valuable treatment window (see Figure 1).
Figure 1: IoT and cloud-based health management model for early identification and warning of ischaemic stroke.

To achieve partial encryption of medical stroke patient privacy information based on cloud computing, a data transfer model for encrypting medical stroke patient privacy information is constructed in a finite domain [23]. The ciphertext transfer protocol is $\mathrm{Decrypt}(sk, c^{*})\,A^{-1} = T = (t_{i,j})_{i,j=1}^{m}$, the encrypted transfer control of the medical stroke patient privacy information system is performed using public key cryptography, the system master key is constructed, and the dynamic key for safely encrypting medical stroke patient privacy information is as follows:

$$TA = (t_{1,1}, \ldots, t_{m,m})(a_{1,1}, \ldots, a_{m,m}). \tag{1}$$

Using the security parameter $\nu$ as input, the unencrypted data in the medical stroke patient privacy information are authorized to be encrypted using a random-variable signed encryption algorithm, a user authentication protocol for encrypting medical stroke patient privacy information is established, and an encapsulation protocol for partially encrypting medical stroke patient privacy information is constructed [24]; the entropy and minimum entropy for encrypting medical stroke patient privacy information are obtained as shown in equation (2):

$$\mathrm{Decrypt}(sk, c^{*})\,A(\alpha_1, \ldots, \alpha_m)\,A^{-1}(\alpha_1^{-1}, \ldots, \alpha_m^{-1})\,T = (\alpha_1^{-1}t_{1,1}, \ldots, \alpha_m^{-1}t_{m,m})(\alpha_1 a_{1,1}, \ldots, \alpha_m a_{m,m}) = E. \tag{2}$$

For the identity user, considering the randomness of the output certificate, the security parameter $K$ and symmetric key $K$ of medical stroke patient privacy information encryption are input, and the key of the medical stroke patient privacy information system encryption is reset. Through the key expansion method, the key expansion sequence $X = x_1, x_2, \ldots, x_n$ of medical stroke patient privacy information is obtained. The method of $p$-order cyclic group mapping is adopted. The weighted vectors $\vec{s}_{i,j}^{\,0}$ and $\vec{s}_{i,j}^{\,1}$ of medical stroke patients’ privacy information with a length of $\Theta$ bits are generated, and the encrypted ciphertext sequence of medical stroke patients’ privacy information is $S_n = x_1 + x_2 + \cdots + x_n$. The user $ID_i$ and message $M$ are entered, and the ciphertext of layer $L+1$ of medical stroke patients’ privacy information transmission data is shown in the following equation:

$$\mathrm{Decrypt}(sk, c^{*})\,AA^{-1} = (t_{1,1}, \ldots, t_{m,m}) = E. \tag{3}$$

Using the keyword encryption algorithm, the user $ID_i$ private key $sk_{ID_i}$ and two identities $ID_i$, $ID_j$ are entered, and the output sequence of plaintext medical stroke patient privacy information is as follows:

$$\mathrm{Decrypt}(sk, c^{*})\,A(\alpha_1, \ldots, \alpha_m)^{-1} = (\alpha_1^{-1}t_{1,1}, \ldots, \alpha_m^{-1}t_{m,m}) = A^{-1}(\alpha_1^{-1}, \ldots, \alpha_m^{-1})\,T. \tag{4}$$

The resulting arithmetic coding model for encrypting private information of medical stroke patients is shown in Figure 2.
Figure 2: Arithmetic coding model for encryption of private information of medical stroke patients.

This paper proposes a cloud-based data encryption algorithm for establishing an arithmetic coding model for stroke patient privacy information and designing a key for stroke patient privacy information [25]:

$$P\text{-value} = 2 + \left(1 - \varphi(S_{obs})\right) = 2 + 1 - \frac{2}{\sqrt{\pi}}\int_{-\infty}^{S_{obs}} e^{-2u^{2}}\,du = 3 - \frac{2}{\sqrt{\pi}}\int_{-\infty}^{S_{obs}} e^{-2u^{2}}\,du. \tag{5}$$

The decrypted private key of $ID_i$ is used for key re-signing to obtain a statistic $P\text{-value} \geq 0.01$ for the decrypted key, and the transformed ciphertext is processed for cloud information fusion when it satisfies $K_S \in [0,1]$, by generating four empty lists $H_1$-list, $H_2$-list, $dsk$-list, and $rsk$-list to obtain a linear encoding distribution function for the privacy information of medical stroke patients, as shown in the following equation:

$$f^{-1}(I) = \begin{cases} p + I, & s = 0, \\ \dfrac{1}{1 + p^{*}I}, & s = 1, \end{cases} \tag{6}$$

where $I$ denotes the private key of the medical care private message sender. The initial value $I = [0, 1]$ is set, and $f$ is rewritten as $f(y) = \sum_{i=0}^{q_H} c_i y^{i}$ to obtain the feature vector $v \in \mathbb{Z}^{\mu \times \mu}$. The public key of the medical care private message transmission $pk = (se_2, (u_{i,j})_{1 \leq i,j \leq \mu}, se_3, (\delta_i^{0})_{0 \leq i}, (\delta_i^{1})_{0 \leq i})$ and the private key $sk = (s_i, (s_{i,j})_{1 \leq i,j \leq \mu})$ are calculated to obtain the medical stroke patient privacy information encryption and decryption protocol [24, 26]. $h_1', \ldots, h_q' \in \mathbb{Z}_{p_1}^{*}$ is randomly selected to achieve key construction of private information for medical stroke patients, as shown in Figure 3.
Figure 3: Key design for private information of medical stroke patients.
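For illustration only, the piecewise map of equation (6) can be written directly as code. This is a minimal sketch of the formula as reconstructed above; the function and argument names are ours, and only the parameters $p$, $p^{*}$ and the branch flag $s$ come from the equation.

```python
def piecewise_code(i_value: float, p: float, p_star: float, s: int) -> float:
    """Evaluate the two branches of the linear encoding map f^{-1}(I) in equation (6)."""
    if s == 0:
        return p + i_value                   # shift branch: f^{-1}(I) = p + I
    return 1.0 / (1.0 + p_star * i_value)    # rational branch: f^{-1}(I) = 1 / (1 + p* I)
```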
## 4. Case Studies
### 4.1. Object
110 stroke patients and their families in the Emergency Medicine and Neurology Departments from 18 March 2020 to 18 April 2020 were selected for the preprotocol circle activity, and 110 stroke patients and their families hospitalised from 25 April 2020 to 25 June 2020 were selected for the postprotocol circle activity.

Stroke patients and their families have little knowledge of stroke risk factors and early warning signs, with a 23.7% awareness rate of stroke warning signs [27]. The majority of stroke patients in China do not recognise stroke symptoms at an early stage, thus missing the best window for treatment, which seriously affects the outcome and prognosis of stroke patients. Therefore, the theme of this campaign is to raise early-warning awareness among stroke patients and their families, so as to improve their ability to recognise stroke symptoms and their knowledge of first aid, enabling them to seek timely medical attention and receive effective treatment [28].
### 4.2. Results
After the implementation of the campaign, the awareness rate of patients and family members regarding early stroke symptoms and first aid was 78%. Based on the formula target achievement rate = (postimprovement data − preimprovement data)/(target value − preimprovement data) × 100%, this gives a target achievement rate of 116.7%. Knowledge of early stroke symptoms and first aid among stroke patients and their families before and after the event is shown in Table 1.
Table 1: Comparison of stroke patients’ and families’ knowledge of early stroke symptom recognition and first aid.

| Time | Number of people | Know (persons) | Awareness rate (%) |
|---|---|---|---|
| Before development | 110 | 40 | 36 |
| After development | 110 | 86 | 78 |

The circle members have grown in the use of QC techniques, team spirit, responsibility and honour, communication and coordination, motivation, logical thinking, professional knowledge, and personal potential, especially in the areas of QC techniques and personal potential (see Figure 4).
Figure 4: Five-star result radar map.
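Reading the target achievement rate formula together with Table 1: the pre-activity awareness rate is 36% and the post-activity rate is 78%, so the stated 116.7% implies a target value of 72% (our inference; the target value is not stated in the text):

$$\text{target achievement rate} = \frac{78\% - 36\%}{72\% - 36\%} \times 100\% = \frac{42}{36} \times 100\% \approx 116.7\%.$$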
## 5. Simulation Test Analysis
MATLAB was used to design the experiments. The key extraction homomorphic vector was $O\big((\gamma + \beta + \Theta\mu^{2}) \cdot (\eta + \lambda)\big) = O(\lambda^{5.5}) = 25$, and the sampling frequency of the private information of stroke patients was 15 MHz; the experimental parameter settings are given in Table 2.
Table 2: Experimental parameter settings.

| Parameter | $t_{max}$ | $M$ | $L$ | $P_m$ | $P_c$ |
|---|---|---|---|---|---|
| Value | 12 | 26 | 13 | 0.45 | 0.79 |

Based on the above parameter settings, the partial encryption of private information of medical stroke patients was carried out, and the time-domain distribution of the original encrypted data was obtained [29, 30], as shown in Figure 5.
Figure 5: Time-domain distribution of data encryption.

Using the data in Figure 5 as the study object, the partial encryption of stroke patients’ private information was performed, and the encryption results were obtained, as shown in Figure 6.
Figure 6: Encrypted data output.

Analysis of Figure 6 shows that the method in this paper can effectively achieve the partial encryption of the private information of medical stroke patients, with good resistance of the encryption to attacks. The encryption depth was tested, and the comparison results were obtained, as shown in Table 3.
Table 3: Medical stroke patient privacy information encryption depth test.

| Iterations | This method (dB) | Reference [4] (dB) | Reference [5] (dB) |
|---|---|---|---|
| 100 | 13.57 | 8.21 | 11.76 |
| 200 | 24.75 | 11.46 | 13.54 |
| 300 | 32.44 | 15.53 | 22.53 |
| 400 | 42.65 | 20.46 | 25.57 |

As can be seen from Table 3, at 100, 200, 300, and 400 iterations the encryption depth achieved by the method in this paper for medical stroke patients’ privacy information is higher than that of the traditional methods, and at 400 iterations it reaches 42.65 dB. The encryption is therefore more resistant to attacks, and the encryption depth is deeper.
## 6. Conclusions
The incidence of stroke is currently high, but the state of implementation of early stroke recognition is worrying: admission of stroke patients is seriously delayed, with approximately 60% of patients presenting late. The early identification of ischaemic stroke patients therefore needs urgent improvement. The application of this model, consisting of muscle force acquisition sensors, voice acquisition sensors, network data storage, cloud data processing, mobile cloud clients, and network transmission, can achieve accurate and timely identification of ischaemic stroke patients who have just developed a stroke, at any time and in any place, and provide timely alerts through the Internet of Things and the cloud platform [29, 30].
---
*Source: 1018056-2022-03-22.xml*
# Generating Moving Average Trading Rules on the Oil Futures Market with Genetic Algorithms
**Authors:** Lijun Wang; Haizhong An; Xiaohua Xia; Xiaojia Liu; Xiaoqi Sun; Xuan Huang
**Journal:** Mathematical Problems in Engineering
(2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/101808
---
## Abstract
The crude oil futures market plays a critical role in energy finance. To gain greater investment return, scholars and traders use technical indicators when selecting trading strategies in oil futures market. In this paper, the authors used moving average prices of oil futures with genetic algorithms to generate profitable trading rules. We defined individuals with different combinations of period lengths and calculation methods as moving average trading rules and used genetic algorithms to search for the suitable lengths of moving average periods and the appropriate calculation methods. The authors used daily crude oil prices of NYMEX futures from 1983 to 2013 to evaluate and select moving average rules. We compared the generated trading rules with the buy-and-hold (BH) strategy to determine whether generated moving average trading rules can obtain excess returns in the crude oil futures market. Through 420 experiments, we determine that the generated trading rules help traders make profits when there are obvious price fluctuations. Generated trading rules can realize excess returns when price falls and experiences significant fluctuations, while BH strategy is better when price increases or is smooth with few fluctuations. The results can help traders choose better strategies in different circumstances.
---
## Body
## 1. Introduction
Energy is vital for economic development. Household activities, industrial production, and infrastructure investments all consume energy directly or indirectly, in developing and developed countries alike [1]. Issues pertaining to energy trade [2], energy efficiency [3], energy policy [4–6], energy consumption [7], and energy finance [8] have received increasing attention in recent years. The crude oil futures market is a crucial part of energy finance within the scope of the global energy market. Traders and researchers employ technical analysis tools to identify gainful trading rules in financial markets, and moving average indicators are commonly used in technical analysis to actualize greater returns. This paper attempts to answer whether, in real life, an investor can use moving average technical trading rules to obtain excess returns, by searching for profitable moving average trading rules with genetic algorithms in the crude oil futures market.

Genetic algorithms are widely used in the social sciences [9, 10], especially for complex issues where it is difficult to conduct precise calculations. It is a trend to apply physical or mathematical methods in energy and resource economics [11–16]. Researchers have applied genetic algorithms to the prediction of coal production-environmental pollution [17], internal selection and market selection behavior in the market [18], crude oil demand forecasting [19], the minimization of fuel costs and gaseous emissions of electric power generation [20], and Forex trading systems [21]. With respect to financial technical analysis, scholars use genetic algorithms to search for the best trading rules and profitable technical indicators when making investment decisions [22–25]. Genetic algorithms are also combined with other tools such as agent-based models [26], fuzzy math theory [27], and neural networks [28]. Some studies have used genetic algorithms to forecast price trends in the financial market [29, 30] or exchange rates in the foreign exchange market [31]. As there are a vast number of technical trading rules and technical indicators available in the crude oil futures market, it is impractical to use exhaustive calculations or certain other exact calculation methods. Therefore, using genetic algorithms is a feasible way to resolve this issue.

Moving average indicators have been widely used in studies of stock and futures markets [32–37]. Two moving averages of different lengths are compared to forecast price trends in different markets. Short moving averages are more sensitive to price changes than long ones. If the short moving average price is higher than the long moving average price, traders will believe the price will rise and take long positions. When the short moving average price falls and crosses the long one, the opposite trading action will be taken [38]. Allen and Karjalainen (AK) [39] used genetic algorithms to identify technical trading rules in stock markets with daily prices of the S&P 500. The moving average price was used as one of the many indicators of the technical rules; other indicators, such as the mean value and maximum value, were also used when making investment decisions. Wang [40] conducted similar research on spot and futures markets using genetic programming, while How [41] applied AK’s method to stocks of different capitalizations to determine the relevance of size.
William [38], comparing different technical rules and artificial neural network (ANN) rules in the oil futures market, determined that the ANN is a good tool, thus casting doubt on the efficiency of the oil market. All of these studies combine moving average indicators with other indicators to generate trading rules. In this paper, by contrast, we utilize moving averages alone to generate trading rules, which may be a simple and efficient approach.

The performance of a moving average trading rule is affected significantly by the period lengths [42]. Therefore, finding optimal lengths of the two periods above is a central issue in the technical analysis literature. A variety of lengths have been tried in existing research projects [43–48]. In the existing research, most moving average rules use fixed moving average period lengths and a single moving average calculation method. However, it is better to use variable lengths for different investment periods [49, 50], and there are different types of moving average calculation methods that can be used in technical analysis.

In this paper, considering that the optimal lengths of the moving average periods and the best calculation method may vary from one occasion to another, we use genetic algorithms to determine the suitable lengths of the moving average periods and the appropriate method. Six moving average calculation methods are considered in this paper, and genetic algorithms help us find the best method and appropriate period lengths for different circumstances. Accordingly, we are able to present the most suitable moving average trading rules for traders in the crude oil futures market.
## 2. Data and Method
### 2.1. Data
We use the daily prices of crude oil futures contract 1 for the period 1983 to 2013 from the New York Mercantile Exchange (data source: http://www.eia.gov/dnav/pet/pet_pri_fut_s1_d.htm). We select 20 groups of sample data, each containing 1000 daily prices. Within each 1000 daily prices, a 500-day price series is used to train trading rules in every generation, the following 200 prices are used to select the best generated trading rule from all generations, and the last 300 daily prices are used to determine whether the generated rule can acquire excess returns. The first group begins in 1985, the last group ends in 2013, and each 1000-day price series is selected with a step of 300. We must also include 500 more daily prices before each sample series to calculate the moving average prices for the sample period. Thus, every independent experiment requires a 1500-day price series. The data we use are presented in Figure 1.
Figure 1: Data selection.
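A rough sketch of this slicing scheme (our own illustration, not the authors’ code; `prices` is assumed to be a chronologically ordered one-dimensional NumPy array of daily settlement prices): each group consists of 500 auxiliary days for warming up the moving averages, 500 training days, 200 selection days, and 300 test days, with successive groups offset by 300 days.

```python
import numpy as np

AUX, TRAIN, SELECT, TEST = 500, 500, 200, 300  # days per sub-window
STEP = 300                                     # offset between successive groups

def sample_groups(prices: np.ndarray, n_groups: int = 20):
    """Yield (auxiliary, train, select, test) price windows, one tuple per experiment."""
    span = AUX + TRAIN + SELECT + TEST         # 1500 daily prices in total
    for g in range(n_groups):
        start = g * STEP
        if start + span > len(prices):
            break                              # not enough data left for a full group
        aux = prices[start : start + AUX]                                     # moving-average warm-up
        train = prices[start + AUX : start + AUX + TRAIN]                     # rule training
        select = prices[start + AUX + TRAIN : start + AUX + TRAIN + SELECT]   # rule selection
        test = prices[start + AUX + TRAIN + SELECT : start + span]            # out-of-sample test
        yield aux, train, select, test
```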
### 2.2. Method
Moving average trading rules facilitate decision-making for traders by comparing two moving averages of different periods. In this way, traders can predict the price trend by analyzing the volatility of the moving average prices. There are six moving average indicators usually used in technical analysis: simple moving average (SMA), weighted moving average (WMA), exponential moving average (EMA), adaptive moving average (AMA), typical price moving average (TPMA), and triangular moving average (TMA). The calculation methods of the moving average indicators are presented in Table 1.

Table 1: Details of the six moving average indicators ($p$ denotes price).

| Indicator | Calculation method |
|---|---|
| SMA | $\mathrm{SMA}(k) = \frac{1}{n}\sum_{i=0}^{n-1} p_{k-i}$ |
| WMA | $\mathrm{WMA}(k) = \frac{\sum_{i=0}^{n-1}(n-i)\,p_{k-i}}{\sum_{i=0}^{n-1}(n-i)}$ |
| EMA | $\mathrm{EMA}(k) = \mathrm{EMA}(k-1) + \mathrm{SC}\,\big(p_k - \mathrm{EMA}(k-1)\big)$, where $\mathrm{SC} = \frac{2}{1+n}$ |
| AMA | $\mathrm{AMA}(k) = \mathrm{AMA}(k-1) + \mathrm{SSC}_k^{2}\,\big(p_k - \mathrm{AMA}(k-1)\big)$, where $\mathrm{SSC}_k = \mathrm{ER}_k(\mathrm{fastSC} - \mathrm{slowSC}) + \mathrm{slowSC}$, $\mathrm{fastSC} = \frac{2}{1+2}$, $\mathrm{slowSC} = \frac{2}{1+30}$, $\mathrm{ER}_k = \frac{\lvert p_k - p_{k-n}\rvert}{\sum_{i=k-n+1}^{k}\lvert p_i - p_{i-1}\rvert}$ |
| TPMA | $\mathrm{TPMA}(k) = \frac{\mathrm{high} + \mathrm{low} + \mathrm{close}}{3}$, where $\mathrm{high} = \max(p_m, p_{m-1}, \ldots, p_{m-n+1})$, $\mathrm{low} = \min(p_m, p_{m-1}, \ldots, p_{m-n+1})$, $\mathrm{close} = p_m$ |
| TMA | $\mathrm{TMA}(k) = \frac{1}{n}\sum_{i=0}^{n-1} \mathrm{SMA}(k-i)$ |
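For illustration, here is a minimal sketch (not the authors’ code) of three of the indicators in Table 1, for a one-dimensional price array `p` and an index `k >= n - 1`; the remaining indicators follow the same pattern.

```python
import numpy as np

def sma(p, n, k):
    """SMA(k): arithmetic mean of the n prices ending at index k."""
    return float(np.mean(p[k - n + 1 : k + 1]))

def wma(p, n, k):
    """WMA(k): weight n for p[k] down to weight 1 for p[k - n + 1]."""
    weights = np.arange(n, 0, -1)                     # n, n-1, ..., 1
    window = np.asarray(p[k - n + 1 : k + 1])[::-1]   # p[k], p[k-1], ..., p[k-n+1]
    return float(np.dot(weights, window) / weights.sum())

def ema(p, n, k):
    """EMA(k): recursive smoothing with SC = 2 / (1 + n)."""
    sc = 2.0 / (1 + n)
    value = float(p[0])            # seeding at p[0] is our choice; the table leaves it open
    for i in range(1, k + 1):
        value += sc * (p[i] - value)
    return value
```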
To use a moving average trading rule in the oil futures market, at least three parameters must be set to establish a trading strategy: the lengths of the two moving average periods and the choice of the moving average method from the above six types. Other researchers have used various lengths of sample periods in their studies. In this paper, we use genetic algorithms to determine appropriate lengths of the moving average periods. According to the existing literature, the long period is generally between 20 and 200 days (very few studies use periods longer than 200 days) [38, 39], and the short period is generally no longer than 60 days.

If the long average price is lower than the short average price, a trader will take a long position; in the opposite situation, the opposite strategy will be adopted. Noting the price volatility in the futures market, taking a long position only when the short average price exceeds the long average price by at least one standard deviation of the short period may be a good rule; conversely, the same holds for taking a short position. Therefore, we designed both rules into our initial trading rules. The detailed calculation methods of the six moving averages are presented in Figure 2.
Figure 2: Structure of trading rules.

A 17-binary string is used to represent a trading rule, in which a seven-binary substring represents M − N (M is the long period length and N is the short period length), a six-binary substring represents N (N belongs to the range 1 to 64), and a three-binary substring represents the calculation method of the average prices. In this paper, the range of M − N is 5 to 132. The last binary determines whether trading strategies are changed only when there is more than one standard deviation of difference between the two moving average prices. The structure of the trading rules is presented in Figure 2. The fitness of a trading rule is calculated according to the profit it can make in the crude oil futures market. To compare generated trading rules with the BH (buy-and-hold, taking the long position throughout the period) strategy, the profit of a generated rule is the excess return rate by which it exceeds the BH strategy.

The calculation method of the return rate follows AK’s method. The difference is that we allow a trader to hold a position for a long time, and we do not calculate the return every day. Consider
$$
\begin{aligned}
R_a &= R_{al} + R_{as} + R_f - R_{bh},\\
R_{al} &= \sum_{i=1}^{n}\left(\frac{P_{out}-P_{in}}{P_{in}}\cdot\frac{1-c}{1+c}\right)\Big/R_m,\\
R_{as} &= \sum_{i=1}^{m}\left(\frac{P_{out}-P_{in}}{P_{in}}\cdot\frac{1-c}{1+c}\right)\Big/R_m,\\
R_{bh} &= \frac{P_{begin}-P_{end}}{P_{begin}}\cdot\frac{1-c}{1+c}\Big/R_m.
\end{aligned}
\tag{1}$$
Here $R_a$ is the excess return rate of a trading rule, that is, the sum of the returns of its long and short positions in excess of the BH return; $R_f$ is the risk-free return when out of the market, and $R_{bh}$ is the return rate of the BH strategy in the sample period; $R_m$ is the margin ratio of the futures market; the parameter $c$ denotes the one-way transaction cost rate; $P_{in}$ and $P_{out}$ represent the opening price and closing price of a position (long or short), respectively; $P_{begin}$ is the price of the first day in a whole period, and $P_{end}$ is the price of the last day. As we ignore the daily change in the margin and the deadline of the contract, a trader can maintain his strategy by taking new positions when a contract nears its closing date.

The fitness value is a number between 0 and 2 calculated through a nonlinear conversion of $R_a$. The fitness value calculation, selection, crossover, and mutation of individuals are implemented using the Sheffield GA toolbox on the Matlab platform. To avoid overfitting the training data, the best trading rule in every generation is tested on a selection sample period (the 200-day price series). Only when the fitness value is higher than the best value in the last generation, or when the two values are almost the same ($f_{last} - f_{now} < 0.05$), can the trading rule be marked as the best thus far. In every generation, 90 percent of the population is selected to form a new generation, while the other 10 percent is randomly generated. Accordingly, the evolution of individuals using genetic algorithms in a single independent experiment can be summarized as follows.

Step 1 (initialize population). Randomly create an initial population of 20 moving average trading rules.

Step 2 (evaluate individuals). The fitness of every individual is calculated in the evaluation step. The program calculates the moving average prices on two different scales during the training period using the auxiliary data and determines the positions on each trading day. The excess return rate of every individual is then calculated. Finally, the fitness value of each individual is calculated according to the excess return rate.

Step 3 (remember the best trading rule). Select the rule with the highest fitness value and evaluate it for the selection period to obtain its return rate. If it is better than, or not inferior to, the current best rule, it will be marked as the best trading rule. If its return rate is lower than, or less than 0.05 higher than, the current rate, we retain the current rule as the best one.

Step 4 (generate new population). Select 18 individuals according to their fitness values; the same individual may be selected more than once. Then randomly create 2 additional trading rules. With a probability of 0.7, perform a recombination operation to generate a new population. All the recombined rules are then mutated with a probability of 0.05.

Step 5. Return to Step 2 and repeat 50 times.

Step 6 (test the best trading rule). Test the best trading rule as identified by the above program. This will generate the return rate and indicate whether genetic algorithms can help traders actualize excess returns during this sample period.
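To make the rule encoding concrete, the following minimal sketch (hypothetical names, not the authors’ MATLAB implementation) decodes a 17-bit string under the layout described above: 7 bits for M − N (offset by 5, giving 5 to 132), 6 bits for N (offset by 1, giving 1 to 64), 3 bits for the calculation method, and 1 bit for the one-standard-deviation band. How the 8 possible method codes map onto the 6 indicators is not stated in the text; the modulo fold below is our assumption.

```python
METHODS = ["SMA", "WMA", "EMA", "AMA", "TPMA", "TMA"]

def decode_rule(bits: str):
    """Decode a 17-character bit string into (M, N, method, use_band)."""
    assert len(bits) == 17 and set(bits) <= {"0", "1"}
    m_minus_n = int(bits[0:7], 2) + 5            # 7 bits: 0..127 -> M - N in 5..132
    n = int(bits[7:13], 2) + 1                   # 6 bits: 0..63 -> N in 1..64
    method = METHODS[int(bits[13:16], 2) % 6]    # 3 bits folded onto 6 methods (assumption)
    use_band = bits[16] == "1"                   # require a one-std-dev crossing?
    return m_minus_n + n, n, method, use_band

# Usage example: decode an arbitrary individual.
M, N, method, use_band = decode_rule("10010100110100101")
print(M, N, method, use_band)  # long period, short period, indicator, band flag
```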
## 3. Results
Because, in this paper, we have not considered the amount of assets, we assume the margin ratio to be 0.05. In fact, as this parameter has no significant effect on our experimental results, it simply scales the return rate by a factor of twenty (1/0.05 = 20). With 20 trials in each of the 21 periods, 420 independent experiments are conducted to determine useful moving average trading rules in the crude oil futures market. The prices we used for the 21 periods are shown in Figure 3.
Figure 3: Sample data.

Based on previous studies [39, 40, 51], and on the decision to select an intermediate value for this study, the transaction cost rate is set at 0.1% for the 420 experiments. The risk-free return rate is 2%, which is based primarily on the short-term treasury bond rate [41].

Of the 420 trials, 226 earn profits. With an average return rate of 1.446, it is concluded that genetic algorithms can facilitate traders to obtain returns in the crude oil futures market. However, the moving average trading rules identified by genetic algorithms do not consistently result in excess returns, as there are only 8 periods in which generated trading rules resulted in traders receiving excess returns. Given that the price of crude oil futures increased many times during the sample period, we further contend that genetic algorithms are helpful in investments.

For a better understanding, we divide the 21 periods into 4 categories according to the results (see the last column of Table 2).
Results of experiment.

| Period | $Q_a$ | $Q$ | $R_{bh}$ | $\bar{R}$ | $\bar{R}_a$ | Classification |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 3 | 12 | 3.768 | 0.663 | −3.105 | 3 |
| 2 | 20 | 20 | −2.946 | 2.899 | 5.845 | 1 |
| 3 | 15 | 16 | 5.299 | 7.717 | 2.419 | 1 |
| 4 | 1 | 6 | 0.647 | −0.862 | −1.510 | 4 |
| 5 | 20 | 11 | −7.185 | −0.097 | 7.088 | 2 |
| 6 | 1 | 12 | 4.976 | 0.808 | −4.168 | 3 |
| 7 | 5 | 11 | 5.240 | 1.854 | −3.386 | 3 |
| 8 | 10 | 9 | −1.175 | −0.357 | 0.818 | 2 |
| 9 | 19 | 12 | −5.610 | 0.691 | 6.301 | 1 |
| 10 | 0 | 16 | 16.338 | 6.943 | −9.396 | 3 |
| 11 | 1 | 13 | 2.740 | 0.127 | −2.613 | 3 |
| 12 | 15 | 11 | −1.420 | −0.576 | 0.844 | 2 |
| 13 | 0 | 7 | 4.343 | −2.296 | −6.640 | 4 |
| 14 | 0 | 14 | 14.069 | 3.141 | −10.928 | 3 |
| 15 | 7 | 8 | 2.287 | 0.273 | −2.014 | 3 |
| 16 | 12 | 9 | −0.715 | −0.092 | 0.622 | 2 |
| 17 | 0 | 20 | 26.227 | 21.037 | −5.190 | 3 |
| 18 | 20 | 3 | −9.746 | −4.131 | 5.615 | 2 |
| 19 | 0 | 0 | 3.512 | −6.091 | −9.603 | 4 |
| 20 | 1 | 10 | 2.968 | −0.193 | −3.161 | 4 |
| 21 | 7 | 6 | −0.162 | −1.092 | −0.930 | 4 |

$Q_a$ is the number of generated trading rules receiving positive excess returns compared with the BH strategy in the 20 trials. $Q$ is the number of generated trading rules acquiring positive returns in the test periods. $R_{bh}$ is the return rate of the BH strategy. $\bar{R}$ is the average return rate of the 20 trials in each period. $\bar{R}_a$ is the average excess return rate of the 20 trials in each period.
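By the definitions above, the reported columns satisfy $\bar{R}_a = \bar{R} - R_{bh}$; the snippet below checks this identity on a few rows of Table 2 (the row selection is arbitrary).

```python
# Check that the Table 2 columns satisfy R̄a = R̄ - Rbh for the reported values.
rows = {
    1: (3.768, 0.663, -3.105),   # (Rbh, R̄, R̄a)
    2: (-2.946, 2.899, 5.845),
    9: (-5.610, 0.691, 6.301),
    17: (26.227, 21.037, -5.190),
}
for period, (rbh, r_avg, ra_avg) in rows.items():
    assert abs((r_avg - rbh) - ra_avg) < 1e-9, period
print("Table 2 rows are internally consistent")
```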
Category 1 (periods 2, 3, and 9).
In these periods, generated trading rules not only help traders obtain returns but also help them to realize excess returns. Generated trading rules generate more profits than the BH strategy in periods 3 and 9. In period 2, the BH strategy loses money, while the generated trading rules, as determined by the genetic algorithms, result in profits. Thus, the generated trading rules are far superior to the BH strategy in this period. A common feature of these three periods in Category 1 is that the crude oil prices fell during the test period and experienced significant fluctuations.Category 2 (periods 5, 8, 12, 16, and 18).
Generated moving average trading rules fail to generate profits during these five periods. Even so, the generated rules performed better than the BH strategy, as they significantly reduced losses. In these periods, prices declined smoothly, experiencing some small fluctuations during the process.Category 3 (periods 1, 6, 7, 10, 11, 14, 15, and 17).
In these eight sample data periods, genetic algorithms help traders to identify suitable moving average trading rules. However, the traders failed to obtain excess returns. While prices steadily increase in these periods, there are also some minor fluctuations, which cause the genetic algorithms to be inferior to the BH strategy in these periods.Category 4 (periods 4, 13, 19, 20, and 21).
Genetic algorithm trading rules demonstrate poor performance in these five periods. In period 21, the BH strategy yields negative returns, and our generated trading rules yield even more severe losses. In the other four periods, the BH strategy is superior to the generated trading rules because it yields some returns. While there are no significant changes in the price level in these periods, prices remain in volatile states throughout. Slight price changes with no apparent trend render the generated trading rules helpless in predicting price movements and providing returns.We use genetic algorithms to search for good moving average trading rules for traders in the crude oil market. Table 3, which shows the average values of M and N for every period, indicates that the length of the long period (M) is closely related to the volatility of prices in the sample period. A large M is generated in periods with significant fluctuations, and a small M is selected in periods in which the price is relatively stable.Table 3
The average value of M and N in each period.

| Period | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Avg. (M) | 104 | 108 | 65 | 121 | 101 | 59 | 51 | 55 | 143 | 107 | 140 | 131 | 118 | 81 | 78 | 96 | 160 | 119 | 112 | 101 | 71 | 101 |
| Avg. (N) | 24 | 36 | 15 | 45 | 29 | 29 | 26 | 16 | 47 | 37 | 36 | 43 | 44 | 47 | 22 | 18 | 53 | 24 | 19 | 31 | 40 | 32 |

The distribution of M is shown in Figure 4. The p value (Probability) is very small, so M does not follow a normal distribution. The figure presents a typical fat-tail characteristic with a kurtosis of 2.36: compared with a normal distribution, more values are located in the tails of the distribution in our results. In only half of the 420 experiments is M between 70 and 130 days. The values are dispersed, and we believe it is more scientific in actual investment to choose the best lengths of the two periods by a training process such as the one used in this paper.
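The normality check can be reproduced with a standard test such as Jarque-Bera (the paper does not name the test it used, so this is an assumption). In the sketch below, `m_values` is a hypothetical stand-in for the 420 generated values of M, which are not tabulated in the paper; only the testing procedure is illustrated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
m_values = rng.integers(6, 197, size=420)      # stand-in: M can range over 6..196

stat, p_value = stats.jarque_bera(m_values)    # tests H0: the sample is normal
kurt = stats.kurtosis(m_values, fisher=False)  # Pearson kurtosis (normal = 3)

print(f"JB = {stat:.2f}, p = {p_value:.4f}, kurtosis = {kurt:.2f}")
print("normality rejected" if p_value < 0.05 else "normality not rejected")
```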
Figure 4
Distribution of M.Among the six moving average calculation methods, AMA and TMA are used more often than the other four (see Table 4): more than half of the generated moving average trading rules use AMA or TMA. A small number of generated trading rules use WMA and EMA, while TPMA and SMA, which are easy to calculate, are frequently used in some periods, such as periods 1, 2, 3, 12, 19, and 21.Table 4
Calculation methods of moving average price in each period. Each row lists the per-period usage counts in period order (periods with zero uses omitted), together with the total count over the 420 experiments.

| Method | Per-period counts (zero cells omitted) | Count |
| --- | --- | --- |
| SMA | 7, 4, 1, 4, 3, 3, 7, 4, 10, 1, 2, 1, 13, 6, 3 | 69 |
| WMA | 1, 1, 1, 5, 3, 2, 1, 3, 6 | 23 |
| EMA | 1, 4, 7, 2, 2, 15 | 31 |
| AMA | 1, 1, 20, 15, 14, 8, 8, 2, 9, 4, 18, 16, 18, 2, 1, 8 | 145 |
| TPMA | 8, 19, 12, 2, 3, 2, 6, 2, 1, 3, 9 | 67 |
| TMA | 2, 4, 3, 1, 8, 11, 11, 2, 13, 2, 19, 1, 1, 7 | 85 |

The selection of the calculation method is associated with price trends and volatility. Figure 5 shows that TPMA is used 31 times in the 60 independent experiments of periods 2, 3, and 9 (Category 1). In contrast to its overall proportion, TPMA is the most popular calculation method when the price falls with significant fluctuations during the period. AMA is the most popular method in the other three categories. EMA is never used in Categories 1 and 4; however, it takes a 24% share in Category 2, more than TMA, SMA, TPMA, and WMA. The proportions of TMA and SMA show no significant differences across categories. In Category 4, prices change with no apparent trend, and no single method has an obvious advantage over the others.Figure 5
Proportions of methods in different categories.The results of the 20 experiments in the same period indicate high consistency in the value of sd (Table 5). When prices fluctuate, such as in periods 1, 2, 7, 8, 13, 19, and 20, not opening a position until one average price exceeds the other by at least one standard deviation is the best option. When the price is relatively stable, an investment decision should be made immediately once the two moving averages cross.Table 5
Numbers of trading rules in which sd = 1.

| Period | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | Count |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Count | 19 | 19 | 20 | 0 | 5 | 3 | 13 | 12 | 0 | 2 | 0 | 0 | 13 | 2 | 0 | 2 | 0 | 1 | 20 | 19 | 0 | 150 |
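As an illustration of the sd flag summarized in Table 5, the sketch below shows one plausible form of the entry decision; the paper gives no pseudocode, so the function is an assumption-laden reading of the rule (open on any crossover when sd = 0, require a one-standard-deviation gap when sd = 1).

```python
import statistics

def desired_position(short_ma, long_ma, recent_prices, sd_flag):
    """Return +1 (long), -1 (short), or 0 (no new position) for one trading day."""
    threshold = statistics.stdev(recent_prices) if sd_flag else 0.0
    if short_ma - long_ma > threshold:
        return 1
    if long_ma - short_ma > threshold:
        return -1
    return 0  # gap too small: keep the current position

# Example: with sd = 1, the 2.0 gap must exceed the stdev of the last N prices.
print(desired_position(52.0, 50.0, [49.0, 50.5, 51.0, 50.0, 52.0], sd_flag=1))
```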
## 4. Discussion
This paper attempts to generate moving average trading rules in the oil futures market using genetic algorithms. Different from other studies, we use only moving averages as technical indicators to identify useful trading rules, without any other complex technical analysis tools or indicators. Moving average trading rules are easy for traders to operate, and they are straightforward regardless of the situation. To identify the best trading rules in the crude oil futures market, we use genetic algorithms to select all the parameters of the moving average trading rules dynamically rather than fixing them in advance.According to our experimental results, using genetic algorithms to find the best lengths of the two moving average periods is advocated because the generated lengths differ across different price trends. Static moving average trading rules with fixed period lengths cannot adapt to the complex price fluctuations of different periods. A training process, however, which takes the dynamic features of price fluctuations into consideration, can help traders find the optimal lengths of the two moving periods of a trading rule.Among the six moving average methods, AMA and TMA are the most popular among the generated trading rules, as these two methods can adapt to the price trends. The AMA changes the weight of the current price according to the volatility over the last several days. As the TMA is the average of the SMA, it reflects the price level more accurately. However, the selection of the best moving average calculation method is affected by price trends, and traders can choose methods more scientifically according to the price trends and fluctuations. Based on our experimental results, TPMA is an optimal choice when the price experiences a decline with significant fluctuations, and the generated moving average trading rules clearly outperform the BH strategy on these occasions. Although EMA accounts for a very small proportion of the 420 experiments, it is, besides AMA, an applicable method when the price falls smoothly.For the periods in which price volatility is apparent, decisions should not be made until the difference between the two averages exceeds the standard deviation of the short sample prices, thereby reducing the transaction risk. However, this method is not suitable for a period in which the price is relatively stable; in such situations, hesitation may cause traders to miss possible profit opportunities.As a whole, generated moving average trading rules can help traders make profits in the long term. However, genetic algorithms cannot guarantee additional revenue in every period, as they are only useful in acquiring excess returns in special situations. The generated moving average trading rules demonstrate outstanding performance when the crude oil futures price falls with significant fluctuations. The BH strategy loses on these occasions, while the generated trading rules can help traders foresee a decline in price and reduce losses. Our trading rules also yield positive returns during the fluctuations by the timely changing of positions.When the price falls smoothly with few fluctuations in the process, generated trading rules can yield excess returns compared to the BH strategy. Although genetic algorithms cannot help traders receive positive returns during these periods, they can help traders reduce losses by changing positions as the price trend changes.
When the price is stable or rising smoothly, the generated rules may produce returns; however, they cannot produce more returns than the BH strategy, and the limited returns cannot cover the transaction costs. When the price falls, the generated rules may be superior to the BH strategy. Genetic algorithms can also help traders make profits when the price increases with small fluctuations; in these periods, however, the BH strategy is better than the generated trading rules because the transactions along the way incur transaction costs and may miss some profit opportunities. Generated moving average trading rules perform poorly when there is no notable trend in the price change. In such periods, the moving average indicators cannot find profit opportunities because the volatility is too small. The trends of price changes are delayed by the moving average method; therefore, by the time a decision is made, the price trend has already changed, and as a result there is no doubt that the trader will experience deficits.Using genetic algorithms, moving average trading rules do help traders gain returns in the actual futures market. We also identified the best lengths of the two periods for moving average rules and recommend moving average calculation methods for the crude oil futures market. Technical trading rules based only on moving average indicators generated by genetic algorithms demonstrate no sufficient advantage over the BH strategy, because the overall price increased during the 30-year period. Nevertheless, generated moving average trading rules are beneficial for traders under certain circumstances, especially when there are significant changes in prices.In this paper, we search for the best trading rules according to their return rates, without regard to asset conditions and open interest, which is the greatest limitation of this study. To improve the accuracy of the results, a simulation with actual assets is recommended; we will undertake this endeavor in subsequent research.
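For reference, the two indicators favored by the generated rules can be computed as follows. This Python sketch follows the AMA and TMA definitions given in Table 1 of this paper (with the fast and slow smoothing constants 2/(1+2) and 2/(1+30)); it is illustrative and not the paper's Matlab implementation.

```python
def sma(prices, k, n):
    """Simple n-day moving average ending on day k."""
    return sum(prices[k - n + 1:k + 1]) / n

def tma(prices, k, n):
    """Triangular MA: the n-day average of n-day SMAs (needs k >= 2n - 2)."""
    return sum(sma(prices, k - i, n) for i in range(n)) / n

def ama(prices, n):
    """Adaptive MA: the smoothing constant tracks the efficiency ratio ER_k."""
    fast_sc, slow_sc = 2 / (1 + 2), 2 / (1 + 30)
    out = [prices[n]]                                # initialise on day n
    for k in range(n + 1, len(prices)):
        change = abs(prices[k] - prices[k - n])
        noise = sum(abs(prices[i] - prices[i - 1]) for i in range(k - n + 1, k + 1))
        er = change / noise if noise else 0.0        # efficiency ratio
        ssc = er * (fast_sc - slow_sc) + slow_sc     # scaled smoothing constant
        out.append(out[-1] + ssc ** 2 * (prices[k] - out[-1]))
    return out

p = [50, 51, 50.5, 52, 53, 52.5, 54, 55, 54.5, 56]
print(round(tma(p, 9, 3), 3), [round(v, 3) for v in ama(p, 3)])
```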
## 5. Concluding Remarks
We conclude that genetic algorithms can identify better technical rules that allow traders to realize profits from their investments. We have no evidence that the generated trading rules yield greater returns than the BH strategy, which is consistent with the efficient market hypothesis: while generated trading rules help traders realize excess returns under specific circumstances, they cannot, at least using moving average trading rules alone, ensure long-term excess returns beyond the BH strategy. With respect to the selection of the two periods, finding the optimal lengths using genetic algorithms is helpful for making more profits. Of the six moving average indicators, AMA and TMA are overall the most popular moving average calculation methods for the crude oil futures market, while TPMA is an outstanding method on some occasions. When crude oil prices demonstrate notable volatility, a trader is advised to wait until the difference of the two moving averages exceeds the standard deviation of the short period, and vice versa.Based on the above analysis, it is better to use the BH strategy when the price increases or is stable, whereas generated moving average trading rules are better than the BH strategy when the crude oil futures price decreases. With respect to the moving average calculation method, it is advisable to use TPMA when the price falls with significant fluctuations and AMA when the price falls smoothly, although TPMA is not a popular method overall. We therefore propose variable moving average trading rules generated by training processes, rather than static moving average trading rules, for the crude oil futures markets.
---
*Source: 101808-2014-05-26.xml* | 101808-2014-05-26_101808-2014-05-26.md | 38,751 | Generating Moving Average Trading Rules on the Oil Futures Market with Genetic Algorithms | Lijun Wang; Haizhong An; Xiaohua Xia; Xiaojia Liu; Xiaoqi Sun; Xuan Huang | Mathematical Problems in Engineering
(2014) | Engineering & Technology | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2014/101808 | 101808-2014-05-26.xml | ---
*Source: 101808-2014-05-26.xml* | 2014 |
# Stabilization of Teleoperation Systems with Communication Delays: An IMC Approach
**Authors:** Yuling Li
**Journal:** Journal of Robotics
(2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1018086
---
## Abstract
The presence of time delays in communication introduces a limitation to the stability of bilateral teleoperation systems. This paper considers internal model control (IMC) design of linear teleoperation system with time delays, and the stability of the closed-loop system is analyzed. It is shown that the stability is guaranteed delay-independently. The passivity assumption for external forces is removed for the proposed design of teleoperation systems. The behavior of the resulting teleoperation system is illustrated by simulations.
---
## Body
## 1. Introduction
Teleoperation systems enable humans to extend their manipulation capacity to remote interfaces with better safety, at less cost, and even with better accuracy. A typical teleoperation system is composed of five parts: a human operator, a master robot which is operated by the human operator, a slave robot, the environment interacting with the slave, and the communication channel between the master and the slave. The main objectives of control design for a bilateral system are the stability of the closed-loop system, position coordination between the master and the slave, and the haptic display of the environment force to the human operator.When there are significant delays in the communication channel of the teleoperation system, one major issue is the stability of the system [1]. The passivity-based control approaches, which are based on the scattering theory [1] or the wave variable concept [2], are widely used to design stable teleoperation systems with time delays. These controllers render the communication link passive and thus guarantee stable bilateral teleoperation of any passive environment by any passive user; see [3] and the references therein. For passivity-based controlled teleoperation systems, a passivity assumption on the external forces is required. In reality, however, this assumption is not easy to satisfy, and thus this control strategy has its own limitation in real applications.Realizing the disadvantages of applying passivity-based control schemes to teleoperation systems, we propose an alternative method in which the internal model control (IMC) structure is introduced to linear teleoperation system design. IMC was a well-established control strategy around the 1980s, and its original configuration and several modified structures have been successfully applied to various applications from chemical processes to automotive systems (see [4–7] and many references therein). The extension to nonlinear systems has also been reported, as it shows attractive properties beyond linear system design [8]. Some intelligent methods were introduced in the modification of IMC structures and nonlinear extensions [9–12]. Moreover, there have been efforts in recent years to extend the IMC design method to nonlinear systems in the linear parameter varying (LPV) framework [13, 14]. The application of IMC to teleoperation systems is not new either. Hayn and Schwarzmann employed an IMC structure to design position controllers for a teleoperation system with a hydraulic manipulator as the slave and a haptic device as the master [15]; however, the authors in [15] assume that no delay exists in the communication. The authors in [16] proposed an IMC design for teleoperation systems with time-varying delays, although it is actually a Smith-predictor-based design. In this work, an IMC-based control structure for delayed teleoperation systems is proposed, and no restrictive assumptions are made on this structure; in particular, the passivity assumption is not required for the proposed control scheme.The remainder of this paper is arranged as follows. In Section 2, the system modeling and some preliminaries are given. In Section 3, a control architecture is given, and its stability is analyzed in Section 4. A simple single-DOF teleoperation system is given as an example to show the effectiveness of the proposed method in Section 5. Finally, the summary and conclusion of this paper are given in Section 6.
## 2. Preliminaries
A single-DOF linear master/slave manipulator can be written as [17]
$$m_m\ddot{x}_m(t)+b_m\dot{x}_m(t)+k_mx_m(t)=f_h(t)+f_m(t),\qquad m_s\ddot{x}_s(t)+b_s\dot{x}_s(t)+k_sx_s(t)=-f_e(t)+f_s(t),\tag{1}$$
where $x_i$, $\dot{x}_i$, $\ddot{x}_i$ are the joint positions, velocities, and accelerations of the master and slave devices, with $i=m$ or $s$ representing the master or slave robot manipulator, respectively. Similarly, $m_i$, $b_i$, $k_i$ are the effective mass, damping, and spring coefficients of the master and slave devices. The external forces applied to the devices by the human operator and the environment are represented by $f_h$ and $f_e$, respectively, while $f_m$ and $f_s$ stand for the control signals.For simplicity, the transfer functions of the master and the slave are given as follows:
$$G_m\colon\ G_m(s)=\frac{y_m(s)}{f_h(s)+f_m(s)}=\frac{1}{m_ms^2+b_ms+k_m},\tag{2}$$
$$G_s\colon\ G_s(s)=\frac{y_s(s)}{-f_e(s)+f_s(s)}=\frac{1}{m_ss^2+b_ss+k_s}.\tag{3}$$
In this work, the following assumptions are required.Assumption 1.
The forward and backward delays through the communication channel, denoted by $T_1$ and $T_2$, respectively, are assumed to be constant but of arbitrary value.

Remark 2.
Assumption 1 is made for simplicity. The main results in this paper are also valid for teleoperation systems with time-varying delays; in Section 5, we also provide a simulation study for the case with time-varying delays.
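To make the plant model (1) concrete, the following minimal Python sketch integrates a single-DOF device with explicit Euler steps. The function name, step size, and step-force input are illustrative choices, not anything specified in the paper.

```python
import numpy as np

def simulate_plant(m, b, k, force, dt=1e-3):
    """Euler integration of m*x'' + b*x' + k*x = f(t), cf. eq. (1).
    `force` is an array of the total applied force at each step."""
    x, v = 0.0, 0.0
    xs = np.empty(len(force))
    for n, f in enumerate(force):
        a = (f - b * v - k * x) / m   # acceleration from the dynamics
        v += a * dt                   # integrate velocity
        x += v * dt                   # integrate position
        xs[n] = x
    return xs

# Illustrative: master under a 1 N step force, parameters taken from Section 5
t = np.arange(0.0, 5.0, 1e-3)
xm = simulate_plant(m=0.3, b=1.0, k=10.0, force=np.ones_like(t))
```

scipy.signal's `TransferFunction` would give the same responses, but plain Euler keeps the sketch dependency-free.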
## 3. IMC Structure for Teleoperation Systems
This section proposes an IMC-based control structure that guarantees the delay-independent stability of the closed-loop system.

Let us start the controller design from one side of the teleoperation system, say the master side. Inspired by the traditional IMC structure, we postulate the IMC structure for the master in Figure 1, where $G_m$ represents the master dynamics as in (2) and $\tilde{G}_m$ represents the model of the master manipulator. Note that the two-degree-of-freedom IMC structure [4] is utilized, since the dynamic characteristics of the two inputs $f_h, r_{sd}$ are substantially different. To describe the transparency, the reference signal $r_{sd}$ should come from the slave. Considering the communication delay between the master and the slave, the reference $r_{sd}$ is delayed; that is,

$$r_{sd}(t) = r_s(t - T_2). \tag{4}$$

The controllers $C_{11}, C_{12}$ are operators of $r_m, r_{sd}$, respectively; that is, $C_{11}(\cdot) = C_{11}(r_m(t))$ and $C_{12}(\cdot) = C_{12}(r_{sd}(t))$. The human operator force $f_h(t)$ appears to be a kind of "disturbance" under the original IMC design interpretation; however, we will see that it should not be canceled in our design. A detailed discussion is given later.

Figure 1: IMC structure of the master subsystem.

Analogously, the IMC structure for the slave side is depicted in Figure 2, where $C_{21}, C_{22}$ are linear operators of $r_{md}$ and $r_s$; that is, $C_{21}(\cdot) := C_{21}(r_{md}(t))$ and $C_{22}(\cdot) := C_{22}(r_s(t))$, and

$$r_{md}(t) = r_m(t - T_1). \tag{5}$$

Figure 2: IMC structure of the slave subsystem.

The coordinating torques are given as

$$f_m(t) = C_{12}\big(r_s(t - T_2)\big) - C_{11}\big(r_m(t)\big), \qquad f_s(t) = C_{21}\big(r_m(t - T_1)\big) - C_{22}\big(r_s(t)\big). \tag{6}$$

Equations (6) can also be represented in the Laplace domain:

$$F_m(s) = \begin{bmatrix} -C_{11}(s) & C_{12}(s)e^{-sT_2} \end{bmatrix} \begin{bmatrix} R_m(s) \\ R_s(s) \end{bmatrix}, \qquad F_s(s) = \begin{bmatrix} C_{21}(s)e^{-sT_1} & -C_{22}(s) \end{bmatrix} \begin{bmatrix} R_m(s) \\ R_s(s) \end{bmatrix}. \tag{7}$$

The control architecture is shown in Figure 3.

Figure 3: IMC structure of teleoperation systems.

Remark 3.
From Figure 3, compared with the classical IMC structure [4], we find that the external forces $f_h, f_e$ play the role of the "disturbances" in the classical IMC interpretation. The difference is that these "disturbances" act on the master and the slave directly. In reality, these forces are a kind of "excitation signal" and should not be canceled.

Remark 4.
Actually, the outputs $y_m, y_s$ can be positions or velocities, or even the states of the master and the slave. In this paper, we assume that only the position is available for measurement; that is, $y_m = x_m$ and $y_s = x_s$.
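In discrete time, the coordinating torques (6) reduce to reading the partner's reference through a delay buffer. The sketch below is a minimal illustration assuming constant delays and static gains for the $C_{ij}$ (the paper only requires the $C_{ij}$ to be stable operators); `DelayLine` and the gain values are invented names for this example.

```python
from collections import deque

class DelayLine:
    """FIFO modeling a constant transmission delay of `steps` >= 1 samples."""
    def __init__(self, steps):
        self.buf = deque([0.0] * steps, maxlen=steps)
    def push(self, value):
        out = self.buf[0]        # the sample pushed `steps` calls ago
        self.buf.append(value)   # maxlen discards the oldest entry
        return out

# eq. (6) with the C_ij taken as static gains, an illustrative choice
C11 = C12 = C21 = C22 = 2.0
dt = 1e-3
fwd = DelayLine(int(1.0 / dt))   # T1 = 1 s, master -> slave
bwd = DelayLine(int(1.0 / dt))   # T2 = 1 s, slave -> master

def coordinating_torques(r_m, r_s):
    r_md = fwd.push(r_m)          # r_md(t) = r_m(t - T1), eq. (5)
    r_sd = bwd.push(r_s)          # r_sd(t) = r_s(t - T2), eq. (4)
    f_m = C12 * r_sd - C11 * r_m  # eq. (6)
    f_s = C21 * r_md - C22 * r_s
    return f_m, f_s
```

A dynamic (filter-based) $C_{ij}$ would simply replace the scalar multiplications with state updates.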
## 4. Stability Analysis
In this section, the stability of the closed-loop system is discussed.

Suppose $r(t) := \begin{bmatrix} r_m(t) \\ r_s(t) \end{bmatrix}$, $u(t) := \begin{bmatrix} f_m(t) \\ f_s(t) \end{bmatrix}$, $y_o(t) := \begin{bmatrix} y_m(t) \\ y_s(t) \end{bmatrix}$, and $w(t) := \begin{bmatrix} f_h(t) \\ -f_e(t) \end{bmatrix}$; then the closed-loop system in Figure 3 can be redrawn as in Figure 4, where $G(s) = \begin{bmatrix} G_m(s) & 0 \\ 0 & G_s(s) \end{bmatrix}$ and $\tilde{G}(s) = \begin{bmatrix} \tilde{G}_m(s) & 0 \\ 0 & \tilde{G}_s(s) \end{bmatrix}$.

Figure 4: Equivalent IMC structure of teleoperation systems.

Let $R, U$ be the Laplace transforms of $r, u$; then the controllers (7) can be rewritten as

$$U(s) = -C(s)R(s), \tag{8}$$

where

$$C(s) = \begin{bmatrix} C_{11}(s) & -C_{12}(s)e^{-sT_2} \\ -C_{21}(s)e^{-sT_1} & C_{22}(s) \end{bmatrix}. \tag{9}$$

From the block diagram of the IMC structure shown in Figure 4, the output $y_o$ is related to the input $w$ by

$$Y_o(s) = G\left[I + C\left(I - \tilde{G}C\right)^{-1}G\right]^{-1}W(s). \tag{10}$$

Since $C(I - \tilde{G}C)^{-1} = (I - C\tilde{G})^{-1}C$, (10) can be rewritten as

$$Y_o(s) = G\left[I - C\tilde{G} + CG\right]^{-1}\left(I - C\tilde{G}\right)W(s). \tag{11}$$

When the system model $\tilde{G}$ matches the plant $G$ perfectly, that is, $\tilde{G} = G$, one has

$$Y_o(s) = G\left(I - CG\right)W(s). \tag{12}$$

It can be seen that internal stability is always ensured as long as a stable parameter $C_0(s) = \begin{bmatrix} C_{11}(s) & C_{12}(s) \\ C_{21}(s) & C_{22}(s) \end{bmatrix}$ is used to control the stable plant $G$. As for the traditional IMC system, we have the following important property.

Theorem 5 (dual stability).
Assume that the master/slave models and the master/slave manipulator dynamics match perfectly; that is, $G_i(s) = \tilde{G}_i(s)$ $(i = m, s)$. Then the stability of $C_0$ is sufficient for the stability of the overall closed-loop system.

Remark 6.
In the proposed control design, there are no passivity assumptions on the human force $f_h$ or the environmental force $f_e$; under the IMC interpretation, these forces can be any bounded signals.

Remark 7.
Actually, when $G = \tilde{G}$, the system is basically open loop, so this IMC-based design of teleoperation systems provides the open-loop advantages. When the perfect model is not available, that is, $G \neq \tilde{G}$, the overall system is a closed-loop system. Thus, the IMC control strategy combines the advantages of both the open-loop and the closed-loop structures [18]. However, the stability condition for the case $G \neq \tilde{G}$ becomes complex, since the off-diagonal delay terms remain in $C$; this will be a subject of ongoing research.

Remark 8.
For the case $G = \tilde{G}$, the benefit of this structure is that the communication time delays do not enter the feedback channel. Hence the stability analysis of the overall system is simplified given stable controllers $C_{11}(s)$, $C_{12}(s)$, $C_{21}(s)$, and $C_{22}(s)$. In other words, this control structure guarantees the stability of the overall system for communication delays ranging from 0 to an arbitrary value, which suggests a way to choose suitable $C_{11}(s)$, $C_{12}(s)$, $C_{21}(s)$, and $C_{22}(s)$ with sound performance.

Remark 9.
Theorem 5 is also applicable to teleoperation systems with time-varying delays, since the communication delays do not enter the feedback loop.
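The identity behind (10)-(12) can be checked numerically by evaluating the transfer matrices on the imaginary axis. The sketch below does this for the perfect-model case $\tilde{G} = G$; the plant parameters are those of Section 5, and the static gains used for the $C_{ij}$ are an illustrative assumption.

```python
import numpy as np

def G_scalar(s, m, b, k):
    """Second-order plant 1 / (m s^2 + b s + k), cf. eqs. (2)-(3)."""
    return 1.0 / (m * s**2 + b * s + k)

def closed_loop_maps(omega, T1=1.0, T2=1.0, c=2.0):
    """Evaluate eqs. (10) and (12) at s = j*omega for a perfect model."""
    s = 1j * omega
    G = np.diag([G_scalar(s, 0.3, 1.0, 10.0),   # master, Section 5 parameters
                 G_scalar(s, 1.0, 3.0, 10.0)])  # slave
    C = np.array([[c, -c * np.exp(-s * T2)],    # eq. (9), static gains c
                  [-c * np.exp(-s * T1), c]])
    I = np.eye(2)
    H10 = G @ np.linalg.inv(I + C @ np.linalg.inv(I - G @ C) @ G)  # eq. (10)
    H12 = G @ (I - C @ G)                                          # eq. (12)
    return H10, H12

for omega in (0.1, 1.0, 10.0):
    H10, H12 = closed_loop_maps(omega)
    assert np.allclose(H10, H12)  # the two forms agree when the model is perfect
```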
## 5. Simulation and Results
We consider a simple single-DOF teleoperation system with the dynamics (1), where $m_m = 0.3$ kg, $m_s = 1$ kg, $b_m = 1$ Ns/m, $b_s = 3$ Ns/m, $k_m = 10$ N/m, and $k_s = 10$ N/m. The operator is assumed to have the dynamics

$$f_h(t) = f(t) - b_{op}\dot{x}_m(t) - k_{op}x_m(t), \tag{13}$$

where $f(t)$ is the rectangular signal depicted in Figure 5, $b_{op} = 3$, and $k_{op} = 200$. The environmental force is chosen as $f_e(t) = -k_{env}x_s(t)$, with $k_{env} = 400$.

Figure 5: External force $f(t)$.

We first implement the control in Figure 3 assuming that there are no delays in the communication channel and obtain the simulation results shown in Figures 7 and 8. The chosen control parameters $C_{ij}$ are represented by their step responses, depicted in Figure 6. It can be seen that the master and the slave respond stably. As shown in Figure 7, the slave's motion follows the master's quickly, with no delay; however, there is a slight oscillation in the position of the master, which means the control parameters $C_{ij}$ could be chosen better. Even though the positions of the master and the slave achieve perfect tracking, there is a static error between the human force and the environmental force (Figure 8). This error may not be canceled, since position tracking and force tracking are two objectives that require a trade-off.

Figure 6: Step responses of $C_{11}(s)$, $C_{12}(s)$, $C_{21}(s)$, and $C_{22}(s)$.

Figure 7: Position tracking performance when there are no delays in the communication channel.

Figure 8: Force tracking performance when there are no delays in the communication channel.

Now we run simulations for the case where delays exist in the communication channel. Let us first assume that the time delays in the forward and backward channels are symmetric; that is, $T_1 = T_2 = 1$ s. The system performance with the designed controller in this case is depicted in Figures 9 and 10. It can be seen from Figure 10 that when the operator exerted force on the master around $t = 1$ s, the slave contacted the environment after the delay of 1 second and then received the environmental force $f_e$. The (delayed) position tracking performance depicted in Figure 9 is similar to that obtained when there are no delays in the communication channel.

Figure 9: Position tracking performance ($T_1 = T_2 = 1$ s).

Figure 10: Force tracking performance ($T_1 = T_2 = 1$ s).

We furthermore simulate the system with suddenly appearing and suddenly disappearing communication delays (both around $t = 4$ s), the delays being symmetric. For brevity, we only provide the position tracking performances, depicted in Figures 11 and 12. It is easy to see that the closed-loop system stays stable in this case. During the time interval with communication delays, the slave follows the master's motion after the corresponding delay.

Figure 11: Position tracking performance with suddenly appearing delays.

Figure 12: Position tracking performance with suddenly disappearing delays.

We next consider the system's performance when the forward and backward communication delays differ. We decrease the forward communication delay to 0.5 s, increase the backward communication delay to 2 s, and implement the control in Figure 3, obtaining the simulation results in Figures 13 and 14. These figures imply that the closed-loop system's stability is preserved under asymmetric communication delays, showing that the designed controller stabilizes the system independently of the delay values.

Figure 13: Position tracking performance ($T_1 = 0.5$ s, $T_2 = 2$ s).

Figure 14: Force tracking performance ($T_1 = 0.5$ s, $T_2 = 2$ s).

Finally, we simulate the teleoperation system with varying communication delays. The forward and backward time delays in the communication channel are modeled as $T_i(t) = |X_i(t)|$ $(i = 1, 2)$, where the $X_i$ are random variables with normal distribution of mean $\tau_v$ and standard deviation $\delta$, denoted by the standard notation $X_i(\cdot) \sim N(\tau_v, \delta^2)$; here we choose $X_i \sim N(0.4, 0.01)$. The position tracking performance and force tracking performance are depicted in Figures 15 and 16. The system is stable with good tracking performance even with bounded stochastic communication delays, which indicates that our method is also valid for teleoperation systems with varying delays.

Figure 15: Position tracking performance ($T_i(t) = |X_i(t)|$ $(i = 1, 2)$, $X_i \sim N(0.4, 0.01)$).

Figure 16: Force tracking performance ($T_i(t) = |X_i(t)|$ $(i = 1, 2)$, $X_i \sim N(0.4, 0.01)$).
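For the qualitative behavior reported above, the sketch below closes the loop of Figure 3 in discrete time under the Section 5 parameters. It is a minimal illustration, not the authors' simulation code: the $C_{ij}$ are taken as static gains, the timing of the rectangle signal $f(t)$ is a guess, and the sign of the environment spring is chosen so that it opposes the slave's motion. Drawing the per-step delays from $|N(0.4, 0.01)|$ instead of keeping them constant reproduces the stochastic-delay experiment.

```python
import numpy as np
from collections import deque

def make_delay(steps):
    """Constant transmission delay of `steps` >= 1 samples (FIFO buffer)."""
    buf = deque([0.0] * steps, maxlen=steps)
    def push(v):
        out = buf[0]          # sample from `steps` steps ago
        buf.append(v)
        return out
    return push

def simulate(T1=1.0, T2=1.0, dt=1e-3, horizon=10.0, c=2.0):
    mm, bm, km = 0.3, 1.0, 10.0          # master parameters (Section 5)
    ms, bs, ks = 1.0, 3.0, 10.0          # slave parameters
    bop, kop, kenv = 3.0, 200.0, 400.0   # operator and environment
    fwd = make_delay(max(1, int(T1 / dt)))   # master -> slave channel
    bwd = make_delay(max(1, int(T2 / dt)))   # slave -> master channel
    xm = vm = xs = vs = 0.0              # plant states
    x_mm = v_mm = x_sm = v_sm = 0.0      # internal models, driven by f_m, f_s only
    out = np.zeros((int(horizon / dt), 2))
    for i in range(len(out)):
        t = i * dt
        f = 1.0 if 1.0 <= t < 3.0 else 0.0   # rectangle f(t), guessed timing
        fh = f - bop * vm - kop * xm         # operator dynamics, eq. (13)
        fe = kenv * xs                       # environment spring (sign: opposes slave)
        rm, rs = xm - x_mm, xs - x_sm        # IMC residuals r = y - G~ u
        fm = c * bwd(rs) - c * rm            # coordinating torques, eq. (6)
        fs = c * fwd(rm) - c * rs
        # explicit Euler for the plants, eq. (1), and for the internal models
        am = (fh + fm - bm * vm - km * xm) / mm
        a_s = (-fe + fs - bs * vs - ks * xs) / ms
        a_mm = (fm - bm * v_mm - km * x_mm) / mm
        a_sm = (fs - bs * v_sm - ks * x_sm) / ms
        vm += am * dt;   xm += vm * dt
        vs += a_s * dt;  xs += vs * dt
        v_mm += a_mm * dt; x_mm += v_mm * dt
        v_sm += a_sm * dt; x_sm += v_sm * dt
        out[i] = (xm, xs)
    return out

positions = simulate()   # columns: master and slave positions over time
```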
## 6. Conclusion
In this paper, we investigated the IMC-based control design of linear teleoperation systems with communication delays. The stability of the overall system is guaranteed if a perfect model is available, and the method removes the passivity assumption on the external forces. Simulations of a single-DOF linear teleoperation system show that stability is preserved when the designed controller is applied, and good tracking performance can be achieved if the parameters $C_{ij}$ are chosen suitably. Extensions to the case where the models and the plants are not perfectly matched, and to nonlinear teleoperation systems, are under study, and the results will be reported in the near future.
---
*Source: 1018086-2018-02-20.xml*
# Secure Cooperative Spectrum Sensing for the Cognitive Radio Network Using Nonuniform Reliability
**Authors:** Muhammad Usman; Insoo Koo
**Journal:** The Scientific World Journal
(2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/101809
---
## Abstract
Both reliable detection of the primary signal in a noisy and fading environment and nullifying the effect of unauthorized users are important tasks in cognitive radio networks. To address these issues, we consider a cooperative spectrum sensing approach in which each user is assigned a nonuniform reliability based on its sensing performance. Users with a poor channel or a faulty sensor are assigned low reliability. The nonuniform reliabilities serve as identification tags and are used to isolate users with malicious behavior. We consider a link-layer attack similar to the Byzantine attack, which falsifies the spectrum sensing data. Three different strategies are presented in this paper to ignore unreliable and malicious users in the network. Considering only reliable users for the global decision improves the sensing time and decreases collisions in the control channel. The fusion center uses the degree of reliability as a weighting factor to determine the global decision in scheme I. Schemes II and III consider the unreliability of users, which makes the computations even simpler. The proposed schemes reduce the number of sensing reports and increase the inference accuracy. The advantages of our proposed schemes over conventional cooperative spectrum sensing and the Chair-Varshney optimum rule are demonstrated through simulations.
---
## Body
## 1. Introduction
The increasing demand for wireless services has driven the need for intelligent allocation and efficient use of the wireless spectrum. Conventional spectrum allocation results in spatiotemporal underutilization and scarcity of the spectrum. According to the Federal Communications Commission (FCC), the spatial and temporal variations in the utilization of the assigned spectrum range from 15% to 85% [1, 2].

Cognitive radio (CR) technology has been proposed to combat the spectrum shortage problem by allowing the opportunistic use of the wireless spectrum, which is primarily allocated to primary (licensed) users (PUs), by secondary (unlicensed) users (SUs) under a given level of interference to the PU [3, 4]. Such a scheme requires the SU to detect the PU signal accurately and quickly [5]. Among the various techniques used for spectrum sensing are energy detection, cyclostationary detection, matched filter detection, wavelet detection, and covariance detection. Energy detection is the method of choice due to its computational simplicity and ease of implementation, as well as its minimal requirement of prior knowledge of the primary signal. However, the sensing performance of a single SU is greatly affected by destructive channel effects such as shadowing and fading, which hinder the ability of the SU to distinguish between a deep fade and white space. Cooperative spectrum sensing (CSS) is used to overcome the channel effects and exploit location diversity to detect even a weak primary signal [6].

The presence of a malicious user (MU) deteriorates the detection performance of cooperative spectrum sensing. An MU is an unwelcome and unauthorized user who impersonates a legal user and propagates false information about the status of the primary signal. Generally known types of MUs include always busy (AB), always free (AF), always opposite (AO), and an MU that transmits a high signal with probability $\alpha$ and a low signal with probability $1-\alpha$, which we name the $\alpha$MU. In the AB and AF types, an MU always generates either a high ($H_1$) or a low ($H_0$) signal, respectively, regardless of the actual status of the primary signal. In the case of AO, an MU always generates a signal about the status of the PU that is opposite to its local observation. The AO MU is considered the most dangerous type, especially when the decision is taken opposite to the real status of the PU (if the global decision or the real status of the PU is known).

Cooperative sensing can improve the detection and false alarm probabilities [7]; however, a high number of cooperative users, where the majority of users have low SNR, may not produce optimal performance [8] and may have a negative impact on the complexity of the network, the sensing time (latency), the control channel bandwidth, collisions in the control channel, and energy consumption. The number of SUs can be controlled by assigning reliability to them according to their sensing reports. Such reliability is based on correlation with the global decision. Users may send a deviant result due to either channel effects or malfunctioning of their sensors. Consistently deviant users are excluded from participation in the global decision, which leaves fewer but only reliable users in the network. Three different schemes are proposed in this paper to identify and remove consistently unreliable users and MUs, resulting in a less complex network consisting of fewer and more reliable nodes; this in turn reduces the computational burden on the fusion center (FC) and decreases the latency and overall energy consumption of the network.

Cooperative spectrum sensing increases the sensing performance of a CR network by using the location diversity of SUs [7]. However, the presence of even a few MUs severely degrades the performance of CSS. In [8], the authors have shown that a certain number of users (not all the users) with the highest SNR achieve optimal sensing performance. However, the authors do not consider malicious behavior of the SUs, and the decision of the fusion center is based solely on high-SNR users even if they report false data. To nullify the effect of MUs, reputation-based CSS with assistance from trusted nodes has been considered [9]. In [10], a statistical model of the PU was used in a soft reputation-based secure CSS scheme. Such an approach utilizes assistance from trusted nodes in the network. The assumption of trusted nodes is not practical due to the unavailability of such nodes in most cases. Furthermore, the significance of cooperative spectrum sensing is reduced if trusted nodes are the primary source for a result. In [11], an extended sequential CSS scheme was used in which SUs were polled to send their sensing results according to their reputation order. Uniform and fixed reputation degrees were employed for CUs in [12], while uniform reputation with no MU was used in [13]. In all of the above-cited studies, uniform reliability was assigned to users regardless of whether they produce good, normal, or bad results. Furthermore, only two types of MUs (AB and AF) were considered. None of the studies has addressed the $\alpha$-based MU and AO, the most dangerous types of MU.

In our previous work [14], the decision of disengagement of an SU and an MU, of types AO and AB, is taken by the FC based on the reliability of the SU. In this paper, we extend our work by proposing three different schemes to deal with unreliable and malicious users. We also mitigate the effect of the MU that transmits high and low PU status based on the probabilistic parameter $\alpha$.
In the first two schemes, an identification tag (IT) is used to restrict MUs, while reliabilities and unreliabilities are used to isolate unreliable users. The IT represents the reliability value of each user. It is calculated on the basis of the correlation between the result of each user and the FC and is communicated to the SUs in encrypted form. An unauthorized or malicious user would be unable to decrypt the IT. In the third scheme, the detection performance depends on the honesty of the users; dishonest users and MUs severely degrade the performance of the network. Our proposed schemes are advantageous due to their computational simplicity, which makes them more practical and easy to implement. With a lower number of users and an avoidance of complex algorithms, the proposed approaches produce results that are comparable to (in terms of detection performance) and better than (in terms of the number of users) those obtained with the Chair-Varshney scheme, and better (in all aspects, for certain types of MUs) than those attained with the conventional CSS technique.

The remainder of this paper is organized as follows. The system model is described in Section 2, and our proposed schemes are presented in Section 3. Simulation results and discussion are given in Section 4. The conclusion is presented in Section 5.
## 2. System Model
We consider a network consisting of one PU and $N$ SUs, with $M$ ($M \le N$) reliable users and $L$ malicious users such that $0 \le L \ll M$, as shown in Figure 1. The remaining $N - M - L$ users are unreliable users. Initially, $M$ is equal to $N$ (if there is no MU); however, as the CR network is trained, $M$ becomes smaller than $N$ due to the disappearance of the unreliable users but remains above a minimum threshold $N_{\min}$. The maximum number of MUs is $L_{\max}$. The number of reliable users (users with a good channel) is assumed to be larger than the number of unreliable users (users with a poor channel) and MUs. Each MU may adopt one of the malicious modes described earlier. We consider an $m$-bit error-free common control channel between the SUs and the FC.

Figure 1: Cooperative users in a CR network.

Detection of the primary signal is a binary hypothesis testing problem. The signal received by the $i$th SU is given as

$$H_0: \; x_i(n) = u(n), \quad i = 1, 2, \ldots, N, \qquad H_1: \; x_i(n) = h_i(n)s(n) + u(n), \quad n = 1, 2, \ldots, S, \tag{1}$$

where $H_0$ and $H_1$ correspond to the hypotheses that the PU signal is absent and present, respectively, $s(n)$ represents the primary signal received at the SU, $h_i(n)$ is the amplitude gain of the channel, $u(n)$ is the additive white Gaussian noise (AWGN) with zero mean and variance $\sigma_u^2$, $N$ is the number of SUs, and $S$ is the number of samples. We assume that $s(n)$ and $u(n)$ are completely independent. Without loss of generality, the variance of the noise is assumed to be the same at every sensor.

Each SU uses $S$ samples in the sensing interval to perform spectrum sensing using the energy detection technique [15]. The local observation of the $i$th user is given by

$$y_i = \sum_{n=1}^{S} |x_i(n)|^2, \tag{2}$$

where $S$ is the number of samples and is equal to $2TW$, with $T$ and $W$ the sensing time and bandwidth, respectively. When $S$ is relatively large (e.g., $S > 200$), $y_i$ can be well approximated as a Gaussian random variable under both hypotheses $H_0$ and $H_1$, with means $\mu_0, \mu_1$ and variances $\sigma_0^2, \sigma_1^2$, respectively, as follows [16]:

$$H_0: \; \mu_0 = S\sigma_u^2, \quad \sigma_0^2 = 2S\sigma_u^4, \qquad H_1: \; \mu_1 = S(\gamma_i + 1)\sigma_u^2, \quad \sigma_1^2 = 2S(2\gamma_i + 1)\sigma_u^4, \tag{3}$$

where $\gamma_i$ is the signal-to-noise ratio (SNR) of the primary signal at the $i$th SU.

In each time slot, the FC broadcasts a request to all SUs to perform local sensing. After the sensing period, each SU reports its observation to the FC in the reporting period. The FC combines the received local observations and makes a global decision. We assume that the global decision taken by the FC is correct all of the time. The FC also computes the reliability of each user based on the compliance of an SU's local observation with the global result. Finally, the global decision, along with the respective reliability in encrypted form as an identification tag, is communicated to each user.

Authentication is an integral component of security protocols [17–19]. A three-stage security protocol consisting of prevention, detection, and cure is proposed in [17]. The prevention stage includes authentication and authorization; the participating users and their data are authenticated in the authentication stage, while recognition of the users is performed in the authorization stage. In [18], the authors proposed a remote smart-card-based authentication scheme in which an additional security stage called registration is introduced, where details of users, along with specific details given by the server, are stored. In [19], a lightweight authentication scheme is used to guarantee security and privacy in global mobility networks. In [20], basic and extended features are used to detect malicious activity by applying adaptive support vector machines. In [21], cryptographic techniques such as blind signatures and electronic coins are used to achieve mobility, reliability, anonymity, and flexibility in a mobile wireless network. In this paper, we use an encrypted identification tag for the authentication of users and a reliability test for the detection of unreliable and malicious users. The identification tag is assigned to users based on their reported observations.
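As a quick illustration of (2) and (3), the following sketch computes a local energy observation and the Gaussian moments it is approximated by. The sample count, noise power, and SNR are arbitrary illustrative values, and the primary signal is modeled as a constant-envelope waveform so that its power equals $\gamma_i \sigma_u^2$.

```python
import numpy as np

def local_energy(x):
    """Local observation of eq. (2): y_i = sum over S samples of |x_i(n)|^2."""
    return float(np.sum(np.abs(x) ** 2))

rng = np.random.default_rng(1)
S, sigma_u2, gamma_i = 500, 1.0, 0.1            # samples, noise variance, SNR
s = np.full(S, np.sqrt(gamma_i * sigma_u2))     # primary signal with power gamma_i * sigma_u2
u = np.sqrt(sigma_u2) * rng.standard_normal(S)  # zero-mean AWGN

y_H1 = local_energy(s + u)                      # observation under H1
y_H0 = local_energy(u)                          # observation under H0

mu0, var0 = S * sigma_u2, 2 * S * sigma_u2**2             # eq. (3), H0 moments
mu1 = S * (gamma_i + 1) * sigma_u2                        # eq. (3), H1 moments
var1 = 2 * S * (2 * gamma_i + 1) * sigma_u2**2
print(f"H0: y = {y_H0:.1f} (mean {mu0}, std {var0 ** 0.5:.1f})")
print(f"H1: y = {y_H1:.1f} (mean {mu1}, std {var1 ** 0.5:.1f})")
```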
## 3. Secure Reliability-Based CSS
In conventional CSS, each SU performs local sensing and forwards either its quantized local observation $y_i$ ((2), in the case of a soft decision) or its local decision $H_1$ or $H_0$ ((4), in the case of a hard decision) to the FC through a dedicated control channel. Here,

$$y_i \underset{H_0}{\overset{H_1}{\gtrless}} \lambda_i, \quad i = 1, 2, \ldots, N, \tag{4}$$

where $\lambda_i$ is the local energy threshold at the $i$th SU. The detection performance of the CR network is measured by the probability of detection $P_d$, which is a measure of the interference to the PU, and the probability of false alarm $P_f$, which sets the upper bound on spectrum utilization. A higher value of $P_d$ will protect the quality of service (QoS) of the PU, and a lower value of $P_f$ will result in higher spectrum utilization. The detection and false alarm probabilities of the $i$th user are given, respectively, as

$$P_{d,i} = P(y_i > \lambda_i \mid H_1) = Q\!\left(\frac{\lambda_i - S(\gamma_i + 1)\sigma_u^2}{\sigma_u^2\sqrt{2S(2\gamma_i + 1)}}\right), \qquad P_{f,i} = P(y_i > \lambda_i \mid H_0) = Q\!\left(\frac{\lambda_i - S\sigma_u^2}{\sigma_u^2\sqrt{2S}}\right), \tag{5}$$

where $Q(\cdot)$ is the monotonically decreasing function defined as $Q(x) = (1/\sqrt{2\pi})\int_x^{\infty} \exp(-t^2/2)\,dt$. Sensing results from several SUs are combined at the FC as a weighted sum:

$$Z = \sum_{i=1}^{M} w_{k-1}^{i} \, y_i, \tag{6}$$

where $w_{k-1}^{i}$ is the weighting coefficient, or reliability, of the $i$th SU in the previous slot, computed as in Section 3.1.1; it is used to highlight or suppress the result of a certain SU based on its detection performance. Finally, the status of the primary signal is determined as

$$Z < \lambda: \; H_0, \qquad Z \ge \lambda: \; H_1, \tag{7}$$

where $\lambda$ is the global threshold. The global detection and false alarm probabilities are expressed, respectively, as

$$P_D = P(Z > \lambda \mid H_1) = Q\!\left(\frac{\lambda - S\sum_{i=1}^{M} w_i(\gamma_i + 1)\sigma_u^2}{\sqrt{2S\sum_{i=1}^{M} w_i^2(2\gamma_i + 1)\sigma_u^4}}\right), \qquad P_F = P(Z > \lambda \mid H_0) = Q\!\left(\frac{\lambda - S\sum_{i=1}^{M} w_i\sigma_u^2}{\sqrt{2S\sum_{i=1}^{M} w_i^2\sigma_u^4}}\right). \tag{8}$$
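The single-user operating point of (5) can be computed directly from the Q-function. The sketch below uses the standard identity $Q(x) = \tfrac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$; the threshold targeting a 10% local false alarm is an illustrative choice, not a value from the paper.

```python
import math

def Q(x):
    """Gaussian tail: Q(x) = (1/sqrt(2*pi)) * integral_x^inf exp(-t^2/2) dt."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pd_pf_local(lam, S, gamma_i, sigma_u2=1.0):
    """Local detection and false-alarm probabilities of eq. (5)."""
    pd = Q((lam - S * (gamma_i + 1) * sigma_u2)
           / (sigma_u2 * math.sqrt(2 * S * (2 * gamma_i + 1))))
    pf = Q((lam - S * sigma_u2) / (sigma_u2 * math.sqrt(2 * S)))
    return pd, pf

S, gamma_i = 500, 0.1
lam = S + math.sqrt(2 * S) * 1.2816   # Q(1.2816) ~ 0.10: a 10% false-alarm threshold
pd, pf = pd_pf_local(lam, S, gamma_i)
print(f"P_d,i = {pd:.3f}, P_f,i = {pf:.3f}")
```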
### 3.1. Our Proposed Schemes
It is assumed that the FC maintains $M$ queues, collectively called the reliability queue and represented by $Q$, as shown in Figure 2. The size of each queue is $K$, which denotes the history of reliability maintained for each of the reliable SUs. The value of $K$ reflects a trade-off between sensing accuracy and speed. The FC receives sensing results from all SUs with equal initial reliability, which is updated in scheme I based on the distance between the local sensing result and the global decision. An SU that produces a result more congruent with the global decision is assigned a higher reliability, and vice versa. In contrast, schemes II and III use unreliability, instead of reliability, to evaluate an SU for participation in the global decision. An MU is detected and isolated from the network in schemes I and II because the FC takes the decision through a two-tier checking process. In scheme III, however, the decision of disengagement from the network is taken by the SUs themselves (not the FC), and thus an MU cannot be detected; in this case, the fusion center relies upon the rectitude of the SUs.

Figure 2: Reliability queue at the fusion center.
#### 3.1.1. Proposed Scheme I
Rather than using complex calculations to compute the reliability of SUs, a simple method is proposed in this study. Each SU performs local sensing in the sensing period and forwards its observation to the FC in the reporting period. The FC accepts the data received from the SUs with equal initial reliability and takes a global decision using a data fusion (soft decision) technique. The initial reliability (weight) can be assigned to each SU as discussed in [10], but for simplicity, in this work we assign equal initial reliability to the SUs, which makes the initial weighting coefficient in (6) equal for each SU's report. The channel condition between the PU and an SU is then quantified into a reliability, which is measured on the basis of how much the SU supports or deviates from the global result. Based on the reliabilities in the previous slot and the reports from the users in the current slot, the FC takes the global decision. In (6), the weights of all the SUs are taken into account for the global decision. However, to calculate or update the weight of the $i$th SU, the local observations of all SUs except the $i$th SU should be considered, in order to minimize the bias of the $i$th SU in the weight assignment [22]. In [22, 23], the authors update the weight coefficients using the Chair-Varshney technique. However, in practical scenarios, the detection and false alarm probabilities are not known a priori; further, these works do not handle malicious users. In this work, we propose updating the weights based on the reported observations of the SUs. The global decision, excluding the $i$th SU, can be computed as

$$Z_i = \sum_{j=1}^{M} w_{k-1}^{j} y_j - w_{k-1}^{i} y_i = \sum_{j=1, j \neq i}^{M} w_{k-1}^{j} y_j. \tag{9}$$

The set of all energies reported by the SUs is represented by $Y$ as

$$Y = \{y_1, y_2, \ldots, y_M\}. \tag{10}$$

To update the weight of the $i$th SU, $M - 1$ users are considered by excluding the $i$th SU as follows:

$$Y_i^{o} \subset Y = \{y_l : l = 1, 2, \ldots, M, \; l \neq i\}, \tag{11}$$

where $Y_i^{o}$ is the set of energies of all SUs except the $i$th SU. $Y_i^{o}$ is sorted into the ordered set $Y_J$ (ascending or descending order depending on the global decision $H_1$ or $H_0$, respectively, based on the weights of the SUs in the previous slot) as follows:

$$Y_J = \begin{cases} Y_{(1)} < Y_{(2)} < \cdots < Y_{(M-1)}, & H_1, \\ Y_{(M-1)} < Y_{(M-2)} < \cdots < Y_{(1)}, & H_0, \end{cases} \tag{12}$$

where, in the case of $H_1$, $Y_{(1)}$ and $Y_{(M-1)}$ are $\min(Y_i^{o})$ and $\max(Y_i^{o})$, respectively, whereas, in the case of $H_0$, $Y_{(1)}$ and $Y_{(M-1)}$ are $\max(Y_i^{o})$ and $\min(Y_i^{o})$, respectively. In addition to minimizing the effect of SUs with either a faulty sensor or a continuously weak channel due to deep fading, the ascending order suppresses the effect of the AF and AO types of MUs, whereas the descending order suppresses the effect of the AB and AO types of MUs, by assigning them low reliability. The $M - 1$ SUs in the set $Y_i^{o}$ are assigned normalized reliabilities according to the following two equations:

$$r_{il}^{o} = \arg_{J \in (M-1)} \left( Y_{(J)} = y_l \mid y_l \in Y_i^{o}, \; l \neq i \right), \qquad R_{li} = \begin{cases} r_{il}^{o} \times \dfrac{2}{M(M-1)}, & l \neq i, \\[4pt] 0, & l = i. \end{cases} \tag{13}$$

$R_{li}$ is an $M \times M$ matrix whose diagonal elements are zeros, reflecting the exclusion of the $i$th SU in the assignment of weights. Each row of the matrix shows the reliability given to the $i$th SU when the other SUs are excluded one at a time. Finally, the normalized weight of the $i$th SU is computed by adding the elements of the $i$th row of the matrix (all weights assigned to the $i$th SU by the others, i.e., the numerator in (14)) and dividing by the sum over all rows (the denominator in (14)):

$$w_i = R_i = \frac{\sum_{l=1, l \neq i}^{M} R_{il}}{\sum_{n=1}^{M} \sum_{l=1, l \neq n}^{M} R_{nl}}. \tag{14}$$
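A direct transcription of the rank-based update (11)-(14) is sketched below, under the convention stated above that rows of $R$ index the SU receiving weight and columns the excluded SU; the function and variable names are invented for this example.

```python
import numpy as np

def update_weights(y, global_decision):
    """Rank-based reliability update of eqs. (11)-(14).
    y: reported energies of the M SUs; global_decision: 1 for H1, 0 for H0."""
    M = len(y)
    R = np.zeros((M, M))   # R[l, i]: reliability given to SU l when SU i is excluded
    for i in range(M):
        others = [l for l in range(M) if l != i]
        # eq. (12): under H1 rank ascending (high energy -> high rank),
        # under H0 rank descending (low energy -> high rank)
        order = sorted(others, key=lambda l: y[l], reverse=(global_decision == 0))
        for rank, l in enumerate(order, start=1):    # eq. (13), ranks 1 .. M-1
            R[l, i] = rank * 2.0 / (M * (M - 1))
    return R.sum(axis=1) / R.sum()                   # eq. (14), weights sum to 1

w = update_weights(y=[4.2, 5.1, 3.9, 8.0], global_decision=1)
```

Under $H_1$ the largest reported energy receives the largest weight, which is exactly how the ordering suppresses AF- and AO-type reports.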
The reliability of a user is stored in the database (reliability queue) at the FC and is also communicated to the user in encrypted form as its identification tag (IT), along with the global decision, for future use:

$$Q(i, k) = R_i, \qquad IT_k^i = R_i, \tag{15}$$

where $Q(i, k)$ denotes the $k$th slot of the $i$th queue and $IT_k^i$ is the IT assigned to the $i$th user in the current slot. We assume that only legal SUs know the decryption key, which is updated and exchanged periodically between the FC and the legal SUs, enabling them to successfully decrypt the IT. In the next time slot, each SU transmits its local sensing result along with the previously decrypted reliability (IT). The FC first applies the MU screening test by checking the SU's reported IT against the reliability stored in the corresponding slot for that user in its own database, $Q(i, k-1)$. If a mismatch is found, the FC declares the user an MU; the current input (sensing result) from that SU is discarded, and no future reports are accepted from it:

$$SU_i = \text{MU}, \quad \text{if } IT_{k-1}^{i} \neq Q(i, k-1), \tag{16}$$

where $IT_{k-1}^{i}$ is the IT reported by the $i$th user.

If an MU is smart enough to deceive the FC by clearing the MU screening test, which is possible only if the MU produces exactly the same reliability as is assigned to a legal user in the previous slot, then the FC performs a reliability test to detect MUs and consistently unreliable SUs. The reliability test is comparatively slower because data from the past few slots must be gathered in order to identify the behavior and evaluate the credibility of the user. The purpose of the reliability test is to detect consistently unreliable sensors so that their results can be ignored. In going against the global decision, an MU will also be among the most consistent producers of unreliable results and will thus be stopped after a few slots.

The consistently unreliable SUs are identified through the cumulative reliability, computed by adding the reliabilities stored over the previous $K$ slots:
$$R_i^{\text{cum}} = \sum_{j=1}^{K} R_j, \tag{17}$$

where $j$ is the slot index. The SUs with a cumulative reliability smaller than a predetermined reliability threshold $\lambda_R$ are discarded:

$$r_i = \begin{cases} 1, & R_i^{\text{cum}} < \lambda_R, \\ 0, & R_i^{\text{cum}} \ge \lambda_R, \end{cases} \tag{18}$$

$$r = \sum_{i}^{N} r_i, \tag{19}$$

where $r$ is the number of users with unacceptable reliabilities, which includes both unreliable and malicious users. Finally, only the remaining users, $M = N - r$, are considered by the fusion center when making a global decision. The final decision depends on the global threshold and the weighting coefficients (reliabilities).
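The reliability test of (17)-(19) then reduces to a row sum and a threshold comparison over the FC's reliability queue. The sketch below assumes the queue is held as an $N \times K$ array, which is an implementation choice, not something the paper prescribes.

```python
import numpy as np

def reliability_test(queue, lam_R):
    """Eqs. (17)-(19): flag SUs whose cumulative reliability over the last K
    slots falls below the threshold lam_R. `queue` has shape (N, K)."""
    R_cum = queue.sum(axis=1)       # eq. (17): cumulative reliability per SU
    flagged = R_cum < lam_R         # eq. (18): r_i = 1 for unacceptable SUs
    r = int(flagged.sum())          # eq. (19): number of discarded users
    keep = ~flagged                 # the M = N - r users kept for fusion
    return keep, r
```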
#### 3.1.2. Proposed Scheme II
In this scheme, the computations are further simplified. Instead of computing the reliability for each user from previous results, a reliability (renewed in every time slot) is randomly assigned to each user by the FC. The random reliability (RR) is used as the IT of the SU and is also stored in the database of the FC for future decisions:

$$IT_k^i = Q(i, k) = RR_i, \tag{20}$$

where $RR_i$ is the random reliability assigned to the $i$th SU and is stored at $Q(i, k)$. The global decision and the respective IT values are communicated to the SUs at the end of each time slot.

Since the soft fusion rule is used for the global decision in this scheme, all SUs report their current local observations, along with the previously assigned IT (in decrypted form), to the fusion center, where they are combined with equal weights and a global decision is made about the status of the primary signal. If the IT sent by an SU does not match the most recently (previous slot) stored IT in the reliability queue at the FC, that SU is deemed malicious. On the other hand, if a match is found, then the unreliability of the SU is computed. If the local observation does not match the global decision, the reliability of that particular SU is decreased; in other words, the unreliability $U_i$ of that SU is increased:

$$U_i = U_i + (Z \oplus y_i), \tag{21}$$

where $Z$ and $y_i$ are the 1-bit global and local decisions, respectively, and $\oplus$ is the exclusive-OR operation, which produces 1 when the local and global decisions differ and 0 otherwise. For the computation of the unreliability, the 1-bit global and local decisions are considered by the FC, whereas the soft fusion rule is used for the global decision. The 1-bit local decision of each user is computed by the fusion center from the reported observation of the respective user. We assume the same threshold for all SUs when obtaining the 1-bit local decisions at the FC.

If the MU screening test fails (i.e., the MU produces exactly the same IT as that stored in the queue), then the MU is detected by the reliability test, because an MU reports results that deviate from the actual status of the primary signal (the global decision) more frequently than a user in fading or shadowing, and every deviant report increases its unreliability. An SU (whether an MU or a normal SU producing consistently wrong results due to channel conditions or sensor malfunctioning) is stopped from sending reports to the FC when its unreliability reaches a predefined threshold. Only the remaining users, which are reliable in terms of generating accurate results, contribute to determining the PU status. The dropped SU, represented by $SU_D$, is not involved in future global decisions and is determined by

$$SU_D = \arg\max_{i=1,2,\ldots,M} (U_i) \quad \text{if } U_{\text{thr}} \le \max_{i=1,2,\ldots,M} (U_i). \tag{22}$$

In this scheme, the decision of dropping an unreliable SU or MU is taken by the FC.
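Scheme II's bookkeeping is essentially a one-line XOR update per slot, as (21) and (22) suggest. The following sketch assumes the 1-bit decisions are stored as integer arrays; the names are illustrative.

```python
import numpy as np

def update_unreliability(U, local_bits, global_bit, U_thr):
    """Eq. (21): U_i += Z xor y_i on the 1-bit decisions; eq. (22): identify the
    SU with the largest unreliability once it reaches the threshold U_thr."""
    U = U + (local_bits ^ global_bit)   # XOR is 1 exactly on a mismatch
    dropped = None
    if U.max() >= U_thr:
        dropped = int(np.argmax(U))     # SU_D of eq. (22)
    return U, dropped

U = np.zeros(5, dtype=int)
U, dropped = update_unreliability(U, np.array([1, 0, 1, 1, 0]), global_bit=1, U_thr=3)
```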
#### 3.1.3. Proposed Scheme III
The unreliability in this scheme is computed by every SU individually by comparing the local and global decisions. To be consistent with the previous schemes, we use the soft decision approach here; however, a hard decision rule would be more fitting for this scenario. Three types of users are considered: honest, dishonest, and malicious. Honest users are those who stop reporting when their unreliability exceeds a certain value. In the case of honest users, over time only reliable users, fewer than the total number of users, contribute to the detection of the primary signal. Dishonest users continue reporting their untrusted observations even if their unreliability exceeds the threshold. Users with malicious behavior continuously send false data irrespective of the real status of the primary signal and thus severely degrade the detection performance of the network. Dishonest users and MUs try to falsify results to suit their own selfish interests. As the decision of disengagement from the network is taken at the user level, this approach has no mechanism for dealing with MUs; only consistently unreliable users (those with a malfunctioning sensor or in deep fades) are restricted. The FC relies on the honesty of the SUs and accepts reports from all users. In an environment composed entirely of honest users, consistently unreliable users disengage themselves from reporting when their unreliability reaches a certain limit.

In this approach, each SU performs local sensing, sends its observation to the FC, and waits for the global decision. If the received global decision differs from the local decision, the SU increments its unreliability in the manner of (21). An SU remains in the network as long as its unreliability does not exceed a certain threshold, as in (22). In contrast to schemes I and II, no IT or other reliability calculations are used in scheme III, which makes the approach simple and fast. At any given time, there are $M$ reliable nodes in the CR network. In the case of all honest users, $M$ is normally smaller than $N$ because unreliable SUs leave the network, thereby keeping the calculations simple and the CR network manageable. If both malicious and honest users are present, the number of users will be less than $N$ but greater than $M$. In the case of all dishonest users, the number of users remains fixed and equal to the total number of users, $N$. Computational simplicity is the main advantage of this strategy; its disadvantages include the lack of control over MUs and unreliable users, as the decision is taken at the SU, and an increased number of users when they are dishonest.
## 3.1. Our Proposed Schemes
It is assumed that FC maintainsM queues which are collectively called the reliability queue and is represented by Q, as shown in Figure 2. The size of each queue is K which denotes the previous history of reliability maintained for each of the reliable SUs. The value of K reflects a trade-off between the sensing accuracy and speed. FC receives sensing results from all SUs with equal initial reliability which is updated based on the distance between the local sensing result and the global decision in scheme I. The SU that produces more congruent result with respect to the global decision is assigned a higher reliability and vice versa. In contrast, schemes II and III use unreliability, instead of reliability, to evaluate an SU for participation in the global decision. An MU is detected and isolated from the network in schemes I and II because FC takes the decision by a two-tier checking process. However, the decision of disengagement from the network is taken by SUs (not FC) themselves and thus, an MU cannot be detected in scheme III. In this case, the fusion center relies upon the rectitude of the SUs.Figure 2
Reliability queue at the fusion center.
### 3.1.1. Proposed Scheme I
Rather than using complex calculations to compute the reliability of SUs, a simple method is proposed in this study. Each SU performs local sensing in the sensing period and forwards its observation to the FC in the reporting period. FC accepts the receiving data from the SUs with equal initial reliability and takes a global decision using data fusion (soft decision) technique. The initial reliability (weight) can be assigned to each SU as discussed in [10] but for simplicity, in this work, we assign equal initial reliability to the SUs that makes the initial weighting coefficient equal in (6) for each SU’s report. The channel condition between PU and SU is then quantified into reliability which is measured on the basis of how much the SU supports or deviates from the global result. Based on the reliabilities in the previous slot and reports from the users in the current slot, FC takes the global decision. In (6) weights of all the SUs are taken into account for the global decision. However, to calculate/update weight of the ith SU, local observations of all SUs except the ith SU should be considered in order to minimize bias of the ith SU in weights assignment [22]. In [22, 23] the authors update the weight coefficients using the Chair-Varshney technique. However, in practical scenarios the detection and false alarm probabilities are not known a priori. Further, they do not handle the malicious users. In this work, we propose the update of weights based on the reported observations of the SUs. The global decision, excluding the ith SU, can be computed as below:
(9)
$$Z_i = \sum_{j=1}^{M} w_{k-1}^{j}\, y_j - w_{k-1}^{i}\, y_i = \sum_{j=1,\, j \neq i}^{M} w_{k-1}^{j}\, y_j.$$
The set of all energies reported by the SUs is represented by Y as
(10)
$$Y = \{\, y_1, y_2, \ldots, y_M \,\}.$$
To update the weight of the ith SU, the remaining $M-1$ users are considered by excluding the ith SU as follows:

(11)
$$Y_i^{o} = \{\, y_l : l = 1,2,\ldots,M,\ l \neq i \,\} \subset Y,$$
where $Y_i^{o}$ is the set of energies of all SUs except the ith SU. $Y_i^{o}$ is sorted into the ordered set $Y_J$ (in ascending or descending order depending on the global decision $H_1$ or $H_0$, respectively, taken with the weights of the SUs in the previous slot) as follows:
(12)
$$Y_J = \begin{cases} Y_{(1)} < Y_{(2)} < \cdots < Y_{(M-1)}, & H_1 \\[2pt] Y_{(M-1)} < Y_{(M-2)} < \cdots < Y_{(1)}, & H_0, \end{cases}$$
where, in the case of $H_1$, $Y_{(1)}$ and $Y_{(M-1)}$ are $\min(Y_i^{o})$ and $\max(Y_i^{o})$, respectively, whereas, in the case of $H_0$, $Y_{(1)}$ and $Y_{(M-1)}$ are $\max(Y_i^{o})$ and $\min(Y_i^{o})$, respectively. In addition to minimizing the effect of SUs with either a faulty sensor or a continuously weak channel due to deep fading, the ascending order suppresses the effect of the AF and AO types of MUs, whereas the descending order suppresses the effect of the AB and AO types of MUs, by assigning low reliability to them. The $M-1$ SUs in the set $Y_i^{o}$ are assigned normalized reliabilities according to the following two equations:
(13)
$$r_{il}^{o} = \underset{J \in \{1,\ldots,M-1\}}{\arg} \left( Y_{(J)} = y_l \mid y_l \in Y_i^{o},\ l \neq i \right),$$
$$R_{li} = \begin{cases} r_{il}^{o} \times \dfrac{2}{M(M-1)}, & l \neq i \\[4pt] 0, & l = i. \end{cases}$$
$R_{li}$ is an $M \times M$ matrix whose diagonal elements are zeros, reflecting the exclusion of the ith SU from its own weight assignment. Each row of the matrix shows the reliabilities given to the ith SU when the other SUs are excluded one at a time. The factor $2/M(M-1)$ normalizes the rank values $1,\ldots,M-1$, whose sum is $M(M-1)/2$, so that the reliabilities distributed in each exclusion round sum to one. Finally, the normalized weight of the ith SU is computed by adding the elements of the ith row of the matrix (all weights assigned to the ith SU by the others, i.e., the numerator in (14)) and dividing by the sum over all rows (the denominator in (14)):
(14)
$$w_i = R_i = \frac{\sum_{l=1,\, l \neq i}^{M} R_{il}}{\sum_{n=1}^{M} \sum_{l=1,\, l \neq n}^{M} R_{nl}}.$$

The reliability of a user is stored in the database (reliability queue) at the FC and is also communicated to the user in encrypted form as its identification tag (IT), for future use, along with the global decision:

(15)
$$Q(i,k) = R_i, \qquad IT_k^{i} = R_i,$$
where $Q(i,k)$ denotes the kth slot of the ith queue and $IT_k^{i}$ is the IT assigned to the ith user in the current slot. We assume that only legal SUs know the decryption key, which is updated and exchanged periodically between the FC and the legal SUs, enabling them to successfully decrypt the IT. In the next time slot, each SU transmits its local sensing result along with the previously decrypted reliability (IT). The FC first applies the MU screening test by checking the SU's reported IT against the reliability stored in the corresponding slot for that user in its own database, $Q(i, k-1)$. If a mismatch is found, the FC declares the user an MU; the current input (sensing result) from that SU is discarded, and no future reports are accepted from it:
(16)
$$SU_i = \mathrm{MU}, \quad \text{if } IT_{k-1}^{i} \neq Q(i, k-1),$$
where $IT_{k-1}^{i}$ is the IT reported by the ith user. If an MU is smart enough to deceive the FC by clearing the MU screening test, which is possible only if the MU produces exactly the same reliability as was assigned to a legal user in the previous slot, then the FC performs a reliability test to detect MUs and consistently unreliable SUs. The reliability test is comparatively slower because data from the past few slots must be gathered in order to identify the behavior and evaluate the credibility of the user. The purpose of the reliability test is to detect consistently unreliable sensors so that their results can be ignored. By going against the global decision, an MU will also be among the most consistent producers of unreliable results and will thus be stopped after a few slots. The consistently unreliable SUs are identified through the cumulative reliability, computed by adding the previously stored K slots of reliabilities as
(17)
$$R_i^{\mathrm{cum}} = \sum_{j=1}^{K} R_j,$$
where j is the index over slots. The SUs with a cumulative reliability smaller than a predetermined reliability threshold, $\lambda_R$, are discarded:
(18)
$$r_i = \begin{cases} 1, & R_i^{\mathrm{cum}} < \lambda_R \\[2pt] 0, & R_i^{\mathrm{cum}} \geq \lambda_R, \end{cases}$$

(19)
$$r = \sum_{i=1}^{N} r_i,$$
where r is the number of users with unacceptable reliabilities, including both unreliable and malicious users. Finally, only the remaining $M = N - r$ users are considered by the fusion center when making a global decision. The final decision depends on the global threshold and the weighting coefficients (reliabilities).
#### 3.1.2. Proposed Scheme II
In this scheme, the computations are further simplified. Instead of computing the reliability of each user from its previous results, a reliability (renewed in every time slot) is randomly assigned to each user by the FC. This random reliability (RR) is used as the IT for the SU and is also stored in the database of the FC for future decisions as

(20)
$$IT_k^{i} = Q(i,k) = RR_i,$$
where $RR_i$ is the random reliability assigned to the ith SU and stored at $Q(i,k)$. The global decision and the respective IT values are communicated to the SUs at the end of each time slot. Since a soft fusion rule is used for the global decision in this scheme, all SUs report their current local observations along with the previously assigned IT (in decrypted form) to the fusion center, where they are combined with equal weights and a global decision is made about the status of the primary signal. If the IT sent by an SU does not match the most recently (previous slot) stored IT in the reliability queue at the FC, that SU is deemed malicious. On the other hand, if a match is found, then the unreliability of the SU is computed. If the local observation does not match the global decision, the reliability of that particular SU is decreased; in other words, the unreliability, $U_i$, of that SU is increased:
(21)
$$U_i = U_i + (Z \oplus y_i),$$
where $Z$ and $y_i$ are the 1-bit global and local decisions, respectively, and $\oplus$ is the exclusive-OR operation, which produces 1 when the local and global decisions differ and 0 otherwise. For the computation of the unreliability, 1-bit global and local decisions are considered by the FC, whereas the soft fusion rule is used for the global decision. The 1-bit local decision of each user is computed by the fusion center from the reported observation of the respective user; we assume the same threshold for all SUs when obtaining the 1-bit local decisions at the FC. If the MU screening test fails to catch an MU (i.e., the MU produces exactly the same IT as that stored in the queue), then the MU is detected by the reliability test, because an MU produces results that frequently deviate from the actual status of the primary signal (the global decision). Every time the MU reports a deviant result, its unreliability increases, and this occurs more frequently than for a user in fading or shadowing. An SU (whether an MU or a normal SU producing consistently wrong results due to channel conditions or sensor malfunctioning) is stopped from sending reports to the FC when its unreliability reaches a predefined threshold. Only the remaining users, those that are reliable in terms of generating accurate results, contribute to determining the PU status. The dropped SU, represented by $SU_D$, is not involved in future global decisions and is determined by the following equation:
(22)
$$SU_D = \arg\max_{i=1,2,\ldots,M} (U_i), \quad \text{if } U_{\mathrm{thr}} \leq \max_{i=1,2,\ldots,M} (U_i).$$

In this scheme, the decision to drop an unreliable SU or an MU is taken by the FC.
#### 3.1.3. Proposed Scheme III
In this scheme, the unreliability is computed by every SU individually by comparing its local decision with the global decision. To be consistent with the previous schemes, we use the soft decision approach here, although a hard decision rule would be more fitting for this scenario. Three types of users are considered: honest, dishonest, and malicious. Honest users are those who stop reporting when their unreliability exceeds a certain value; with honest users, over time only the reliable users, fewer than the total number of users, contribute to the detection of the primary signal. Dishonest users continue reporting their untrusted observations even if their unreliability exceeds the threshold. Users with malicious behavior continuously send false data irrespective of the real status of the primary signal and thus severely degrade the detection performance of the network. Dishonest users and MUs try to falsify results to suit their own selfish interests. As the decision of disengagement from the network is taken at the user level, this approach has no means of dealing with MUs; only consistently unreliable users (those with a malfunctioning sensor or in deep fades) are restricted. The FC relies on the honesty of the SUs and accepts reports from all users. In an environment composed entirely of honest users, consistently unreliable users disengage themselves from reporting when their unreliability reaches a certain limit.

In this approach, each SU performs local sensing, sends its observation to the FC, and waits for the global decision. If the received global decision differs from its local decision, the SU increments its own unreliability, as in (21), and remains in the network as long as its unreliability does not exceed a threshold, as in (22). In contrast to schemes I and II, no IT or other reliability calculations are used in scheme III, which makes the approach simple and fast. At any given time, there will be M reliable nodes in the CR network. In the case of all honest users, M is normally smaller than N because unreliable SUs leave the network, thereby keeping the calculations simple and the CR network manageable. If both malicious and honest users are present, the number of users will be less than N but greater than M. In the case of all dishonest users, the number of users remains fixed and equal to the total number of users, N. Computational simplicity is the main advantage of this strategy; its disadvantages include the lack of control over MUs and unreliable users, since the decision is taken at the SU, which results in an increased number of users when they are dishonest.
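Since the disengagement decision in scheme III lives at the SU, a minimal SU-side sketch is natural; the class below is an illustrative assumption, not the authors' code.

```python
class SecondaryUser:
    """An honest SU in scheme III: it tracks its own unreliability and
    silently disengages once the threshold is reached."""

    def __init__(self, u_thr):
        self.u = 0              # own unreliability counter
        self.u_thr = u_thr
        self.active = True      # False once the SU stops reporting

    def on_global_decision(self, local_decision, global_decision):
        # Increment unreliability when the 1-bit decisions differ (XOR).
        self.u += local_decision ^ global_decision
        if self.u >= self.u_thr:
            self.active = False  # an honest SU stops; a dishonest one would not
```

A dishonest user simply ignores `active`, and an MU never updates its counter honestly, which is why the FC has no handle on either in this scheme.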
## 4. Simulation Results
In this section, we use simulations to compare our proposed strategies with the Chair-Varshney and conventional cooperative spectrum sensing schemes; schemes I and II are considered. For scheme III, the effect of dishonest users and MUs is compared with that of honest users. The effects of the always opposite MU, the always busy MU, the always free MU, and the αMU that reports $H_1$ with probability $\alpha$ and $H_0$ with probability $1-\alpha$ are illustrated in the simulations. We evaluate the detection performance of our proposed schemes by plotting receiver operating characteristic (ROC) curves. The simulation parameters are summarized in Table 1.
Table 1: System parameters.

| Description | Symbol | Value |
|---|---|---|
| Number of iterations | $l$ | 5000 |
| Number of SUs | $N$ | 15 |
| PU busy probability | $P(H_1)$ | 0.5 |
| Sensing duration | $T_s$ | 1 ms |
| Sampling frequency | $f_s$ | 300 kHz |
| Number of samples | $S$ | 600 |
| Signal-to-noise ratio | $\gamma$ | [−25 dB, −10 dB] with 1 dB decrement |
| Maximum number of MUs | $L_{\max}$ | 3 |
| Minimum number of reliable SUs | $M_{\min}$ | 5 |
| Size of queue | $Q$ | 15 × 50 |
| Depth of each user queue (each row in $Q$) | $K$ | 50 |
| Unreliability threshold | $U_{\mathrm{thr}}$ | 10 |
| Probability of $H_1$ transmission by αMU | $\alpha$ | [0.2, 0.5, 0.8] |
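For reproducibility, the following is a hedged transcription of Table 1 into a Python configuration dictionary; the key names are our own, not from the authors' code, and the sample count follows $S = 2 T_s W$.

```python
# Simulation parameters of Table 1 as a config dict (key names are ours).
SIM_PARAMS = {
    "iterations": 5000,               # l
    "num_sus": 15,                    # N
    "p_busy": 0.5,                    # P(H1)
    "sensing_time_s": 1e-3,           # T_s
    "sampling_freq_hz": 300e3,        # f_s
    "num_samples": 600,               # S
    "snr_db": list(range(-25, -9)),   # -25 dB to -10 dB in 1 dB steps
    "max_mus": 3,                     # L_max
    "min_reliable_sus": 5,            # M_min
    "queue_depth": 50,                # K (each of the 15 rows of Q)
    "unreliability_thr": 10,          # U_thr
    "alpha": [0.2, 0.5, 0.8],         # P(H1 report) of the alpha-MU
}
```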
### 4.1. Results of Proposed Scheme I
Figure 3 shows the detection performance of our proposed scheme I, the Chair-Varshney (CV) rule, and the conventional CSS scheme under the effect of zero, one, and two MUs of the AO type. It is clear from the figure that as the number of AO MUs increases, the detection performance of all schemes decreases. The detection performance of the Chair-Varshney rule drops quickly when the number of MUs increases to two; Chair-Varshney is the optimum rule, yet the detection performance of our proposed scheme matches the CV rule for two MUs. Conventional CSS is the most severely affected by the AO MUs. When there is no MU, our proposed and the conventional CSS schemes show almost similar results, but our proposed scheme has the advantage of utilizing a smaller number of users. When malicious users are introduced (i.e., one or two MUs), our proposed scheme exhibits more robustness and efficiency than the conventional CSS. It is also evident from the figure that Chair-Varshney is the optimal detection scheme and provides an upper bound for the other schemes when there is no MU; however, it has the disadvantage that all users, including consistently unreliable and malicious ones, are considered. Our proposed scheme has the advantage of using fewer users for the global decision, as shown in Figure 9.

Figure 3: Performance comparison of the Chair-Varshney, conventional CSS, and proposed scheme I with always opposite (AO) MUs.

Figure 4 shows the detection performance of the proposed scheme I when there is one AB, AF, or αMU. An αMU is an MU that transmits a high signal ($H_1$), behaving like AB, with probability $\alpha$, and a low signal ($H_0$), behaving like AF, with probability $1-\alpha$. To differentiate the effects of AB and AF, $P(H_1)$ is set to 0.7 and $P(H_0)$ to 0.3, such that the AF MU produces more deviant results than the AB MU. The detection performance curve of the αMU is sandwiched between those of the AF and AB MU types for $0 < \alpha < 1$: at one extreme, such an MU behaves like AF when $\alpha = 0$, and at the other it behaves like AB when $\alpha = 1$. The effect of such an MU is shown in Figure 4 for $\alpha = 0.5$, $0.8$, and $0.2$. The performance curve of the αMU lies midway between those of the AF and AB MUs for $\alpha = 0.5$; for $\alpha = 0.8$ the curve shifts toward the AB MU, and for $\alpha = 0.2$ it shifts toward the AF MU.

Figure 4: Effect on proposed scheme I of the AB MU, the AF MU, and the MU that transmits a high signal with probability $\alpha$ and a low signal with probability $1-\alpha$.
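To make the four attacker models of Figures 3 and 4 explicit, here is a small Python sketch of the report each MU type would feed to the FC; the energy levels `high` and `low` are illustrative stand-ins for energies that decode as $H_1$ and $H_0$, and the AO type is modeled against the (assumed correct) true PU status.

```python
import numpy as np

def mu_report(mu_type, pu_busy, high, low, alpha=0.5, rng=None):
    """Report generated by a malicious user of the given type."""
    rng = rng or np.random.default_rng()
    if mu_type == "AB":                  # always busy: report H1 regardless
        return high
    if mu_type == "AF":                  # always free: report H0 regardless
        return low
    if mu_type == "AO":                  # always opposite to the true status
        return low if pu_busy else high
    if mu_type == "alpha":               # H1 w.p. alpha, H0 w.p. 1 - alpha
        return high if rng.random() < alpha else low
    raise ValueError(f"unknown MU type: {mu_type}")
```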
### 4.2. Results of Proposed Scheme II
This scheme uses a different approach to identify malicious and unreliable users. Its main advantage is computational simplicity: with a simple approach, and without computing a reliability for each user, it shows almost similar results to the previous approach. The disadvantage is that more users are considered for the global decision in this scheme, as shown in Figure 9.

Figure 5 shows the effect of AO MUs on the performance of the examined schemes. As in Figure 3, conventional CSS exhibits the worst performance, in terms of both detection performance and the number of users considered for the global decision, when exposed to AO MUs. The performance of our proposed method approaches that of the Chair-Varshney rule when the number of MUs increases to two, while retaining the advantage of requiring fewer users (shown in Figure 9).

Figure 5: Performance comparison of the Chair-Varshney, conventional CSS, and proposed scheme II with always opposite (AO) MUs.

Figures 6 and 7 show the effects of the AB and AF types of MUs, respectively, on the performance of the Chair-Varshney approach, conventional CSS, and our proposed scheme II. The detection performances of the latter two methods are almost similar for the following reasons. First, the number of MUs considered is very small compared to the number of legal and reliable users; the detection performance of conventional CSS would be severely affected if the number of MUs were increased. Second, due to the equal probabilities of $H_1$ and $H_0$, the AF and AB MUs have the same effect on the detection performance. Lastly, if the probability of PU arrival is high and AB MUs are present in the network, or the idle probability of the PU is high and AF MUs are present, then the effect of the AB and AF types of MUs will be low, because most of the time the actual status of the PU and the sensing report of the MU will coincide, which has comparatively little effect on the detection performance. The advantage of our scheme is the smaller number of (reliable) users taken into account for the global decision, as demonstrated in Figure 9, where the average number of users equals the total number of users for conventional CSS but is smaller for our proposed scheme; this number continues to decrease as the number of MUs increases.

Figure 6: Performance comparison of the Chair-Varshney, conventional CSS, and proposed scheme II with always busy (AB) MUs.

Figure 7: Performance comparison of the Chair-Varshney, conventional CSS, and proposed scheme II with always free (AF) MUs.
### 4.3. Results of Proposed Scheme III
As discussed in Section 3.1.3, the detection performance of this strategy depends on the honesty of the users. In the case of dishonest users, every user attempts to influence the global result by presenting himself as reliable (in fact, falsely reliable); all users, including honest ones, report their sensing observations to the FC. If the number of dishonest users is small compared to the number of honest users, the effect of the former will be minimal.

Figure 8 compares three cases: only honest users, dishonest users mixed with honest users, and MUs mixed with honest users. It is clear from the figure that similar sensing performance is achieved in the honest and dishonest cases because very few dishonest users are present. When all users are honest, the average number of users will always be less than when all of them are dishonest. No control over MUs is achieved in this scheme, and thus MUs severely contaminate the sensing performance: as is clearly evident from the figure, increasing the number of MUs from 1 to 2 causes a marked deterioration in the detection performance.

Figure 8: Performance comparison of honest, dishonest, and malicious users in scheme III.

Figure 9: Average number of users for the global decision in conventional CSS, CV (Chair-Varshney), and our proposed schemes for 0, 1, 2, and 3 malicious users.
### 4.4. Comparison of the Average Number of Users for Global Decision by Our Proposed Schemes and Other Schemes
Figure 9 shows the average number of SUs considered for the global decision under conventional CSS, the Chair-Varshney rule, and our proposed schemes in the presence of zero, one, two, and three MUs. The average number of users in the conventional CSS and Chair-Varshney cases equals the total number of SUs in the network, whereas fewer users (decreasing further as MUs increase) are used for the global decision in our proposed schemes. Scheme I outperforms all the other schemes in terms of the number of users and shows almost similar detection performance to scheme II. The average number of users in scheme I decreases as the number of MUs increases because scheme I successfully blocks the MUs, which reduces the number of users. The average number of users in scheme II is larger than that in scheme I: in scheme I, each user has a relative weight depending on the accuracy of its results, so unreliable users get less weight and are suppressed from the network, decreasing the average number of users; in contrast, all users (reliable and unreliable) have equal weights in scheme II and are excluded only when their unreliability reaches a certain limit. In scheme III, the average number of users in the dishonest case is 15 (all users), while in the honest case it is below the maximum but increases with the number of MUs, because MUs pretend to be honest and remain in the network. Since schemes II and III both use unreliability to exclude a user, the number of users is equal in both schemes when there is no MU. Schemes I and II use 43% and 17% fewer users, respectively, while showing detection performance similar to that of conventional CSS when there is no MU. The detection performance improves further as the number of users decreases by 52% and 28% in schemes I and II, respectively, when there are two MUs in the network. The number of users considered for the global decision in scheme III is 17% and 3% less than in the conventional and CV schemes when MU = 0 and MU = 2, respectively.
## 5. Conclusion
In this paper, we have proposed simple but effective schemes to combat MUs and control consistently unreliable users. Nonuniform reliability and a reliability-based IT are used to isolate unreliable and malicious users in scheme I. Unreliability and a randomly chosen IT are used to control unreliable users and MUs in scheme II. In scheme III, honest users stop sending reports when their trust level decreases below a certain threshold. The reports of consistently unreliable users, whether due to permanent deep fades or sensor malfunctioning, are restricted so as to minimize their effect on the global result. Restricting participation to only the reliable users makes the network manageable and reduces the computational cost and other overhead.

We intend to extend this work in the future by analyzing the latency and energy consumption of the CR network under our proposed schemes.
---
*Source: 101809-2014-09-11.xml* | 101809-2014-09-11_101809-2014-09-11.md | 73,377 | Secure Cooperative Spectrum Sensing for the Cognitive Radio Network Using Nonuniform Reliability | Muhammad Usman; Insoo Koo | The Scientific World Journal
(2014) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2014/101809 | 101809-2014-09-11.xml | ---
## Abstract
Both reliable detection of the primary signal in a noisy and fading environment and nullifying the effect of unauthorized users are important tasks in cognitive radio networks. To address these issues, we consider a cooperative spectrum sensing approach where each user is assigned nonuniform reliability based on the sensing performance. Users with poor channel or faulty sensor are assigned low reliability. The nonuniform reliabilities serve as identification tags and are used to isolate users with malicious behavior. We consider a link layer attack similar to the Byzantine attack, which falsifies the spectrum sensing data. Three different strategies are presented in this paper to ignore unreliable and malicious users in the network. Considering only reliable users for global decision improves sensing time and decreases collisions in the control channel. The fusion center uses the degree of reliability as a weighting factor to determine the global decision in scheme I. Schemes II and III consider the unreliability of users, which makes the computations even simpler. The proposed schemes reduce the number of sensing reports and increase the inference accuracy. The advantages of our proposed schemes over conventional cooperative spectrum sensing and the Chair-Varshney optimum rule are demonstrated through simulations.
---
## Body
## 1. Introduction
The increasing demand for wireless services has driven the need for intelligent allocation and efficient use of the wireless spectrum. Conventional spectrum allocation results in spatiotemporal underutilization and scarcity of the spectrum. According to the Federal Communications Commission (FCC), the spatial and temporal variations in the utilization of the assigned spectrum range from 15% to 85% [1, 2].Cognitive radio (CR) technology has been proposed to combat the spectrum shortage problem by allowing the opportunistic use of the wireless spectrum, which is primarily allocated to primary (licensed) users (PU), by secondary (unlicensed) users (SUs) under a given level of interference to the PU [3, 4]. Such a scheme requires the SU to detect the PU signal accurately and quickly [5]. Some of the various techniques used for spectrum sensing are energy detection, cyclostationary detection, matched filter detection, wavelet detection, and covariance detection. Energy detection is the method of choice due to its computational simplicity and ease of implementation, as well as its minimal requirement of prior knowledge of the primary signal. However, sensing performance of a single SU is greatly affected by the destructive channel effects such as shadowing and fading, thereby hindering the ability of the SU to distinguish between a deep fade and white space. Cooperative spectrum sensing (CSS) is used to overcome the channel effects and exploit location diversity to detect even a weak primary signal [6].The presence of a malicious user (MU) deteriorates the detection performance of cooperative spectrum sensing. An MU is an unwelcome and unauthorized user who impersonates a legal user and propagates false information about the status of the primary signal. Generally known types of MUs include always busy (AB), always free (AF), always opposite (AO), and an MU that transmits high signal with probabilityα and low signal with probability 1
-
α, and we name it αMU. In AB and AF types, an MU always generates either a high (H
1) or a low (H
0) signal, respectively, regardless of the actual status of the primary signal. In the case of AO, an MU always generates a signal about the status of the PU that is opposite of its local observation. The AO MU is considered to be the most dangerous type, especially, when the decision is taken opposite to the real status of PU (if global decision or real status of the PU is known).Cooperative sensing can improve the detection and false alarm probabilities [7], however, a high number of cooperative users, where majority of users have low SNR, may not produce optimal performance [8] and may have a negative impact on the complexity of the network, sensing time (latency), control channel bandwidth, collision in the control channel, and energy consumption. The number of SUs can be controlled by assigning reliability to them according to their sensing reports. Such reliability is based on correlation with the global decision. Users may send a deviant result due to either channel effects or malfunctioning of the sensors. The consistently deviant users are excluded from participation in the global decision, which leaves fewer but only reliable users in the network. Three different schemes are proposed in this paper to identify and remove consistently unreliable users and MUs which results in a less complex network consisting of fewer and more reliable nodes, which in turn reduces the computational burden on the fusion center (FC) and decreases the latency and overall energy consumption of the network.Cooperative spectrum sensing increases the sensing performance of a CR network by using the location diversity of SUs [7]. However, presence of even few MUs severely degrades the performance of CSS. In [8], the authors have shown that a certain number of users (not all the users) with highest SNR achieve optimal sensing performance. However, the authors do not consider malicious behavior of the SUs and the decision of fusion center is solely based on high SNR users even if they report false data. To nullify the effect of MUs, reputation-based CSS with assistance from trusted nodes has been considered [9]. In [10], a statistical model of the PU was used in a soft reputation-based secure CSS scheme. Such an approach utilizes assistance from trusted nodes in the network. The assumption of trusted nodes is not practical due to the unavailability of such nodes in most cases. Furthermore, the significance of cooperative spectrum sensing is reduced if trusted nodes are the primary source for a result. In [11], an extended sequential CSS scheme was used in which SUs were polled to send their sensing result according to their reputation order. Uniform and fixed reputation degrees were employed for CUs in [12], while uniform reputation with no MU was used in [13]. In all of the above-cited studies, uniform reliability was assigned to users regardless of whether they produce good, normal, or bad results. Furthermore, only two types of MUs (AB and AF) were considered. None of the studies has addressed α-based MU and AO, the most dangerous types of MU.In our previous work [14], the decision of disengagement of an SU and an MU, of types AO and AB, is taken by the FC based on reliability of the SU. In this paper, we extend our work by proposing three different schemes to deal with unreliable and malicious users. We also mitigate the effect of the MU that transmits high and low PU status based on probabilistic parameter α. 
In the first two schemes, an identification tag (IT) is used to restrict MUs, while reliabilities and unreliabilities are used to isolate unreliable users. The IT represents the reliability value of each user. It is calculated on the basis of correlation between the result of each user and FC and is communicated to the SUs in encrypted form. Unauthorized or malicious user would be unable to decrypt the IT. In the third scheme, the detection performance depends on honesty of the users. Dishonest and MUs severely degrade the performance of the network. Our proposed schemes are advantageous due to their computational simplicity, which makes them more practical and easy to implement. With a lower number of users and an avoidance of complex algorithms, the proposed approaches produce results that are comparable to (in terms of detection performance) and better than (in terms of the number of users) those obtained with the Chair-Varshney scheme and better (in all aspects for certain types of MUs) than those attained with the conventional CSS technique.The remainder of this paper is organized as follows. The system model is described in Section2, and our proposed schemes are presented in Section 3. Simulation results and discussion are given in Section 4. Conclusion is presented in Section 5.
## 2. System Model
We consider a network consisting of one PU andN SUs with M
(
M
≤
N
) reliable users and L malicious users such that 0
≤
L
≪
M, shown in Figure 1. The remaining N
-
M
-
L users are unreliable users. Initially M is equal to N (if there is no MU); however, with the training of the CR network, M gets smaller than N due to disappearance of the unreliable users but remains above a minimum threshold, N
min
. The maximum number of MUs is L
max
. The number of reliable users (users with a good channel) is assumed to be more than the number of unreliable users (users with a poor channel) and MUs. Each MU may adopt one of the malicious modes described earlier. We consider an m-bit error-free common control channel between the SU and FC.Figure 1
Cooperative users in a CR network.Detection of the primary signal is a binary hypothesis testing problem. The signal received by theith SU is given as
(1)
H
0
:
x
i
(
n
)
=
u
(
n
)
,
i
=
1,2
,
…
,
N
,
H
1
:
x
i
(
n
)
=
h
i
(
n
)
s
(
n
)
+
u
(
n
)
,
n
=
1,2
,
…
,
S
,
where H
0 and H
1 correspond to the hypotheses that the PU signal is absent and present, respectively, s
(
n
) represents the primary signal received at the SU, h
i
(
n
) is the amplitude gain of the channel, u
(
n
) is the additive white Gaussian noise (AWGN) with zero mean and σ
u
2 variance, N is the number of SUs, and S is the number of samples. We assume that s
(
n
) and u
(
n
) are completely independent. Without loss of generality, the variance of noise is assumed to be the same at every sensor.Each SU usesS samples in the sensing interval to perform spectrum sensing using the energy detection technique [15]. The local observation of the ith user is given by
(2)
y
i
=
∑
n
=
1
S
|
x
i
(
n
)
|
2
,
where S is the number of samples and is equal to 2
T
W, and T and W are the sensing time and bandwidth, respectively. When S is relatively large (e.g., S
>
200), Y
i can be well approximated as a Gaussian random variable under both hypotheses H
0 and H
1 with means μ
0, μ
1 and variances σ
0
2, σ
1
2, respectively, as follows [16]:
(3)
H
0
:
μ
0
=
S
σ
u
2
,
σ
0
2
=
2
S
σ
u
4
,
H
1
:
μ
1
=
S
(
γ
i
+
1
)
σ
u
2
,
σ
1
2
=
2
S
(
2
γ
i
+
1
)
σ
u
4
,
where γ
i is the signal-to-noise ratio (SNR) of the primary signal at theith SU. In each time slot, the FC broadcasts a request to all SUs to perform local sensing. After the sensing period, each SU reports its observation to the FC in the reporting period. The FC combines the received local observations and makes a global decision. We assume that the global decision taken by the FC is correct all of the time. The FC also computes the reliability of each user based on the compliance of an SU’s local observation with the global result. Finally, the global decision along with respective reliability in the encrypted form as identification tag is communicated to each user.Authentication is an integral component of the security protocols [17–19]. A three-stage security protocol consisting of prevention, detection, and cure is proposed in [17]. The prevention stage includes authentication and authorization; the participating users and their data will be authenticated in the authentication stage while recognition of the users is performed in the authorization stage. In [18], the authors proposed remote based smart card authentication scheme where an additional security stage called registration is introduced, in which details of users along with specific details given by the server are stored. In [19], a lightweight authentication scheme is used to guarantee security and privacy in global mobility networks. In [20], basic and extended features are used to detect malicious activity by applying adaptive support vector machines. In [21], a cryptographic technique like blind signature and electronic coin is used to achieve mobility, reliability, anonymity, and flexibility in a mobile wireless network. In this paper, we use an encrypted identification tag for the authentication of users and reliability test for the detection of unreliable and malicious users. The identification tag is assigned to users based on their reported observations.
## 3. Secure Reliability-Based CSS
In conventional CSS, each SU performs local sensing and forwards either its quantized local observationy
i ((2) in the case of a soft decision) or local decision H
1 or H
0 ((4) in the case of a hard decision) to the FC through a dedicated control channel. Here,
(4)
y
i
>
H
1
<
H
0
λ
i
,
i
=
1,2
,
…
,
N
,
where λ
i is the local energy threshold at the ith SU. The detection performance of the CR network is measured by the probability of detection P
d which is a measure of the interference to the PU and the probability of false alarm P
F which sets the upper bound on spectrum utilization. A higher value of P
d will protect the quality of service (QoS) of the PU, and a lower value of P
f will result in higher spectrum utilization. The detection and false alarm probabilities of the ith user are given, respectively, as
(5)
P
d
,
i
=
P
(
y
i
>
λ
i
∣
H
1
)
=
Q
(
λ
i
-
S
(
γ
i
+
1
)
σ
u
2
σ
u
2
2
S
(
2
γ
i
+
1
)
)
,
P
f
,
i
=
P
(
y
i
>
λ
i
∣
H
0
)
=
Q
(
λ
i
-
S
σ
u
2
σ
u
2
2
S
)
,
where Q
(
·
) is a monotonically decreasing function defined as Q
(
x
)
=
(
1
/
2
π
)
∫
x
∞
exp
(
-
t
2
/
2
)
d
t. Sensing results from several SUs are combined at the FC as weighted sum and given as
(6)
Z
=
∑
i
=
1
M
w
k
-
1
i
×
y
i
,
where w
k
-
1
i is the weighting coefficient or reliability of the ith SU in the previous slot which is computed in Section 3.1.1; it is used to highlight or suppress the result of a certain SU based on detection performance. Finally, the status of the primary signal is determined as
(7)
Z
<
λ
,
H
0
,
Z
≥
λ
,
H
1
,
where λ is the global threshold. The global detection and false alarm probabilities are expressed as
(8)
P
D
=
P
(
Z
>
λ
∣
H
1
)
=
Q
(
λ
-
S
∑
i
=
1
M
w
i
(
γ
i
+
1
)
σ
u
2
2
S
∑
i
=
1
M
w
i
2
(
2
γ
i
+
1
)
σ
u
4
)
,
P
F
=
P
(
Z
>
λ
∣
H
0
)
=
Q
(
λ
-
S
∑
i
=
1
M
w
i
σ
u
2
2
S
∑
i
=
1
M
w
i
2
σ
u
4
)
,
respectively.
### 3.1. Our Proposed Schemes
It is assumed that the FC maintains M queues, collectively called the reliability queue and represented by Q, as shown in Figure 2. The size of each queue is K, which denotes the history of reliability maintained for each of the reliable SUs. The value of K reflects a trade-off between sensing accuracy and speed. The FC receives sensing results from all SUs with equal initial reliability, which in scheme I is updated based on the distance between the local sensing result and the global decision. An SU that produces a result more congruent with the global decision is assigned a higher reliability, and vice versa. In contrast, schemes II and III use unreliability, instead of reliability, to evaluate an SU for participation in the global decision. An MU is detected and isolated from the network in schemes I and II because the FC takes the decision through a two-tier checking process. In scheme III, however, the decision of disengagement from the network is taken by the SUs themselves (not the FC), and thus an MU cannot be detected; in this case, the fusion center relies upon the rectitude of the SUs.
Figure 2: Reliability queue at the fusion center.
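For concreteness, the reliability queue can be pictured as an M × K array in which each row holds one SU's recent reliability history. A minimal sketch (illustrative names; the sizes match Table 1):

```python
import numpy as np

M, K = 15, 50            # M SUs, history depth K
Q = np.zeros((M, K))     # Q[i, k]: reliability of SU i, k slots in the past

def push_reliability(Q, i, r):
    """Shift SU i's history by one slot and store its newest reliability r."""
    Q[i, 1:] = Q[i, :-1].copy()   # explicit copy avoids overlapping-slice surprises
    Q[i, 0] = r
```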
#### 3.1.1. Proposed Scheme I
Rather than using complex calculations to compute the reliability of SUs, a simple method is proposed in this study. Each SU performs local sensing in the sensing period and forwards its observation to the FC in the reporting period. The FC accepts the received data from the SUs with equal initial reliability and makes a global decision using a data-fusion (soft-decision) technique. The initial reliability (weight) can be assigned to each SU as discussed in [10], but for simplicity, in this work we assign equal initial reliability to the SUs, which makes the initial weighting coefficient in (6) equal for each SU's report. The channel condition between the PU and an SU is then quantified into a reliability, measured on the basis of how much the SU supports or deviates from the global result. Based on the reliabilities in the previous slot and the reports from the users in the current slot, the FC takes the global decision. In (6), the weights of all the SUs are taken into account for the global decision. However, to calculate/update the weight of the ith SU, the local observations of all SUs except the ith SU should be considered, in order to minimize the bias of the ith SU in the weight assignment [22]. In [22, 23], the authors update the weight coefficients using the Chair-Varshney technique. However, in practical scenarios the detection and false alarm probabilities are not known a priori; further, those schemes do not handle malicious users. In this work, we propose updating the weights based on the reported observations of the SUs. The global decision, excluding the ith SU, can be computed as follows:
(9)
$$Z_i = \sum_{j=1}^{M} w_{k-1}^{j}\, y_j - w_{k-1}^{i}\, y_i = \sum_{j=1,\, j \neq i}^{M} w_{k-1}^{j}\, y_j.$$
The set of all energies reported by the SUs is represented by Y as
(10)
$$Y = \left\{y_1, y_2, \ldots, y_M\right\}.$$
To update the weight of the ith SU, $M-1$ users are considered by excluding the ith SU as follows:
(11)
$$Y_i^{o} \subset Y = \left\{y_l : l = 1,2,\ldots,M,\ l \neq i\right\},$$
where $Y_i^{o}$ is the set of energies of all SUs except the ith SU. $Y_i^{o}$ is sorted into the ordered set $Y_J$ (in ascending or descending order depending on whether the global decision is $H_1$ or $H_0$, respectively, based on the weights of the SUs in the previous slot) as follows:
(12)
$$Y_J = \begin{cases} Y_{(1)} < Y_{(2)} < \cdots < Y_{(M-1)}, & H_1, \\ Y_{(M-1)} < Y_{(M-2)} < \cdots < Y_{(1)}, & H_0, \end{cases}$$
where, in the case of $H_1$, $Y_{(1)}$ and $Y_{(M-1)}$ are $\min(Y_i^{o})$ and $\max(Y_i^{o})$, respectively, whereas, in the case of $H_0$, $Y_{(1)}$ and $Y_{(M-1)}$ are $\max(Y_i^{o})$ and $\min(Y_i^{o})$, respectively. In addition to minimizing the effect of SUs with either a faulty sensor or a persistently weak channel due to deep fading, the ascending order suppresses the effect of the AF and AO types of MUs, whereas the descending order suppresses the effect of the AB and AO types of MUs, by assigning them low reliability. The $M-1$ SUs in the set $Y_i^{o}$ are assigned normalized reliabilities according to the following two equations:
(13)
$$r_{il}^{o} = \arg_{J \in (M-1)}\left(Y_{(J)} = y_l \mid y_l \in Y_i^{o},\ l \neq i\right),$$
$$R_l^{i} = \begin{cases} r_{il}^{o} \times \dfrac{2}{M\left(M-1\right)}, & l \neq i, \\ 0, & l = i. \end{cases}$$
$R_l^{i}$ is an $M \times M$ matrix whose diagonal elements are zero, reflecting the exclusion of the ith SU in the assignment of weights. Each row of the matrix shows the reliability given to the ith SU when the other SUs are excluded one at a time. Finally, the normalized weight of the ith SU is computed by adding the elements of the ith row of the matrix (all weights assigned to the ith SU by the others, i.e., the numerator in (14)) and dividing by the sum over all rows (the denominator in (14)), as given by the following equation:
(14)
$$w_i = R_i = \frac{\sum_{l=1,\, l \neq i}^{M} R_i^{l}}{\sum_{n=1}^{M}\sum_{l=1,\, l \neq n}^{M} R_n^{l}}.$$
The reliability of a user is stored in the database (reliability queue) at the FC and is also communicated to the user in encrypted form, as its identification tag (IT) for future use, along with the global decision:
(15)
$$Q(i,k) = R_i, \qquad IT_k^{i} = R_i,$$
where $Q(i,k)$ denotes the kth slot of the ith queue and $IT_k^{i}$ is the IT assigned to the ith user in the current slot. We assume that only legal SUs know the decryption key, which is updated and exchanged periodically between the FC and the legal SUs; this enables them to successfully decrypt the IT. In the next time slot, each SU transmits its local sensing result along with the previously decrypted reliability (IT). The FC first applies the MU screening test by checking the SU's reported IT against the reliability stored in the corresponding slot for that user in its own database, $Q(i, k-1)$. If a mismatch is found, the FC declares the user an MU. Further, the current input (sensing result) from that SU is discarded, and no future reports are accepted from it:
(16)
$$SU_i = \mathrm{MU}, \quad \text{if } IT_{k-1}^{i} \neq Q\left(i, k-1\right),$$
where $IT_{k-1}^{i}$ is the IT reported by the ith user. If an MU is smart enough to deceive the FC by clearing the MU screening test, which is possible only if the MU produces exactly the same reliability as was assigned to a legal user in the previous slot, then the FC performs a reliability test to detect MUs and consistently unreliable SUs. The reliability test is comparatively slower because data from the past few slots must be gathered in order to identify the behavior and evaluate the credibility of the user. The purpose of the reliability test is to detect consistently unreliable sensors so that their results can be ignored. In going against the global decision, an MU will also be among the most consistent producers of unreliable results and will thus be stopped after a few slots. The consistently unreliable SUs are identified through the cumulative reliability, computed by adding the previously stored K slots' reliabilities as
(17)
$$R_i^{\mathrm{cum}} = \sum_{j=1}^{K} R_j,$$
where j is the index over slots. The SUs with a cumulative reliability smaller than a predetermined reliability threshold $\lambda_R$ are discarded:
(18)
$$r_i = \begin{cases} 1, & R_i^{\mathrm{cum}} < \lambda_R, \\ 0, & R_i^{\mathrm{cum}} \geq \lambda_R, \end{cases}$$
(19)
$$r = \sum_{i=1}^{N} r_i,$$
where r is the number of users with unacceptable reliabilities, comprising both unreliable and malicious users. Finally, only the remaining users, $M = N - r$, are considered by the fusion center when making a global decision. The final decision depends on the global threshold and the weighting coefficients (reliabilities).
#### 3.1.2. Proposed Scheme II
In this scheme, the computations are further simplified. Instead of computing the reliability of each user from previous results, a reliability (renewed in every time slot) is randomly assigned to each user by the FC. This random reliability (RR) is used as the IT for the SU and is also stored in the database of the FC for future decisions as
(20)
$$IT_k^{i} = Q(i,k) = RR_i,$$
where $RR_i$ is the random reliability assigned to the ith SU, stored at $Q(i,k)$. The global decision and the respective IT values are communicated to the SUs at the end of each time slot. Since a soft fusion rule is used for the global decision in this scheme, all SUs report their current local observations, along with the previously assigned IT (in decrypted form), to the fusion center, where they are combined with equal weights and a global decision is made about the status of the primary signal. If the IT sent by an SU does not match the most recently (previous slot) stored IT in the reliability queue at the FC, that SU is deemed malicious. On the other hand, if a match is found, then the unreliability of the SU is computed. If the local observation does not match the global decision, the reliability of that particular SU is decreased; in other words, the unreliability $U_i$ of that SU is increased:
(21)
$$U_i = U_i + \left(Z \oplus y_i\right),$$
where $Z$ and $y_i$ are the 1-bit global and local decisions, respectively, and $\oplus$ is the exclusive-OR operation, which produces 1 when the local and global decisions differ and 0 otherwise. For the computation of the unreliability, 1-bit global and local decisions are considered by the FC, whereas the soft fusion rule is used for the global decision. The 1-bit local decision of each user is computed by the fusion center from the reported observation of the respective user; we assume the same threshold for all SUs to obtain the 1-bit local decisions at the FC. If the MU screening test fails to catch an MU (i.e., the MU produces exactly the same IT as that stored in the queue), the MU is detected by the reliability test, because an MU produces results that frequently deviate from the actual status of the primary signal (the global decision). Every time the MU reports a deviant result, its unreliability increases, which happens more frequently than for a user in fading or shadowing. An SU (be it an MU or a normal SU producing consistently wrong results due to the channel condition or a sensor malfunction) is stopped from sending reports to the FC when its unreliability reaches a predefined threshold. Only the remaining users that are reliable in terms of generating accurate results contribute to determining the PU status. The dropped SU, represented by $SU_D$, is not involved in future global decisions and is determined by the following equation:
(22)
$$SU_D = \arg\max_{i=1,2,\ldots,M}\left(U_i\right) \quad \text{if } U_{\mathrm{thr}} \leq \max_{i=1,2,\ldots,M}\left(U_i\right).$$
In this scheme, the decision to drop an unreliable SU or an MU is taken by the FC.
#### 3.1.3. Proposed Scheme III
In this scheme, the unreliability is computed by every SU individually by comparing the local and global decisions. To be consistent with the previous schemes, we use the soft-decision approach here, although a hard-decision rule would be better suited to this scenario. Three types of users are considered: honest, dishonest, and malicious. Honest users are those who stop reporting when their unreliability exceeds a certain value; in the case of honest users, over time only the reliable users, fewer than the total number of users, contribute to the detection of the primary signal. Dishonest users continue reporting their untrusted observations even when their unreliability exceeds the threshold. Users with malicious behavior continuously send false data irrespective of the real status of the primary signal and thus severely degrade the detection performance of the network. Dishonest users and MUs try to falsify results to suit their own selfish interests. As the decision of disengagement from the network is taken at the user level, this approach has no mechanism for dealing with MUs; only consistently unreliable users (those with a malfunctioning sensor or in deep fades) are restricted. The FC relies on the honesty of the SUs and accepts reports from all users. In an environment composed entirely of honest users, consistently unreliable users disengage from reporting when their unreliability reaches a certain limit. In this approach, each SU performs local sensing, sends its observation to the FC, and waits for the global decision. If the received global decision differs from the local decision, the SU increments its unreliability as in (21). An SU remains in the network as long as its unreliability does not exceed a certain threshold, similar to (22). In contrast to schemes I and II, no IT or other reliability calculations are used in scheme III, which makes the approach simple and fast. At any given time, there will be M reliable nodes in the CR network. In the case of all honest users, M is normally smaller than N because unreliable SUs leave the network, thereby keeping the calculations simple and the CR network manageable. If both malicious and honest users are present, the number of users will be less than N but greater than M. In the case of all dishonest users, the number of users remains fixed and equal to the total number of users, N. Computational simplicity is the main advantage of this strategy; its disadvantages include the lack of control over MUs and unreliable users, as the decision is taken at the SU, and an increased number of users when they are dishonest.
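The SU-side logic of scheme III mirrors the FC-side update of scheme II. A minimal sketch of one SU, with illustrative names and an `honest` flag to model the user types:

```python
class SchemeIIISU:
    """Scheme III secondary user: tracks its own unreliability and, if honest,
    stops reporting once the threshold is reached (illustrative sketch)."""
    def __init__(self, u_thr, honest=True):
        self.u = 0                 # accumulated unreliability, as in (21)
        self.u_thr = u_thr
        self.honest = honest
        self.active = True         # still reporting to the FC

    def on_global_decision(self, local_bit, global_bit):
        self.u += local_bit ^ global_bit
        if self.honest and self.u >= self.u_thr:
            self.active = False    # honest SU disengages; a dishonest one keeps reporting
```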
## 4. Simulation Results
In this section, we use simulations to compare our proposed strategies with the Chair-Varshney and conventional cooperative spectrum sensing schemes; schemes I and II are considered. In scheme III, the effect of dishonest users and MUs is compared with that of honest users. The effects of the always opposite MU, always busy MU, always free MU, and the αMU that transmits $H_1$ with probability α and $H_0$ with probability $1-α$ are illustrated in the simulations. We evaluate the detection performance of our proposed schemes by plotting receiver operating characteristic (ROC) curves. The simulation parameters are summarized in Table 1.

Table 1: System parameters.

| Description | Symbol | Value |
| --- | --- | --- |
| Number of iterations | $l$ | 5000 |
| Number of SUs | $N$ | 15 |
| PU busy probability | $P(H_1)$ | 0.5 |
| Sensing duration | $T_s$ | 1 ms |
| Sampling frequency | $f_s$ | 300 kHz |
| Number of samples | $S$ | 600 |
| Signal-to-noise ratio | $\gamma$ | [−25 dB, −10 dB] with a 1 dB step |
| Maximum number of MUs | $L_{\max}$ | 3 |
| Minimum number of reliable SUs | $M_{\min}$ | 5 |
| Size of queue | $Q$ | 15 × 50 |
| Depth of each user queue (each row in $Q$) | $K$ | 50 |
| Unreliability threshold | $U_{\mathrm{thr}}$ | 10 |
| Probability of $H_1$ transmission by αMU | $\alpha$ | [0.2, 0.5, 0.8] |
### 4.1. Results of Proposed Scheme I
Figure 3 shows the detection performance of our proposed scheme I, the Chair-Varshney (CV) rule, and the conventional CSS scheme under the effect of zero, one, and two MUs of the AO type. It is clear from the figure that as the number of AO MUs increases, the detection performance of all schemes decreases. The detection performance of the Chair-Varshney rule drops quickly when the number of MUs increases to two; Chair-Varshney is the optimum rule, but the detection performance of our proposed scheme matches the CV rule for two MUs. Conventional CSS is the most severely affected by the AO MUs. When there is no MU, our proposed and conventional CSS schemes show almost similar results, but our proposed scheme has the advantage of utilizing a smaller number of users. When malicious users are introduced (i.e., one or two MUs), our proposed scheme exhibits more robustness and efficiency than conventional CSS. It is also evident from the figure that Chair-Varshney is the optimal detection scheme and provides an upper bound for the other schemes when there is no MU. However, it has the disadvantage that all users, including consistently unreliable and malicious users, are considered. Our proposed scheme has the advantage of using fewer users for the global decision, as shown in Figure 9.
Figure 3: Performance comparison of the Chair-Varshney, conventional CSS, and proposed scheme I with always opposite (AO) MUs.
Figure 4 shows the detection performance of the proposed scheme I when there is one AB, AF, or αMU. An αMU is an MU that transmits a high signal ($H_1$), behaving like AB, with probability α, and a low signal ($H_0$), behaving like AF, with probability $1-α$. To differentiate the effects of AB and AF, $P(H_1)$ is set to 0.7 and $P(H_0)$ to 0.3, so that the AF MU produces more deviating results than the AB MU. The detection performance curve of the αMU is sandwiched between those of the AF and AB MU types for $0 < α < 1$: at one extreme, such an MU behaves like AF when $α = 0$, and at the other extreme it behaves like AB when $α = 1$. The effect of such an MU is shown in Figure 4 for $α = 0.5$, 0.8, and 0.2. The performance curve of the αMU lies midway between those of the AF and AB MUs for $α = 0.5$; for $α = 0.8$ the curve shifts toward the AB MU, and for $α = 0.2$ it shifts toward the AF MU.
Figure 4: Effect of the AB, AF, and αMU (which transmits a high signal with probability α and a low signal with probability $1-α$) on proposed scheme I.
### 4.2. Results of Proposed Scheme II
This scheme uses a different approach to identify malicious and unreliable users. Its advantage is computational simplicity: with a simple approach, and without computing a reliability for each user, it shows results almost similar to those of the previous approach. The disadvantage is that more users are considered for the global decision in this scheme, as shown in Figure 9.
Figure 5 shows the effect of AO MUs on the performance of the examined schemes. As in Figure 3, conventional CSS exhibits the worst performance, in terms of both detection performance and the number of users considered for the global decision, when exposed to AO MUs. The performance of our proposed method approaches that of the Chair-Varshney approach when the number of MUs increases to two, even though our scheme has the advantage of requiring fewer users (shown in Figure 9).
Figure 5: Performance comparison of the Chair-Varshney, conventional CSS, and proposed scheme II with always opposite (AO) MUs.
Figures 6 and 7 show the effect of the AB and AF types of MUs, respectively, on the performance of the Chair-Varshney approach, conventional CSS, and our proposed scheme II. The detection performances of the latter two methods are almost similar for the following reasons. First, the number of MUs considered is very small compared to the number of legal and reliable users; the detection performance of conventional CSS would be severely affected if the number of MUs were increased. Second, due to the equal probabilities of $H_1$ and $H_0$, AF and AB MUs have the same effect on the detection performance. Lastly, if the probability of PU arrival is high and AB MUs are present in the network, or the idle probability of the PU is high and AF MUs are present, then the effect of the AB and AF types of MUs will be low, because most of the time the actual status of the PU and the sensing report of the MU will coincide, which has comparatively little effect on the detection performance. The advantage of our scheme is the smaller number of (reliable) users taken into account for the global decision, as demonstrated in Figure 9, where the average number of users equals the total number of users for conventional CSS but is smaller for our proposed scheme. This number continues to decrease as the number of MUs increases.
Figure 6: Performance comparison of the Chair-Varshney, conventional CSS, and proposed scheme II with always busy (AB) MUs.
Figure 7: Performance comparison of the Chair-Varshney, conventional CSS, and proposed scheme II with always free (AF) MUs.
### 4.3. Results of Proposed Scheme III
As discussed in Section 3.1.3, the detection performance of this strategy depends on the honesty of the users. In the case of dishonest users, every user attempts to influence the global result by presenting himself as reliable (in fact, falsely reliable). All users, including honest users, report their sensing observations to the FC. If the number of dishonest users is small compared to the number of honest users, the effect of the former will be minimal.
Figure 8 shows the performance comparison for three cases: only honest users, dishonest users mixed with honest users, and MUs mixed with honest users. It is clear from the figure that similar sensing performance is achieved in the honest and dishonest cases because very few dishonest users are present. When all users are honest, the average number of users will always be less than when all of them are dishonest. No control over MUs is achieved in this scheme; thus, MUs severely contaminate the sensing performance. As is clearly evident from the figure, increasing the number of MUs from 1 to 2 causes a large deterioration in the detection performance.
Figure 8: Performance comparison of honest, dishonest, and malicious users in scheme III.
Figure 9: Average number of users for the global decision in conventional CSS, CV (Chair-Varshney), and our proposed schemes for 0, 1, 2, and 3 malicious users.
### 4.4. Comparison of the Average Number of Users for Global Decision by Our Proposed Schemes and Other Schemes
Figure 9 shows the average number of SUs considered for the global decision under conventional CSS, the Chair-Varshney rule, and our proposed schemes in the presence of zero, one, two, and three MUs. The figure shows that the average number of users in the case of the conventional CSS and Chair-Varshney rule equals the total number of SUs in the network, whereas fewer users (decreasing further as the number of MUs increases) are used for the global decision in our proposed schemes. Scheme I outperforms all the other schemes in terms of the number of users and shows detection performance almost similar to scheme II. The average number of users in scheme I decreases as the number of MUs increases because the MUs are successfully blocked by scheme I, which reduces the number of users. It is also visible from the figure that the average number of users in scheme II exceeds that in scheme I. The reason is that in scheme I each user has a relative weight that depends on the accuracy of its results; thus, unreliable users get less weight and are suppressed from the network, which decreases the average number of users. In contrast, all users (reliable and unreliable) have equal weights in scheme II and are excluded only when their unreliability reaches a certain limit. In scheme III, the average number of users in the dishonest case is 15 (the total number of users), while in the honest case it is less than the maximum but increases with the number of MUs, because MUs pretend to be honest and remain in the network. Since schemes II and III both use unreliability to exclude a user, the number of users when there is no MU is equal in both schemes. Scheme I and scheme II use 43% and 17% fewer users, respectively, while showing detection performance similar to that of conventional CSS when there is no MU. The detection performance improves further with reductions in the number of users of 52% and 28% in scheme I and scheme II, respectively, when there are two MUs in the network. However, the number of users considered for the global decision in scheme III is 17% and 3% less than in the conventional and CV schemes when MU = 0 and MU = 2, respectively.
## 5. Conclusion
In this paper, we have proposed simple but effective schemes to combat MUs and to control consistently unreliable users. Scheme I uses nonuniform reliability and reliability-based IT to isolate unreliable and malicious users. Scheme II uses unreliability and a randomly chosen IT to control unreliable users and MUs. In scheme III, honest users stop sending reports when their trust level falls below a certain threshold. The results produced by consistently unreliable users, whether due to permanent deep fades or sensor malfunction, are restricted so as to minimize their effect on the global result. Restricting participation to reliable users makes the network manageable and reduces computational cost and other overhead.

We intend to extend this work in the future by analyzing the latency and energy consumption of the CR network under our proposed schemes.
---
*Source: 101809-2014-09-11.xml*
# Hydrothermal Synthesis of Lanthanum-Doped MgAl-Layered Double Hydroxide/Graphene Oxide Hybrid and Its Application as Flame Retardant for Thermoplastic Polyurethane
**Authors:** Yi Qian; Peng Qiao; Long Li; Haoyue Han; Haiming Zhang; Guozhang Chang
**Journal:** Advances in Polymer Technology
(2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1018093
---
## Abstract
A novel lanthanum-doped MgAl-layered double hydroxide/graphene oxide hybrid (La LDH/GO) with a La3+/Al3+ molar ratio of 0.05 was successfully synthesized by the hydrothermal method. The structure and morphology of as-prepared samples were characterized by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), and transmission electron microscopy (TEM). Then, La LDH/GO was added into thermoplastic polyurethane (TPU) to investigate its effect on flame retardancy, smoke suppression, and thermal stability of TPU composites. The cone calorimeter test (CCT) results indicated that the peak heat release rate (PHRR) and peak smoke production rate (PSPR) values of TPU with La LDH/GO decreased by 33.1% and 51% compared with neat TPU, respectively. Therefore, La LDH/GO can play a good role in flame retardancy and smoke suppression of TPU matrix during combustion. In the meantime, La LDH/GO could improve the char yield of TPU composites, which is attributed to the interaction between the physical barrier effect of GO and the catalytic effect of 0.05 La LDH.
---
## Body
## 1. Introduction
Polymer-based materials are burgeoning and widely used, and their end products cover many fields such as electronics, electrical appliances, textiles, furniture, transportation, and building materials [1]. Thermoplastic polyurethane (TPU), an engineering thermoplastic, has been increasingly applied owing to its good flexibility, high compressive strength, and good abrasion resistance. However, like most polymers, TPU is highly flammable, which significantly increases the potential fire hazard wherever it is used [2]. To reduce fire damage and improve the safety of the living environment, flame retardancy has become one of the important factors considered when choosing TPU for different purposes [3]. Hence, numerous flame-retardant additives have been incorporated into TPU. With growing environmental awareness, halogen-based flame retardants are gradually being reduced or even banned because of the ecological damage they cause during combustion [4]. New types of halogen-free flame retardants, for instance organic phosphorus compounds [5], carbon nanotubes (CNTs) [6], and polyhedral oligomeric silsesquioxane (POSS) [7], have therefore become increasingly attractive in recent decades.

Layered double hydroxide (LDH), also known as hydrotalcite compound (HT) or anionic clay, is a class of layered compounds composed of positively charged metal hydroxide layers with interlayer spaces containing exchangeable anions. LDH offers superior flame retardancy and smoke suppression because its layered structure contains crystal water and hydroxyl groups (-OH) [1, 8]. In recent years, there have been many studies of LDH in flame-retardant polymers. Han et al. [9] synthesized sodium dodecyl benzene sulfonate (SDBS) intercalated CoAl, MgAl, NiAl, and ZnAl LDH as flame retardants for polystyrene (PS). The results showed that the peak heat release rate (PHRR) of the nanocomposites was reduced by 7% and 12% at 5 wt% MgAl-SDBS LDH and ZnAl-SDBS LDH loading, respectively. Zhang et al. [10] prepared phosphotungstic acid- (PWA-) intercalated MgAl LDH and investigated its effect on intumescent flame retardant (IFR) poly(lactic acid) (PLA) composites. At 2 wt% MgAl-PWA LDH loading, the PHRR of the PLA composites decreased significantly from 306.3 kW/m2 for neat PLA to 40.1 kW/m2; the limiting oxygen index (LOI) reached 48 and the composites passed the UL-94 V-0 rating.

The controllable composition and structure of LDH make it possible to intercalate different cations, and the flame retardancy of LDH within polymers can thereby be further improved [8]. Lanthanum (La), a representative rare earth (RE) element, can serve in rare earth thermal stabilizers owing to the high coordination numbers of rare earth ions [11]. Wen et al. [12] introduced La3+ into ZnAl-CO3 LDH at different Zn/Al/La molar ratios to prepare ZnAlLa-CO3 LDH as a heat stabilizer for poly(vinyl chloride) (PVC) resin. Their results showed that ZnAlLa-CO3 LDH could significantly enhance the thermal stability of PVC samples. For another rare earth element, cerium (Ce), Yi et al. [13] prepared MgAlCe-CO3 LDH as a stabilizer for PVC resin. They found that the PVC composite containing MgAlCe-CO3 LDH showed better thermal stability at a filler loading of 3 phr.

Favourable dispersion of LDH in the polymer matrix is a critical prerequisite for obtaining excellent flame retardancy in polymer-based materials. However, owing to the strong electrostatic interaction between the hydroxide layers, LDH is apt to agglomerate, which limits its flame-retardant performance in polymers [14]. Graphene oxide (GO), with an ideal two-dimensional structure and large specific surface area, can act as a carrier that mitigates the reaggregation of LDH in the polymer matrix [15]. For instance, Xu et al. [16] synthesized heptamolybdate (Mo7O246−) intercalated MgAl LDH loaded on graphene hybrids and investigated their flame-retardant properties in polyurethane elastomer (PUE). With 2 wt% RGO-LDH/Mo loading, the PHRR of the PUE composites decreased by 58.6%. Meanwhile, TEM results showed that RGO-LDH and RGO-LDH/Mo exhibited no obvious agglomeration in PUE. Nevertheless, RE-doped LDH/GO hybrids as flame retardants have rarely been reported.

In this paper, La-doped MgAl LDH was obtained via hydrothermal synthesis. Afterwards, La LDH and GO were hybridized to synthesize the La LDH/GO hybrid, i.e., La LDH sheets were loaded on GO layers. The structure and morphology of the as-prepared samples were characterized. TPU composites filled with LDH and LDH/GO were then prepared by melt blending, and the flame retardancy and smoke suppression of all TPU composites were comprehensively analyzed.
## 2. Experimental
### 2.1. Materials
Sulfuric acid (98%), hydrogen peroxide (30%), nitric acid (68%), aqueous ammonia (25%), graphite powder, Al(NO3)3·9H2O, Mg(NO3)2·6H2O, KMnO4, and NaNO3 were all purchased from Sinopharm Chemical Reagent Co., Ltd. (China). La2O3 was bought from Shanghai Aladdin Bio-Chem Technology Co., Ltd. (China). La(NO3)3 solution was prepared by dissolving La2O3 in dilute nitric acid. Commercial TPU (9380A) was obtained from Bayer, Germany.
### 2.2. Synthesis of La LDH
The La LDH samples were synthesized by combined precipitation and hydrothermal methods. 0.03 mol Mg(NO3)2·6H2O and 0.012 mol Al(NO3)3·9H2O were dissolved in 60 mL deionized water, and La(NO3)3 solution was then added to the mixture; the La3+/Al3+ molar ratio was varied at 0.02, 0.05, and 0.1 for comparison, while the total molar amount of La3+ and Al3+ was maintained at 0.012 mol. The pH of the mixture was adjusted to 10 by adding dilute aqueous ammonia (5%) dropwise. The mixture was heated at 65°C for 30 min with rapid stirring. The resulting suspension was then transferred to a 100 mL Teflon-lined autoclave and kept at 130°C for 12 h. After the autoclave had cooled to room temperature, the resulting precipitates were filtered, washed several times with deionized water, and dried at 60°C for 24 h.
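For readers reproducing the recipe, the split of the fixed trivalent-cation budget between Al3+ and La3+ follows from the stated ratio. Below is a minimal helper, assuming the ratio r is defined as n(La3+)/n(Al3+) with n(La3+) + n(Al3+) held at 0.012 mol, as stated above:

```python
# Split the fixed trivalent-cation budget between Al3+ and La3+ for a target
# La3+/Al3+ molar ratio r, assuming (as stated) n(La3+) + n(Al3+) = 0.012 mol.

TOTAL_TRIVALENT_MOL = 0.012  # n(La3+) + n(Al3+), from the protocol above
MG_MOL = 0.03                # n(Mg2+), from 0.03 mol Mg(NO3)2·6H2O

def cation_split(r: float) -> tuple[float, float]:
    """Return (n_Al, n_La) in mol for a La3+/Al3+ molar ratio r."""
    n_al = TOTAL_TRIVALENT_MOL / (1.0 + r)
    return n_al, n_al * r

for r in (0.02, 0.05, 0.10):
    n_al, n_la = cation_split(r)
    print(f"r = {r:.2f}: n(Al3+) = {n_al:.5f} mol, "
          f"n(La3+) = {n_la:.5f} mol, Mg/Al = {MG_MOL / n_al:.2f}")
```

For r = 0.05 this gives a nominal Mg/Al/La ratio of roughly 2.6/1/0.05, close to the EDS-measured 2.4/1/0.05 reported in Section 3.1.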
### 2.3. Synthesis of La LDH/GO Hybrid
GO was synthesized from graphite powder by the Hummers method [17]. The La LDH/GO hybrid was prepared under similar experimental conditions, except that GO solution was added to the above mixture containing Mg2+, La3+, and Al3+ at a La3+/Al3+ molar ratio of 0.05.
### 2.4. Synthesis of TPU Composites
TPU composites were prepared by the melt blending method. A given amount of TPU was first masticated in the internal mixer at 180°C for 3 min at an agitation rate of 30 rpm. La LDH/GO was then added to the mixer and stirred constantly at the same temperature for 10 min. Finally, the TPU composite containing La LDH/GO was hot-pressed for 10 min at 180°C and 10 MPa to form sheets with dimensions of 100 × 100 × 3 mm3. The TPU composites containing MgAl LDH and La LDH were prepared under the same conditions. The formulas of all TPU composites are displayed in Table 1.
Table 1
Formulas of TPU and TPU composites.

| Sample | TPU (wt%) | MgAl LDH (wt%) | 0.05 La LDH (wt%) | La LDH/GO (wt%) |
| --- | --- | --- | --- | --- |
| TPU | 100 | 0 | 0 | 0 |
| TPU1 | 98 | 2 | 0 | 0 |
| TPU2 | 98 | 0 | 2 | 0 |
| TPU3 | 98 | 0 | 0 | 2 |
### 2.5. Characterization
X-ray diffraction (XRD) measurements were taken using a Rigaku X-ray diffractometer (Japan) with a Cu-Kα tube and Ni filter (λ = 0.1542 nm). Fourier transform infrared spectroscopy (FTIR) spectra were recorded on a Nicolet 6700 FTIR spectrophotometer (USA) using the KBr pellet technique. Scanning electron microscopy (SEM) measurements were performed with a JSM-6700F instrument (Japan). Transmission electron microscopy-energy dispersive spectrometry (TEM-EDS) measurements were taken on a JEM-2100Plus instrument (Japan) at an acceleration voltage of 200 kV. The cone calorimeter test (CCT) was undertaken with a JCZ-2 cone calorimeter (China) according to ISO 5660 standard procedures; specimens of 100 × 100 × 3 mm3 were irradiated at a heat flux of 50 kW/m2. Limiting oxygen index (LOI) measurements were carried out with an HC-2 oxygen index meter (China) according to ASTM D2863 on specimens of 100 × 6.5 × 3 mm3. Thermogravimetric analysis (TGA) was carried out on a DT-50 instrument (France); samples were heated from room temperature to 800°C at 20°C/min under a nitrogen atmosphere (flow rate of 20 mL/min).
## 3. Results and Discussion
### 3.1. Characterization of As-Prepared Samples
XRD can be used to determine the crystal structure of materials. The XRD patterns of La LDH with different La3+/Al3+ molar ratios are shown in Figure 1. The diffraction peaks of MgAl LDH at 2θ = 9.9°, 20.0°, 34.6°, 37.6°, 42.7°, 60.8°, 61.8°, and 64.7° correspond to the (003), (006), (012), (015), (018), (110), (113), and (116) planes of the hydrotalcite structure, respectively [18]. The interlayer spacing of MgAl LDH, 0.89 nm from the (003) plane, indicates the intercalation of NO3− into the interlayer gallery [19]. The interlayer spacing of all La LDH samples remains unchanged after doping La3+ into MgAl LDH. With increasing La3+ content on the MgAl LDH laminates, the intensities of the three peaks between 30° and 50° weaken, mainly because the large ionic radius of La3+ disturbs the hexagonal structure of MgAl LDH. Notably, when the La3+/Al3+ molar ratio is 0.1, an impurity phase appears: peaks for the (100), (110), (101), (201), and (211) planes of La(OH)3 are observed at 2θ = 15.7°, 27.4°, 28.0°, 39.5°, and 48.8° (JCPDS card no. 83-2034) [12].
Figure 1
XRD spectra of MgAl LDH, 0.02 La LDH, 0.05 La LDH, and 0.1 La LDH.

Based on the above study of La LDH with varying La3+/Al3+ molar ratio, 0.05 La LDH (which shows no new phase) and GO were selected to synthesize the La LDH/GO hybrid. Figure 2 displays the XRD spectra of GO, 0.05 La LDH, and La LDH/GO. The spectrum of GO has a strong diffraction peak at 2θ = 11.48°, corresponding to the (002) plane; the interlayer spacing is 0.77 nm according to Bragg's equation [20]. La LDH/GO and 0.05 La LDH share the same diffraction peaks, but the peak intensities of La LDH/GO are weaker. Furthermore, the diffraction peak of GO disappears, indicating that 0.05 La LDH is well dispersed on the GO layers.
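The interlayer spacings quoted above follow directly from Bragg's law, nλ = 2d sin θ, using the Cu-Kα wavelength λ = 0.1542 nm from Section 2.5 and θ equal to half the reported 2θ. A quick numerical check:

```python
# Interlayer spacing from Bragg's law: n·λ = 2·d·sin(θ), with n = 1 and
# λ = 0.1542 nm (Cu-Kα, Section 2.5). The 2θ values are those quoted above.
import math

WAVELENGTH_NM = 0.1542

def d_spacing(two_theta_deg: float) -> float:
    """First-order Bragg spacing (nm) for a reflection at 2θ (degrees)."""
    theta = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH_NM / (2.0 * math.sin(theta))

print(f"MgAl LDH (003) at 2θ = 9.9°: d ≈ {d_spacing(9.9):.2f} nm")    # 0.89 nm
print(f"GO (002) at 2θ = 11.48°:     d ≈ {d_spacing(11.48):.2f} nm")  # 0.77 nm
```

Both values reproduce the spacings reported for the (003) reflection of MgAl LDH and the (002) reflection of GO.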
Figure 2
XRD spectra of GO, 0.05 La LDH, and La LDH/GO.

FTIR is used to obtain information about the chemical bonds and functional groups in materials. The FTIR spectra of GO, MgAl LDH, 0.05 La LDH, and La LDH/GO are presented in Figure 3. The characteristic peaks of GO at 3395, 1721, 1402, and 1054 cm−1 are attributed to the stretching vibrations of O-H, C=O, epoxy C-O, and alkoxy C-O, respectively, while the absorption peak at 1617 cm−1 corresponds to the deformation vibration of adsorbed water [21]. The spectra of MgAl LDH, 0.05 La LDH, and La LDH/GO show similar trends: characteristic bands at 3450 and 1637 cm−1 are ascribed to the vibrations of O-H and water molecules, respectively; the absorption peaks at 1377 and 826 cm−1 are assigned to the ν3 and ν2 vibrations of the interlayer NO3− anion [19, 22]; and the bands below 700 cm−1 are attributed to the lattice vibrations of Al-O and Mg-O in the LDH [9, 23]. More significantly, compared with MgAl LDH, the Al-O absorption peak shifts by 11 wavenumbers (from 666 cm−1 to 655 cm−1) in the FTIR spectra of 0.05 La LDH and La LDH/GO, and the Al-O peaks of 0.05 La LDH and La LDH/GO at 555 cm−1 disappear. This is mainly because La3+ partially replaces Al3+ and disturbs the lattice structure of the LDH [12, 13].
Figure 3
FTIR spectra of GO, MgAl LDH, 0.05 La LDH, and La LDH/GO.

The morphology and internal structure of GO and La LDH/GO were observed by TEM. As seen in Figure 4(a), GO has a two-dimensional layered structure with a lateral size of several hundred nanometers. In some areas the GO layers fold onto each other, showing different degrees of restacking, largely due to van der Waals forces between the layers. As shown in Figure 4(b), the lateral size of the 0.05 La LDH platelets is around 50–100 nm, and many 0.05 La LDH sheets appear on the GO layers, so the folded areas are significantly reduced [22]. Owing to the successful loading of LDH on the GO layers, the restacking of GO sheets is effectively inhibited. Furthermore, the elements C, O, N, Mg, Al, and La can be observed in the EDS spectrum of La LDH/GO (Figure 4(c)). The measured Mg/Al/La molar ratio of 2.4/1/0.05 agrees with the theoretical values, indicating that the La LDH/GO hybrid was successfully synthesized.
Figure 4
TEM images of (a) GO, (b) La LDH/GO, and (c) EDS analysis of La LDH/GO.
### 3.2. Flame Retardancy of TPU Composites
Numerous parameters related to the potential fire hazard of materials can be obtained with the cone calorimeter, the most suitable instrument for investigating the combustion behavior of materials in a fire. The heat release rate (HRR) is the most important fire characteristic of a material [24, 25]. Figure 5 gives the HRR curves of neat TPU and the TPU composites. Neat TPU has a high peak heat release rate (PHRR) of 1103 kW/m2, indicating that it is highly flammable and behaves as an intermediate-thickness non-charring sample. For TPU1 containing MgAl LDH, the PHRR value decreased by 23.3% to 846 kW/m2 compared with neat TPU. This can be explained by the fact that MgAl LDH absorbs heat during thermal decomposition, lowering the surface temperature of the TPU and slowing the thermal decomposition and combustion of the polymer; meanwhile, MgAl LDH can form a protective carbon layer over the degradation products, which hinders heat and gas transfer [26]. Furthermore, the PHRR value of TPU2 decreased by 30.3% compared with neat TPU; this decline is mainly due to the further improvement of the flame retardancy of MgAl LDH by the introduction of rare earth lanthanum. Among all the samples, TPU3 has the lowest PHRR value, 33.1% lower than that of neat TPU, reflecting the physical barrier effect of the GO sheets [27]. It is worth noting that the time to peak for all TPU composites is shorter than that of neat TPU, which is attributed to the decomposition of LDH at low temperature.
Figure 5
HRR curves of TPU and TPU composites.

The cone calorimeter data of neat TPU and the TPU composites are displayed in Table 2. From TPU1 to TPU3, the total heat release (THR) values are essentially unchanged, indicating that the heat released is kept constant before and after combustion. The average mass loss rate (AvMLR) values decreased, however, showing that 0.05 La LDH and GO provide flame retardancy through a charring mechanism in the TPU composites. Meanwhile, the average heat release rate (AvHRR) values of TPU1, TPU2, and TPU3 decreased in turn, and the respective HRR curves became progressively flatter after reaching their peaks. Combined with the results of Figure 5, this shows that all TPU composites behave as thermally thick charring (residue-forming) samples [4]. In addition, the average effective heat of combustion (AvEHC), THR, AvHRR, and AvMLR values of TPU1 are all slightly higher than those of neat TPU (Table 2), indicating that the flame-retardant effect of a low MgAl LDH content on TPU is not significant.
Table 2
Cone calorimeter data of TPU and TPU composites.

| Sample | PHRR (kW/m2) | THR (MJ/m2) | AvHRR (kW/m2) | AvEHC (MJ/kg) | AvMLR (g/s) | PSPR (m2/s) | TSP (m2) | AvSEA (m2/kg) | PCOY (kg/kg) | PCO2Y (kg/kg) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TPU | 1103 | 139.1 | 261.2 | 15.6 | 0.089 | 0.100 | 13.22 | 159.6 | 0.090 | 8.802 |
| TPU1 | 846 | 139.6 | 283.8 | 16.3 | 0.092 | 0.061 | 10.62 | 142.5 | 0.063 | 8.537 |
| TPU2 | 769 | 140.3 | 237.2 | 15.8 | 0.078 | 0.055 | 11.06 | 129.2 | 0.043 | 8.502 |
| TPU3 | 738 | 140.0 | 209.8 | 13.3 | 0.068 | 0.049 | 10.76 | 106.0 | 0.006 | 0.172 |

To further analyze the effect of the La LDH/GO hybrid on the flame retardancy of TPU, an oxygen index meter was used to obtain LOI values. Figure 6 presents the LOI values of neat TPU and the TPU composites. The LOI value of neat TPU is 21.4%, while the LOI values of the composites are 21.8%, 22%, and 23.2%, respectively. The LOI increased by only 1.8 from neat TPU to TPU3 (with 2 wt% La LDH/GO loading), showing that La LDH/GO alone does not significantly increase the LOI of the TPU composites. These results suggest that La LDH/GO would perform better in the TPU matrix when used as a synergistic flame retardant [28].
Figure 6
LOI results of TPU and TPU composites.
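The percentage reductions quoted in this and the next subsection can be reproduced directly from the Table 2 values; the short check below recomputes the PHRR, PSPR, AvSEA, PCOY, and PCO2Y reductions relative to neat TPU.

```python
# Reproduce the percentage reductions relative to neat TPU from Table 2.
neat = {"PHRR": 1103.0, "PSPR": 0.100, "AvSEA": 159.6,
        "PCOY": 0.090, "PCO2Y": 8.802}
composites = {
    "TPU1": {"PHRR": 846.0, "PSPR": 0.061, "AvSEA": 142.5,
             "PCOY": 0.063, "PCO2Y": 8.537},
    "TPU2": {"PHRR": 769.0, "PSPR": 0.055, "AvSEA": 129.2,
             "PCOY": 0.043, "PCO2Y": 8.502},
    "TPU3": {"PHRR": 738.0, "PSPR": 0.049, "AvSEA": 106.0,
             "PCOY": 0.006, "PCO2Y": 0.172},
}

def reduction(ref: float, value: float) -> float:
    """Percentage decrease of `value` relative to the neat-TPU reference."""
    return (ref - value) / ref * 100.0

for name, vals in composites.items():
    line = ", ".join(f"{k}: -{reduction(neat[k], v):.1f}%" for k, v in vals.items())
    print(f"{name}: {line}")
# The TPU3 line gives PHRR -33.1%, PSPR -51.0%, AvSEA -33.6%, PCOY -93.3%,
# and PCO2Y -98.0%, matching the figures quoted in the text.
```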
### 3.3. Smoke Suppression of TPU Composites
The fire hazard of polymer materials is related not only to heat but also to smoke. Smoke parameters obtained with the cone calorimeter can likewise be used to evaluate the smoke suppression performance of materials [29]. Figure 7 shows the smoke production rate (SPR) curves of neat TPU and the TPU composites. Neat TPU has the highest peak smoke production rate (PSPR) of all samples, 0.1 m2/s, confirming that TPU produces heavy smoke during combustion. Compared with neat TPU, the PSPR reductions of TPU1, TPU2, and TPU3 are 39%, 45%, and 51%, respectively. The smoke suppression performance of TPU1 is attributed to the presence of MgAl LDH: on the one hand, the water vapor produced by the thermal decomposition of LDH can dilute and absorb part of the smoke; on the other hand, besides their large specific surface area, the MgAl LDH lamellae contain basic metal ions (such as Mg2+) and therefore adsorb acidic gases effectively [30]. For TPU2 containing 0.05 La LDH, the catalytic effect of rare earth lanthanum promotes the charring of TPU and the formation of a protective carbon layer, thus protecting the polymer matrix. When GO and 0.05 La LDH are incorporated into TPU together, char formation is further promoted by the added physical barrier effect of GO [31].
Figure 7
SPR curves of TPU and TPU composites.

Table 2 also lists several smoke parameters of neat TPU and the TPU composites. The total smoke production (TSP) values of all TPU composites decreased in comparison with neat TPU. Nevertheless, the TSP values of TPU2 and TPU3 are slightly higher than that of TPU1, which is mainly attributed to the prolongation of burnout time by 0.05 La LDH and GO; the SPR curves of the TPU composites after 250 s also reflect this. The peak CO yield (PCOY) and peak CO2 yield (PCO2Y) are also important parameters for characterizing the smoke emission behavior of materials [31]. As shown in Table 2, from neat TPU to TPU3 the PCOY and PCO2Y values decrease in turn. Compared with neat TPU, the reductions in PCOY and PCO2Y for TPU3 are 93% and 98%, respectively, which is ascribed to the adsorption of CO and CO2 by La LDH and GO. The specific extinction area (SEA) characterizes the relationship between volatile products and smoke release during combustion and correlates well with the smoke parameters of large-scale experiments. As shown in Table 2, the average specific extinction area (AvSEA) values of all TPU composites decreased to varying degrees compared with neat TPU; TPU3 has the lowest AvSEA of all samples, 106.0 m2/kg, a decrease of 33.6%. These results show that La LDH/GO has the better smoke suppression effect.
### 3.4. Char Residues Analysis of TPU Composites
The structure and morphology of the carbon layer also affect the flame retardancy of polymers [32]. To further explore the condensed-phase flame retardancy mechanism, the morphology and structure of the char residues left after the CCT were investigated by SEM and XRD. The SEM images of the char residues of TPU and the TPU composites after the CCT are presented in Figure 8. As Figure 8(a) shows, when neat TPU has burned out, the char residue is loose, porous, and fragile, indicating that TPU chars as a porous material prone to smoldering. For TPU1 containing MgAl LDH, shown in Figure 8(b), the number of holes on the surface of the char residue is reduced, but the char remains loose. Compared with neat TPU, the holes on the surface of the char residue from TPU2 (with 0.05 La LDH added) become shallow hollows and cracks are essentially absent; the main reason is that 0.05 La LDH catalyzes the pyrolysis of TPU and thus promotes its cross-linking. Notably, after the incorporation of 0.05 La LDH and GO into TPU, the char residue surface of TPU3 is compact, with no holes or cracks, showing that GO enhances the barrier effect of the carbon layer. Figure 9 shows the XRD patterns of the char residues of the TPU2 and TPU3 composites. The (002) diffraction peak of graphite crystallites appears near 25° for both TPU2 and TPU3, further indicating a graphitized structure in the carbon layer. However, the (002) peak of TPU3 is stronger than that of TPU2, meaning that the carbon layers formed by GO and 0.05 La LDH after combustion constitute an enhanced double-carbon-layer structure and thus play a more effective barrier role [33].
Figure 8
SEM images of char residues of (a) TPU, (b) TPU1, (c) TPU2, and (d) TPU3 after CCT.
Figure 9
XRD patterns of char residues of TPU2 and TPU3 composites.
### 3.5. Thermal Behavior of TPU Composites
Thermogravimetric analysis (TGA) is a thermal analysis technique that measures the relationship between sample mass and temperature under programmed temperature control, and it is used to investigate the thermal stability and composition of materials [34]. The TGA and derivative thermogravimetry (DTG) curves of neat TPU and the TPU composites in a nitrogen atmosphere are shown in Figure 10, and the detailed data are summarized in Table 3. As shown in Figures 10(a) and 10(b), Tonset (the temperature at which the mass loss of a sample reaches 5 wt%) and Tmax (the temperature at which the mass loss rate is fastest) decreased to different degrees in comparison with neat TPU. Meanwhile, the flame retardancy of polymers is closely related to their char yield during pyrolysis or combustion. As seen from Table 3, the char residues of TPU1, TPU2, and TPU3 at 800°C are 8.4%, 8.8%, and 9.7%, all higher than that of neat TPU, especially for TPU3 with La LDH/GO added. The improved thermal stability of the TPU matrix with La LDH/GO can be attributed not only to the catalytic effect of 0.05 La LDH on the formation of a protective carbon layer, but also to the high thermal conductivity and physical barrier effect of GO [35]. The heat conduction and coke blockage produce the so-called labyrinth effect: heat and combustion gases must follow a tortuous path to the fuel, which effectively prevents the spread of flame [31, 36].
Figure 10
TGA (a) and DTG (b) curves of TPU and TPU composites.
Table 3
TGA data of TPU and TPU composites in nitrogen atmosphere.
| Sample | Tonset (°C) | Tmax (°C) | Char yield (%) |
| --- | --- | --- | --- |
| TPU | 311.1 | 417.6 | 8.2 |
| TPU1 | 299.2 | 374.5 | 8.4 |
| TPU2 | 284.2 | 369.6 | 8.8 |
| TPU3 | 288.5 | 372.0 | 9.7 |
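As a brief arithmetic note on Table 3, the char-yield gains over neat TPU can be expressed both in absolute percentage points and as a relative increase:

```python
# Char-yield gains over neat TPU (Table 3), in percentage points and relative %.
neat_char = 8.2  # char yield of neat TPU at 800°C, from Table 3
for name, char in [("TPU1", 8.4), ("TPU2", 8.8), ("TPU3", 9.7)]:
    print(f"{name}: +{char - neat_char:.1f} points "
          f"({(char - neat_char) / neat_char * 100:.0f}% relative increase)")
# TPU3: +1.5 points (18% relative increase), the largest gain of the series.
```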
## 4. Conclusions
In summary, a La LDH/GO hybrid was synthesized by the hydrothermal method and characterized by XRD, FTIR, and TEM. The results showed that La3+ was doped into MgAl LDH and that the La LDH/GO hybrid was successfully prepared. TPU composites containing MgAl LDH, 0.05 La LDH, and La LDH/GO were then prepared by melt blending. Of all the TPU composites, TPU3 filled with 2 wt% La LDH/GO had the best flame retardancy and smoke suppression performance: compared with neat TPU, its PHRR and PSPR values decreased by 33.1% and 51%, respectively. Meanwhile, the char residue quality and char yield of TPU3 were also improved. The reduced fire hazard of TPU3 can be attributed to the interaction of 0.05 La LDH and GO. On the one hand, the flame retardancy of 0.05 La LDH stems from the combination of heat absorption, gas dilution, and char formation; on the other hand, the carbon layers formed by GO and 0.05 La LDH after combustion constitute an enhanced double-carbon-layer structure, playing a more effective barrier role.
---
*Source: 1018093-2020-02-13.xml*
## Abstract
A novel lanthanum-doped MgAl-layered double hydroxide/graphene oxide hybrid (La LDH/GO) with a La3+/Al3+ molar ratio of 0.05 was successfully synthesized by the hydrothermal method. The structure and morphology of as-prepared samples were characterized by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), and transmission electron microscopy (TEM). Then, La LDH/GO was added into thermoplastic polyurethane (TPU) to investigate its effect on flame retardancy, smoke suppression, and thermal stability of TPU composites. The cone calorimeter test (CCT) results indicated that the peak heat release rate (PHRR) and peak smoke production rate (PSPR) values of TPU with La LDH/GO decreased by 33.1% and 51% compared with neat TPU, respectively. Therefore, La LDH/GO can play a good role in flame retardancy and smoke suppression of TPU matrix during combustion. In the meantime, La LDH/GO could improve the char yield of TPU composites, which is attributed to the interaction between the physical barrier effect of GO and the catalytic effect of 0.05 La LDH.
---
## Body
## 1. Introduction
As a type of burgeoning and widely used material, the terminal products of polymer-based materials have covered many fields such as electronic, electrical appliance, textile, furniture, transportation, and building material [1]. Thermoplastic polyurethane (TPU), an engineering thermoplastic, has been increasingly applied owing to its good flexibility, high compressive strength, and good abrasion resistance. However, like most polymers, TPU is highly flammable, which significantly increases the potential fire hazard in the place where it is used [2]. At present, in order to reduce fire damage and improve the safe level of mankind existence environment, flame retardancy has become one of the important factors that people often consider when choosing TPU for different purposes [3]. Hence, numerous flame retardants as additives have been added into TPU to enhance flame retardancy. With the enhancing awareness of environmental protection, halogen-based flame retardants are gradually being reduced or even banned due to their ecological damage during combustion [4]. Some new type of halogen-free flame retardants, for instance, organic phosphorus compounds [5], carbon nanotubes (CNTs) [6], and polyhedral oligomeric silsesquioxane (POSS) [7], have become more and more attractive in recent decades.Layered double hydroxide (LDH), also known as hydrotalcite compounds (HT) or anionic clay, is a kind of layered compounds composed of positively charged metal hydroxide layers with interlayer spaces containing exchangeable anions. Thus, LDH has superior flame retardancy and smoke suppression properties because it has crystal-waters and hydroxyl groups (-OH) among layered structure [1, 8]. In recent years, there have been many studies of LDH in flame retardant polymers. Han et al. [9] synthesized sodium dodecyl benzene sulfonate (SDBS) intercalated CoAl, MgAl, NiAl, and ZnAl LDH as flame retardant for polystyrene (PS). The results showed that peak heat release rate (PHRR) of nanocomposites were reduced by 7% and 12% with 5 wt% MgAl-SDBS LDH and ZnAl-SDBS LDH loading, respectively. Zhang et al. [10] prepared phosphotungstic acid- (PWA-) intercalated MgAl LDH and investigated the effect of it on the intumescent flame retardant (IFR) poly(lactic acid) (PLA) composites. When the MgAl-PWA LDH loading was 2 wt%, the PHRR of PLA composites significantly decreased from 306.3 kW/m2 of neat PLA to 40.1 kW/m2. The limiting oxygen index (LOI) value reached 48 and passed the UL-94 V-0 rating.The controllability of the composition and structure of LDH makes it possible to intercalate different cations. And, the flame retardancy of LDH within polymers can be further improved [8]. Lanthanum (La), a representative element of rare earths (REs), could be served as rare earth thermal stabilizers due to more coordination numbers of rare earth ion [11]. Wen et al. [12] introduced La3+ into the ZnAl-CO3 LDH with different Zn/Al/La molar ratios to prepared ZnAlLa-CO3 LDH as heating stabilizer in poly(vinyl chloride) (PVC) resin. Their results showed that ZnAlLa-CO3 LDH could significantly enhance the thermal stability of PVC samples. For another rare earth element, cerium (Ce), Yi et al. [13] prepared MgAlCe-CO3 LDH and as stabilizer for PVC resin. 
They found that the PVC composite containing MgAlCe-CO3 LDH showed better thermal stability when the amount of MgAlCe-CO3 LDH filler is 3 phr.The favourable dispersion of LDH in the polymer matrix is a critical prerequisite for obtaining the excellent flame retardancy of polymer-based materials. However, owing to the strong electrostatic interaction between the hydroxide layers, LDH is apt to agglomerate, which limits its flame retardant performance in polymers [14]. Graphene oxide (GO), with an ideal two-dimensional structure and large specific surface area, could solve the reaggregation of LDH in polymer matrix as a carrier [15]. For instance, Xu et al. [16] synthesized heptaheptamolybdate (Mo7O246−) intercalated MgAl LDH loaded graphene hybrids and investigated their flame retardant properties in polyurethane elastomer (PUE). With 2 wt% RGO-LDH/Mo loading, the PHRR of PUE composites decreased by 58.6%. Simultaneously, the TEM results showed that the RGO-LDH and RGO-LDH/Mo had no obvious agglomeration in PUE. Nevertheless, the research of REs doped LDH/GO hybrid as flame retardants was still rarely reported.In this paper, La-doped MgAl LDH was obtained via hydrothermal synthesis. Afterwards, La LDH and GO were hybridized to synthesize La LDH/GO hybrid, i.e., La LDH sheets were loaded on GO layers. Concurrently, the structure and morphology of the as-prepared samples were characterized. Then the TPU composites filled with LDH and LDH/GO were prepared by the melt blending method. Meanwhile, the flame retardancy and smoke suppression of all TPU composites were comprehensively analyzed further.
## 2. Experimental
### 2.1. Materials
Sulfuric acid (98%), hydrogen peroxide (30%), nitric acid (68%), aqueous ammonia (25%), graphite powder, Al(NO3)3·9H2O, Mg(NO3)2·6H2O, KMnO4, and NaNO3 were all purchased from Sinopharm Chemical Reagent Co., Ltd. (China). La2O3 was bought from Shanghai Aladdin Bio-Chem Technology Co., Ltd. (China). La(NO3)3 solution was prepared through dissolving La2O3 in dilute nitric acid. Commercial TPU (9380A) was obtained from Bayer, German.
### 2.2. Synthesis of La LDH
The as-prepared La LDH samples were synthesized by using precipitation and hydrothermal methods. 0.03 mol Mg(NO3)2·6H2O and 0.012 mol Al(NO3)3·9H2O were dissolved in 60 ml deionized water, and then La(NO3)3 solution was added into the above mixture and the La3+/Al3+ molar ratio was varied at 0.02, 0.05, and 0.1 for comparison. Concurrently, the total mole amount of La3+ and Al3+ was maintained at 0.012 mol. After that, pH of the mixture was adjusted to 10 by adding dilute ammonia aqueous solution (5%) dropwise. The mixture was heated at 65°C for 30 min with rapid stirring. Then, the resulting suspension was transferred to a 100 mL Teflon-lined autoclave, and it was kept under 130°C for 12 h. After autoclave was cooled to room temperature, the resulting precipitates were filtered, washed several times with deionized water, and dried at 60°C for 24 h.
### 2.3. Synthesis of La LDH/GO Hybrid
GO was synthesized from graphite powder by using the Hummers method [17]. In addition, La LDH/GO hybrid was prepared at the similar experimental conditions, except that GO solution was added to above mixture containing Mg2+, La3+, and Al3+ with a La3+/Al3+ molar ratio of 0.5.
### 2.4. Synthesis of TPU Composites
TPU composites were prepared by the melt blending method. For example, a certain amount of TPU was put in the internal mixer under 180°C for 3 min, and the rate of agitation was 30 rpm. Then, La LDH/GO was added to the mixer, and stirred constantly at the same temperature for 10 min. Finally, the TPU composite contains La LDH/GO was hot-pressed for 10 min at 180°C and 10 MPa to form sheet with the size of 100 × 100 × 3 mm3. Moreover, the TPU composites containing MgAl LDH and La LDH were prepared under the same conditions, respectively. The formulas of all TPU composites are displayed in Table 1.Table 1
Formulas of TPU and TPU composites.
Sample
TPU (wt%)
MgAl LDH (wt%)
0.05 La LDH (wt%)
La LDH/GO (wt%)
TPU
100
0
0
0
TPU1
98
2
0
0
TPU2
98
0
2
0
TPU3
98
0
0
2
### 2.5. Characterization
X-ray diffraction (XRD) measurements were taken by using a Rigaku X-ray diffractometer (Japan) with Cu-Kα tube and Ni filter (λ = 0.1542 nm). Fourier transform infrared spectroscopy (FTIR) studies were recorded on a Nicolet 6700 FTIR spectrophotometer (USA) with KBr pellet technique. Scanning electron microscopy (SEM) measurements were performed by using a JSM-6700F instrument (Japan). Transmission electron microscope-energy dispersive spectrometer (TEM-EDS) measurements were taken by a JEM-2100Plus instrument (Japan) with an acceleration voltage of 200 kV. Cone calorimeter test (CCT) was undertaken with a JCZ-2 cone calorimeter (China) according to ISO 5660 standard procedures. Specimens with the size of 100 × 100 × 3 mm3 were irradiated under a heat flux of 50 kW/m2. Limiting oxygen index (LOI) measurements were carried out with an HC-2 oxygen index meter (China) according to ASTM D2863. The size of the specimens used for the test was 100 × 6.5 × 3 mm3. Thermalgravimetric analysis (TGA) was carried out on a DT-50 instrument (France). The samples were heated from room temperature to 800°C. The heating rates were set as 20°C/min (nitrogen atmosphere, flow rate of 20 mL/min).
## 2.1. Materials
Sulfuric acid (98%), hydrogen peroxide (30%), nitric acid (68%), aqueous ammonia (25%), graphite powder, Al(NO3)3·9H2O, Mg(NO3)2·6H2O, KMnO4, and NaNO3 were all purchased from Sinopharm Chemical Reagent Co., Ltd. (China). La2O3 was bought from Shanghai Aladdin Bio-Chem Technology Co., Ltd. (China). La(NO3)3 solution was prepared through dissolving La2O3 in dilute nitric acid. Commercial TPU (9380A) was obtained from Bayer, German.
## 2.2. Synthesis of La LDH
The as-prepared La LDH samples were synthesized by using precipitation and hydrothermal methods. 0.03 mol Mg(NO3)2·6H2O and 0.012 mol Al(NO3)3·9H2O were dissolved in 60 ml deionized water, and then La(NO3)3 solution was added into the above mixture and the La3+/Al3+ molar ratio was varied at 0.02, 0.05, and 0.1 for comparison. Concurrently, the total mole amount of La3+ and Al3+ was maintained at 0.012 mol. After that, pH of the mixture was adjusted to 10 by adding dilute ammonia aqueous solution (5%) dropwise. The mixture was heated at 65°C for 30 min with rapid stirring. Then, the resulting suspension was transferred to a 100 mL Teflon-lined autoclave, and it was kept under 130°C for 12 h. After autoclave was cooled to room temperature, the resulting precipitates were filtered, washed several times with deionized water, and dried at 60°C for 24 h.
## 3. Results and Discussion
### 3.1. Characterization of As-Prepared Samples
XRD can be used to determine the crystal structure of materials. The XRD spectra of La LDH with different La3+/Al3+ molar ratios are shown in Figure 1. As can be seen from the figure, the diffraction peaks of MgAl LDH at 2θ = 9.9°, 20.0°, 34.6°, 37.6°, 42.7°, 60.8°, 61.8°, and 64.7° correspond to the (003), (006), (012), (015), (018), (110), (113), and (116) planes of the hydrotalcite structure, respectively [18]. The interlayer spacing of MgAl LDH is 0.89 nm from the (003) plane, showing the intercalation of NO3− into the interlayer gallery [19]. In addition, the interlayer spacing of all La LDH remains unchanged after doping La3+ into MgAl LDH. With increasing La3+ content on the MgAl LDH laminates, the intensities of the three peaks between 30° and 50° are weakened, mainly because the ionic radius of La3+ is too large, which destroys the hexagonal structure of MgAl LDH. It is noteworthy that when the La3+/Al3+ molar ratio is 0.1, an impurity phase appears; the peaks for the (100), (110), (101), (201), and (211) planes of La(OH)3 can be observed at 2θ = 15.7°, 27.4°, 28.0°, 39.5°, and 48.8° (JCPDS card no. 83-2034) [12].

Figure 1: XRD spectra of MgAl LDH, 0.02 La LDH, 0.05 La LDH, and 0.1 La LDH.

Based on the above research on La LDH with diverse La3+/Al3+ molar ratios, 0.05 La LDH, which shows no new phase, and GO were selected to synthesize the La LDH/GO hybrid. Figure 2 displays the XRD spectra of GO, 0.05 La LDH, and La LDH/GO. The spectrum of GO has a strong diffraction peak at 2θ = 11.48°, corresponding to the (002) plane; the interlayer spacing is 0.77 nm according to Bragg's equation [20]. La LDH/GO and 0.05 La LDH have the same diffraction peaks, but the peak intensities of La LDH/GO are weaker. Furthermore, the diffraction peak of GO disappears, indicating that 0.05 La LDH is well dispersed on the GO layers.
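The interlayer spacings quoted above follow from first-order Bragg's law, d = λ/(2 sin θ), using the Cu-Kα wavelength given in Section 2.5. A minimal Python sketch reproducing the reported values (not part of the original study):

```python
import math

CU_K_ALPHA_NM = 0.1542  # X-ray wavelength from Section 2.5

def d_spacing(two_theta_deg: float, wavelength_nm: float = CU_K_ALPHA_NM) -> float:
    """First-order Bragg's law: d = lambda / (2 * sin(theta))."""
    theta = math.radians(two_theta_deg / 2)  # the diffractometer reports 2-theta
    return wavelength_nm / (2 * math.sin(theta))

print(f"GO (002), 2theta = 11.48 deg: d = {d_spacing(11.48):.2f} nm")    # 0.77 nm
print(f"MgAl LDH (003), 2theta = 9.9 deg: d = {d_spacing(9.9):.2f} nm")  # 0.89 nm
```

Both values match the spacings reported for the (002) and (003) reflections.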
Figure 2: XRD spectra of GO, 0.05 La LDH, and La LDH/GO.

FTIR is used to obtain information about the chemical bonds or functional groups contained in materials. The FTIR spectra of GO, MgAl LDH, 0.05 La LDH, and La LDH/GO are presented in Figure 3. The characteristic peaks of GO at 3395, 1721, 1402, and 1054 cm−1 are attributed to the stretching vibrations of O-H, C=O, epoxy C-O, and alkoxy C-O, respectively. In addition, the absorption peak of GO at 1617 cm−1 corresponds to the deformation vibration of adsorbed water [21]. The spectra of MgAl LDH, 0.05 La LDH, and La LDH/GO all show similar trends. For instance, the characteristic bands at 3450 and 1637 cm−1 are ascribed to the vibrations of O-H and water molecules, respectively. The absorption peaks at 1377 and 826 cm−1 are assigned to the ν3 and ν2 vibrations of NO3− as the interlayer anion [19, 22]. Furthermore, the characteristic bands below 700 cm−1 are attributed to the lattice vibrations of Al-O and Mg-O in the LDH [9, 23]. More significantly, compared with MgAl LDH, the Al-O absorption peak shifts by 11 wavenumbers (from 666 cm−1 to 655 cm−1) in the FTIR spectra of 0.05 La LDH and La LDH/GO, and the Al-O peaks of 0.05 La LDH and La LDH/GO at 555 cm−1 disappear. This is mainly because La3+ partially replaces Al3+ and distorts the lattice structure of the LDH [12, 13].
Figure 3: FTIR spectra of GO, MgAl LDH, 0.05 La LDH, and La LDH/GO.

The morphology and internal structure of GO and La LDH/GO can be observed by TEM. As can be seen from Figure 4(a), GO has a two-dimensional layered structure with a size of several hundred nanometers. In some areas, the GO layers fold onto each other, showing different degrees of restacking, largely due to van der Waals forces between the GO layers. As shown in Figure 4(b), the lateral size of the 0.05 La LDH platelets is around 50–100 nm, and many 0.05 La LDH sheets appear on the GO layers, so the folded areas are significantly reduced [22]. Owing to the successful loading of LDH on the GO layers, the restacking of GO sheets is effectively inhibited. Furthermore, the elements C, O, N, Mg, Al, and La can be observed in the EDS spectrum of La LDH/GO (Figure 4(c)). The Mg/Al/La molar ratio of 2.4/1/0.05 agrees with the theoretical value, indicating that the La LDH/GO hybrid was successfully synthesized.

Figure 4: TEM images of (a) GO, (b) La LDH/GO, and (c) EDS analysis of La LDH/GO.
### 3.2. Flame Retardancy of TPU Composites
The numerous parameters related to the potential fire hazard of materials can be obtained by cone calorimetry, the most suitable test for investigating the combustion performance of materials during a fire. The heat release rate (HRR) is the most important fire characteristic parameter of a material [24, 25]. Figure 5 gives the HRR curves of neat TPU and the TPU composites. It can be seen that neat TPU has a high peak heat release rate (PHRR), with a value of 1103 kW/m2, which indicates that it is highly flammable and belongs to the intermediate-thickness non-charring samples. For TPU1 containing MgAl LDH, the PHRR value decreased by 23.3% to 846 kW/m2 compared with neat TPU. This can be explained by the fact that MgAl LDH absorbs heat during thermal decomposition, reducing the temperature at the surface of the TPU and decreasing the thermal decomposition and combustion rate of the polymer. Meanwhile, MgAl LDH can form a protective carbon layer on the degradation products, which hinders heat and gas transfer [26]. Furthermore, the PHRR value of TPU2 decreased by 30.3% in comparison with neat TPU; this decline is mainly due to the further improvement of the flame retardancy of MgAl LDH by the introduction of rare earth lanthanum. Among all the samples, TPU3 has the lowest PHRR value, 33.1% lower than that of neat TPU, reflecting the physical barrier effect of the GO sheets [27]. It is worth noting that the time taken to reach the peak is shorter for all TPU composites than for neat TPU, which is attributed to the decomposition of LDH at low temperature.
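The percentage reductions quoted here, and the PSPR reductions discussed in Section 3.3, are plain relative decreases against the neat-TPU baseline in Table 2. A short sketch that reproduces them (values transcribed from Table 2):

```python
def pct_reduction(baseline: float, value: float) -> float:
    """Percentage decrease of value relative to baseline."""
    return 100 * (baseline - value) / baseline

# PHRR (kW/m2) and PSPR (m2/s) values from Table 2
phrr = {"TPU": 1103, "TPU1": 846, "TPU2": 769, "TPU3": 738}
pspr = {"TPU": 0.100, "TPU1": 0.061, "TPU2": 0.055, "TPU3": 0.049}

for name in ("TPU1", "TPU2", "TPU3"):
    print(f"{name}: PHRR -{pct_reduction(phrr['TPU'], phrr[name]):.1f}%, "
          f"PSPR -{pct_reduction(pspr['TPU'], pspr[name]):.0f}%")
# TPU1: -23.3% / -39%, TPU2: -30.3% / -45%, TPU3: -33.1% / -51%
```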
Figure 5: HRR curves of TPU and TPU composites.

The cone calorimeter data of neat TPU and the TPU composites are displayed in Table 2. From TPU1 to TPU3, the total heat release (THR) values are basically unchanged, indicating that the total heat released by combustion is essentially constant. However, the average mass loss rate (AvMLR) values decreased, showing that 0.05 La LDH and GO impart flame retardancy through a charring mechanism in the TPU composites. In the meantime, the average heat release rate (AvHRR) values of TPU1, TPU2, and TPU3 decreased in turn, and the respective HRR curves became progressively flatter after reaching the peak value. Combined with the results of Figure 5, this shows that all TPU composites belong to the thermally thick charring (residue-forming) samples [4]. In addition, the average effective heat of combustion (AvEHC), THR, AvHRR, and AvMLR values of TPU1 are all slightly higher than those of neat TPU, as listed in Table 2, which indicates that the flame retardant effect of a low MgAl LDH content on TPU is not significant.
Table 2: Cone calorimeter data of TPU and TPU composites.

| Sample | PHRR (kW/m2) | THR (MJ/m2) | AvHRR (kW/m2) | AvEHC (MJ/kg) | AvMLR (g/s) | PSPR (m2/s) | TSP (m2) | AvSEA (m2/kg) | PCOY (kg/kg) | PCO2Y (kg/kg) |
|---|---|---|---|---|---|---|---|---|---|---|
| TPU | 1103 | 139.1 | 261.2 | 15.6 | 0.089 | 0.100 | 13.22 | 159.6 | 0.090 | 8.802 |
| TPU1 | 846 | 139.6 | 283.8 | 16.3 | 0.092 | 0.061 | 10.62 | 142.5 | 0.063 | 8.537 |
| TPU2 | 769 | 140.3 | 237.2 | 15.8 | 0.078 | 0.055 | 11.06 | 129.2 | 0.043 | 8.502 |
| TPU3 | 738 | 140.0 | 209.8 | 13.3 | 0.068 | 0.049 | 10.76 | 106.0 | 0.006 | 0.172 |

In order to further analyze the effect of the La LDH/GO hybrid on the flame retardancy of TPU, an oxygen index meter was used to obtain the LOI value. Figure 6 presents the LOI values of neat TPU and the TPU composites. As can be seen from the figure, the LOI value of neat TPU is 21.4%, while the LOI values of TPU1, TPU2, and TPU3 are 21.8%, 22%, and 23.2%, respectively. The LOI increased by only 1.8 percentage points from neat TPU to TPU3 (with 2 wt% La LDH/GO loading), showing that La LDH/GO alone does not significantly increase the LOI of the TPU composites. These results therefore suggest that La LDH/GO would perform better on the TPU matrix when used as a synergistic flame retardant [28].
Figure 6: LOI results of TPU and TPU composites.
### 3.3. Smoke Suppression of TPU Composites
The fire hazard of polymer materials is related not only to heat but also to smoke. The smoke parameters obtained by cone calorimetry can also be used to evaluate the smoke suppression performance of materials [29]. Figure 7 shows the smoke production rate (SPR) curves of neat TPU and the TPU composites. As can be seen from Figure 7, neat TPU has the highest peak smoke production rate (PSPR) of all samples, 0.1 m2/s, confirming that TPU produces heavy smoke during combustion. Compared with neat TPU, the PSPR reductions of TPU1, TPU2, and TPU3 are 39%, 45%, and 51%, respectively. The smoke suppression performance of TPU1 is attributed to the presence of MgAl LDH: on the one hand, the water vapor produced by thermal decomposition of the LDH can dilute and absorb part of the smoke; on the other hand, the MgAl LDH lamellae contain basic metal ions (such as Mg2+) in addition to having a large specific surface area, so they adsorb acidic gases well [30]. For TPU2 containing 0.05 La LDH, the catalytic effect of rare earth lanthanum promotes the charring of TPU and the formation of a protective carbon layer, thus protecting the polymer matrix. When GO and 0.05 La LDH are incorporated into TPU together, char formation is further promoted in combination with the physical barrier effect of GO [31].
Figure 7: SPR curves of TPU and TPU composites.

Table 2 also lists several smoke parameters of neat TPU and the TPU composites. It can be observed that the total smoke production (TSP) values of all TPU composites decreased in comparison with neat TPU. Nevertheless, the TSP values of TPU2 and TPU3 are slightly higher than that of TPU1, which is mainly attributed to the prolongation of the burnout time by 0.05 La LDH and GO; this is also visible in the SPR curves of the TPU composites after 250 s. The peak CO yield (PCOY) and peak CO2 yield (PCO2Y) are also important parameters for characterizing the smoke emission behavior of materials [31]. As shown in Table 2, from neat TPU to TPU3, the PCOY and PCO2Y values are reduced in turn. Compared with neat TPU, the reductions in PCOY and PCO2Y for TPU3 are 93% and 98%, respectively, which is ascribed to the adsorption of CO and CO2 by La LDH and GO. The specific extinction area (SEA) characterizes the relationship between volatile products and smoke release during the combustion of materials, and it correlates well with the smoke parameters measured in large-scale experiments. As revealed in Table 2, compared with neat TPU, the average specific extinction area (AvSEA) values of all TPU composites decreased by varying degrees. Moreover, TPU3 has the lowest AvSEA value of all samples, 106.0 m2/kg, a decrease of 33.6%. These results show that La LDH/GO has the better smoke suppression effect.
### 3.4. Char Residues Analysis of TPU Composites
The structure and morphology of the carbon layer also affect the flame retardancy of polymers [32]. In order to further explore the condensed-phase flame retardancy mechanism, the morphology and structure of the char residues left after CCT were investigated by SEM and XRD. The SEM images of the char residues of TPU and the TPU composites after CCT are presented in Figure 8. As Figure 8(a) shows, when neat TPU is burned out, the char residue is loose, porous, and fragile, indicating that the char is porous and the material is prone to smoldering. For TPU1 containing MgAl LDH, shown in Figure 8(b), the number of holes on the surface of the char residue is reduced, but the char residue is still loose. Compared with neat TPU, the holes on the surface of the char residue from TPU2 (with 0.05 La LDH added) become shallow hollows, and cracks are essentially absent. The main reason for this phenomenon is that 0.05 La LDH plays a catalytic role in the pyrolysis of TPU, thus promoting the cross-linking of TPU. Noticeably, after the incorporation of 0.05 La LDH and GO into TPU, the surface of the char residue of TPU3 is compact, with no holes or cracks, showing that GO can enhance the barrier effect of the carbon layer. Figure 9 shows the XRD patterns of the char residues of the TPU2 and TPU3 composites. The (002) diffraction peak characteristic of graphite crystallites appears near 25° for both TPU2 and TPU3, which further indicates the existence of a graphitized structure in the carbon layer. However, the intensity of the (002) diffraction peak of TPU3 is stronger than that of TPU2, which means that the carbon layers formed by GO and 0.05 La LDH after combustion constitute an enhanced double-carbon-layer structure, thus playing a more effective barrier role [33].
Figure 8: SEM images of char residues of (a) TPU, (b) TPU1, (c) TPU2, and (d) TPU3 after CCT.

Figure 9: XRD patterns of char residues of TPU2 and TPU3 composites.
### 3.5. Thermal Behavior of TPU Composites
Thermogravimetric analysis (TGA) is a thermal analysis technique that measures the relationship between sample mass and temperature under programmed temperature control; it is used to investigate the thermal stability and composition of materials [34]. The TGA and derivative thermogravimetry (DTG) curves of neat TPU and the TPU composites in nitrogen atmosphere are shown in Figure 10, and the detailed data are summarized in Table 3. As shown in Figures 10(a) and 10(b), Tonset (defined as the temperature at which the mass loss of the sample reaches 5 wt%) and Tmax (defined as the temperature at which the mass loss rate of the sample is fastest) of the composites decreased to different degrees in comparison with neat TPU. Meanwhile, the flame retardancy of polymers is closely related to their char yield during pyrolysis or combustion. It can be seen from Table 3 that the char residues of TPU1, TPU2, and TPU3 at 800°C are 8.4%, 8.8%, and 9.7%, respectively, all higher than that of neat TPU, especially for TPU3, to which La LDH/GO was added. The improvement in the thermal stability of the TPU matrix by La LDH/GO can be attributed not only to the catalytic effect of 0.05 La LDH on the formation of a protective carbon layer, but also to the high thermal conductivity and physical barrier effect of GO [35]. The heat conduction and coke blockage produce the so-called labyrinth effect, forcing heat and combustion gases to follow a tortuous path to the fuel, which effectively prevents the spread of flame [31, 36].
Figure 10: TGA (a) and DTG (b) curves of TPU and TPU composites.

Table 3: TGA data of TPU and TPU composites in nitrogen atmosphere.

| Sample | Tonset (°C) | Tmax (°C) | Char yield (%) |
|---|---|---|---|
| TPU | 311.1 | 417.6 | 8.2 |
| TPU1 | 299.2 | 374.5 | 8.4 |
| TPU2 | 284.2 | 369.6 | 8.8 |
| TPU3 | 288.5 | 372.0 | 9.7 |
## 4. Conclusions
To sum up, a La LDH/GO hybrid was synthesized by the hydrothermal method and characterized using XRD, FTIR, and TEM. The results showed that La3+ had been doped into MgAl LDH and that the La LDH/GO hybrid was successfully prepared. Afterwards, TPU composites containing MgAl LDH, 0.05 La LDH, and La LDH/GO were each prepared through melt blending. Of all the TPU composites, TPU3, filled with 2 wt% La LDH/GO, had the best flame retardancy and smoke suppression performance. Compared with neat TPU, the PHRR and PSPR values of TPU3 decreased by 33.1% and 51%, respectively. Meanwhile, the char residue quality and char yield of TPU3 were also further improved. The reduced fire hazard of TPU3 can be attributed to the interaction of 0.05 La LDH and GO. On the one hand, the flame retardancy of 0.05 La LDH stems from the combination of heat absorption, gas dilution, and char formation; on the other hand, the carbon layers formed by GO and 0.05 La LDH after combustion constitute an enhanced double-carbon-layer structure, thus playing a more effective barrier role.
---
*Source: 1018093-2020-02-13.xml* | 2020 |
# Evaluation of Subchronic Toxicity and Genotoxicity of Ethanolic Extract ofAster glehni Leaves and Stems
**Authors:** Mi Kyung Lim; Ju Yeon Kim; Jeongho Jeong; Eun Hye Han; Sang Ho Lee; Soyeon Lee; Sun-Don Kim; Jinu Lee
**Journal:** Evidence-Based Complementary and Alternative Medicine
(2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1018101
---
## Abstract
Aster glehni, a traditional plant of Ulleung Island in the Republic of Korea, has been recognized for its multiple medicinal properties. However, the potential toxicity and safety of A. glehni have not been previously investigated. Therefore, this study aimed to evaluate the safety profile of an ethanolic extract of A. glehni leaves and stems (EAG) in terms of genotoxicity and subchronic oral animal toxicity under OECD guidelines and GLP conditions. Toxicological assessments were performed at doses of 1,250, 2,500, and 5,000 mg/kg/day in a 13-week oral repeated-dose toxicity study of EAG in male and female SD rats. In addition, an Ames test, an in vitro mammalian chromosomal aberration test, and a micronucleus test were performed. No toxicological changes in clinical signs, body weights, water and food consumption, urinalysis, hematology, clinical biochemistry, gross findings, or histopathological examinations were observed in the subchronic oral toxicity study. In addition, EAG gave negative results in the in vitro and in vivo genotoxicity tests. In conclusion, the no-observed-adverse-effect level (NOAEL) of EAG was considered to be 5,000 mg/kg/day, and no target organs were identified in either sex of rats. EAG was also classified as nonmutagenic and nonclastogenic in genotoxicity testing. Collectively, these results show a lack of general toxicity and genotoxicity for EAG, supporting clinical work for its development as a herbal medicine.
---
## Body
## 1. Introduction
Medicinal plants have traditionally been used as therapeutic agents, but recently they have increasingly been seen as substitutes for chemical agents that carry side effects or drug resistance [1, 2]. Herbal medicines derived from medicinal plants often have anti-oxidant, anti-microbial, and anti-inflammatory properties, so they may provide potential options for the treatment of diseases, such as COVID-19, that as yet have no approved drug [3–7]. Medicinal plants are utilized for the treatment of various diseases based on their unique biological properties, such as anti-cancer, thrombolytic, and gastrointestinal function control, as well as for the improvement of neurological diseases through anti-nociceptive, anti-depressant, and anxiolytic activity [1, 2, 8–15]. However, some medicinal plants have been reported to cause toxicity, such as hepatotoxicity and renal toxicity, at high doses and with long-term use [5, 6, 16]. Therefore, it is essential to evaluate their toxicity profiles for human safety.

Aster glehni Fr. Schm., widely distributed on Ulleung Island, Republic of Korea, is known to be a traditional edible herb. The Korean traditional medical encyclopedia known as Dongui Bogam describes A. glehni as having anti-pyretic and analgesic effects and as suppressing phlegm and coughing [17]. A. glehni has been used for the treatment of a variety of diseases including diabetes mellitus, hypercholesterolemia, and cardiovascular disease [18]. In addition, it has been reported that ethanolic extract of A. glehni shows anti-adipogenic, hypouricemic, and anti-inflammatory effects [18–20].

A. glehni contains caffeoylquinic acid (CQ) derivatives such as 3,5-di-O-caffeoylquinic acid (3,5-DCQA), 5-O-caffeoylquinic acid, 3-O-caffeoylquinic acid, and 3-O-p-coumaroylquinic acid, and flavonoids such as astragalin and kaempferol [18, 21]. According to recent research, ethanolic extract of A. glehni (EAG) and 3,5-DCQA have ameliorating effects on memory impairment caused by scopolamine in male ICR mice [22, 23]. It has also been reported that 3,5-DCQA inhibits the activity of acetylcholinesterase (AChE) and amyloid-beta (Aβ) induced cytotoxicity in SH-SY5Y neuroblastoma cells [23–25]. These results suggest that 3,5-DCQA might play an important role in the ameliorative effects of EAG on memory dysfunction.

Although the effects of EAG have generally been perceived to be therapeutic, to date, adverse effects of EAG use in humans have not been reported. Thus, the present study of EAG was designed under the Regulation on Approval and Notification of Herbal (crude) Medicinal Preparation, Etc. of the Korea Ministry of Food and Drug Safety (MFDS) [26] to provide safety information for a subsequent clinical trial. The toxicity studies of EAG were conducted as 2-week and 13-week repeated-dose oral toxicity tests in SD rats and genotoxicity tests, following the Good Laboratory Practice regulations of the Organization for Economic Cooperation and Development [27] and the MFDS [28].
## 2. Materials and Methods
### 2.1. Preparation of Ethanolic Extract of Aerial Parts of A. glehni
The aerial parts (leaves and stems) of A. glehni were collected from Ulleung Island and dried naturally. The EAG was prepared by the method previously described [23]. Briefly, the finely chopped sample was extracted in a 15-fold mass of 70% ethanol. The first extract was collected; then a second extract was obtained in a 10-fold mass of 70% ethanol. After mixing with diatomite, the extract was filtered and concentrated to 10–20 Brix. After adding an equal amount of dextrin to the concentrates, the mixtures were sterilized at 95°C for 30 min. The sterilized samples were spray-dried and filtered through a 60-mesh sieve to obtain a solid extract powder (Specimen Voucher No. AG-D022). For quality assurance, the final A. glehni extract was standardized by 3,5-dicaffeoylquinic acid (3,5-DCQA) based on high-performance liquid chromatography (HPLC) at 330 nm. The content of the marker compound (3,5-DCQA) in the EAG was 2.37 mg/g. The results of the amino acid composition analysis are shown in Table 1. The total protein content in EAG was 1,759.84 mg/100 g. Proline (35.1%), aspartic acid (21.6%), and glutamic acid (11.2%) were the main amino acids in EAG.

Table 1: Amino acid composition of EAG (mg/100 g).

| Amino acid | AG | EAG |
|---|---|---|
| Tyr (tyrosine) | ND | ND |
| Gly (glycine) | 225.0 | 53.9 |
| Ser (serine) | 183.4 | 78.3 |
| Ala (alanine) | 161.9 | 67.9 |
| Glu (glutamic acid) | 355.0 | 197.6 |
| Lys (lysine) | 47.1 | 7.1 |
| Leu (leucine) | 193.7 | 84.6 |
| Met (methionine) | 19.3 | ND |
| Val (valine) | 141.1 | 41.3 |
| Arg (arginine) | 53.6 | 26.7 |
| Asp (aspartic acid) | 292.6 | 380.1 |
| Ile (isoleucine) | 117.2 | 46.4 |
| Thr (threonine) | 154.6 | 60.4 |
| Phe (phenylalanine) | 146.2 | 43.0 |
| Pro (proline) | 203.7 | 616.9 |
| His (histidine) | 39.5 | 16.7 |
| Cys (cysteine) | 96.9 | 28.2 |
| Trp (tryptophan) | 13.0 | 10.9 |

ND, not detected; AG, ethanolic leaf extract of Aster glehni; EAG, ethanolic leaf and stem extract of Aster glehni.
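As a consistency check, the major-amino-acid percentages quoted above can be recomputed from the EAG column of Table 1; a minimal Python sketch (not part of the original analysis):

```python
# EAG amino acid contents (mg/100 g) transcribed from Table 1; ND entries omitted
eag = {"Gly": 53.9, "Ser": 78.3, "Ala": 67.9, "Glu": 197.6, "Lys": 7.1,
       "Leu": 84.6, "Val": 41.3, "Arg": 26.7, "Asp": 380.1, "Ile": 46.4,
       "Thr": 60.4, "Phe": 43.0, "Pro": 616.9, "His": 16.7, "Cys": 28.2,
       "Trp": 10.9}

total = sum(eag.values())
print(f"total: {total:.1f} mg/100 g")  # ~1760.0, close to the reported 1,759.84

for aa in ("Pro", "Asp", "Glu"):
    print(f"{aa}: {100 * eag[aa] / total:.1f}%")  # 35.1%, 21.6%, 11.2%, as in the text
```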
### 2.2. Experimental Animals and Maintenance
Specific pathogen-free (SPF) SD rats were obtained from Orient Bio Inc. (Seongnam, Korea). The animals were maintained in a facility with controlled temperature (23 ± 3°C), relative humidity (55 ± 15%), and ventilation (10–20 air changes/hour) at Chemon Inc. in accordance with the Guide for the Care and Use of Laboratory Animals, 8th edition [29]. Food and water were provided ad libitum, with a 12-hour light:12-hour dark cycle. All procedures and protocols were reviewed and approved by the Institutional Animal Care and Use Committee (IACUC) of Chemon Inc. and performed in accordance with the guidelines published by the Organization for Economic Cooperation and Development (OECD) as well as the Good Laboratory Practice (GLP) regulations for Nonclinical Laboratory Studies of the Ministry of Food and Drug Safety (MFDS) of the Republic of Korea [27, 28, 30, 31].
### 2.3. Thirteen-Week Repeated Oral Toxicity Study
For the 13-week repeated-dose toxicity study, conducted in accordance with OECD Guideline 408 [32] under GLP regulations, healthy 6-week-old male and female SD rats weighing 186.56 ± 8.70 g and 144.41 ± 7.63 g, respectively, were randomly assigned to 4 groups (10/sex/group). Vehicle (distilled water for injection) or graded doses of EAG (1,250, 2,500, and 5,000 mg/kg body weight) were administered to the rats by oral gavage once daily for 13 weeks at a dosing volume of 10 mL/kg body weight, after completion of a 14-day repeated oral toxicity dose range finding (DRF) study in which no adverse finding was seen at doses up to 5,000 mg/kg/day. The high dose was selected according to the results of an acute toxicity study in which no significant test article-related changes in mortality or clinical signs were observed at 5,000 mg/kg/day (data not shown). The rats were observed daily for clinical signs, including mortality, general appearance, and behavioral abnormalities, until terminal sacrifice. Body weights and food/water consumption were recorded weekly throughout the study. An ophthalmological examination was conducted in the last week of observation, and the anterior parts of the eye, optic media, and fundus were examined with a fundus camera (Vantage Plus Digital LED; Keeler Instruments Inc., Malvern, PA, USA). At study termination, all rats were euthanized by isoflurane (2% to 5%) inhalation for blood sample collection.
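Because the dosing volume is fixed at 10 mL/kg, each dose level corresponds to a single formulation concentration. A small sketch of this arithmetic; the 250 g animal is a hypothetical example, not a study value:

```python
DOSING_VOLUME_ML_PER_KG = 10  # fixed oral gavage volume

def formulation_conc_mg_per_ml(dose_mg_per_kg: float) -> float:
    """Concentration at which 10 mL/kg delivers the target dose."""
    return dose_mg_per_kg / DOSING_VOLUME_ML_PER_KG

for dose in (1250, 2500, 5000):  # mg/kg/day dose groups
    print(f"{dose} mg/kg/day -> {formulation_conc_mg_per_ml(dose):.0f} mg/mL")

# A hypothetical 250 g rat in the high-dose group would receive
# 0.25 kg * 10 mL/kg = 2.5 mL of the 500 mg/mL formulation per day.
```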
### 2.4. Urinalysis, Hematology, and Clinical Biochemistry
Urinalysis, hematological analyses, and serum biochemistry analyses were conducted as described previously [31].
### 2.5. Gross Findings, Organ Weights, and Histopathological Examinations
At necropsy, both macroscopic and microscopic features of the internal organs were analyzed, and the organ weights were measured as described previously [31]. All tissues from each animal were preserved, and lesions were graded using a five-step scale in order of increasing severity (minimal, mild, moderate, severe, and massive). Brain, jejunum, peripheral nerve, pituitary gland, ileum, femorotibial joint, lung, cecum, urinary bladder, heart, colon, testis, thymus, rectum, epididymis, spleen, eye with optic nerve, prostate gland, adrenal gland, thyroid gland with parathyroid gland, seminal vesicle with coagulating gland, kidney, Harderian gland, ovary, liver, salivary gland, uterus with cervix, tongue, aorta, vagina, trachea, sternum with bone marrow, skin, esophagus, mandibular lymph node, mammary gland, stomach, mesenteric lymph node, skeletal muscle, pancreas, thoracic spinal cord, gross lesions, and duodenum were processed for histopathological examination using Pristima® (Xybion, Lawrenceville, NJ, USA). Diagnostic terms in the Lexicon of Pristima® were used primarily. The Standardized System of Nomenclature and Diagnostic Criteria-Guides for Toxicologic Pathology [33] and the Covance Glossary [34] were also utilized.
### 2.6. Bacterial Reverse Mutation Assay
Four histidine auxotroph strains of Salmonella typhimurium (TA100, TA1535, TA98, and TA1537) [35] and a tryptophan auxotroph strain of Escherichia coli, WP2 uvrA [36], were used to assess the mutagenicity of EAG according to OECD Guideline 471 [37] under GLP conditions. The mutagenic activity of EAG was assessed both in the presence and absence of an external metabolic activation system from rat livers (S9 fraction) using the direct plate incorporation method. For the plating assay, 0.5 mL of S9 mix (or sodium phosphate buffer, pH 7.4, for nonactivation plates), 0.1 mL of bacterial culture (containing approximately 10⁸ viable cells), and 0.1 mL of test article were mixed with 2.0 mL of overlay agar. The contents of each tube were mixed and poured over the surface of a minimal agar plate. After the top layers solidified, the plates were inverted and incubated at 37 ± 2°C for 50 ± 2 h, and revertant colonies were counted by the unaided eye. EAG was applied at dose levels of 50, 150, 500, 1,500, 3,000, and 5,000 µg/plate. The positive control substances were 2-aminoanthracene (2-AA), benzo[a]pyrene (B[a]P), sodium azide (SA), 2-nitrofluorene (2-NF), acridine mutagen ICR 191 (ICR-191), and 4-nitroquinoline-1-oxide (4NQO). At least three independent experiments were performed using triplicate plates for each concentration. Results are expressed as revertant colonies and mutagenic indexes (MI).
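The section does not spell out how MI is computed; the usual convention, sketched below with hypothetical plate counts, is the ratio of mean revertants on treated plates to the concurrent negative control:

```python
from statistics import mean

def mutagenic_index(treated_counts, control_counts):
    """Assumed convention: mean revertants (treated) / mean revertants (negative control)."""
    return mean(treated_counts) / mean(control_counts)

# hypothetical triplicate-plate revertant counts, for illustration only
mi = mutagenic_index(treated_counts=[112, 108, 117], control_counts=[101, 105, 109])
print(f"MI = {mi:.2f}")  # ~1.07; a roughly twofold, dose-related increase is a common positivity criterion
```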
### 2.7. In Vitro Chromosomal Aberration Test
A chromosomal aberration test was performed to evaluate the potential of EAG to induce structural and/or numerical chromosomal aberrations in the CHL/IU cell line, derived from the lung fibroblasts of a female Chinese hamster, under OECD Guideline 473 [38]. The treatments were classified into three types according to the presence or absence of the metabolic activation system. Treatment 1 was performed for 6 h with the metabolic activation system (S9 mix), and 18 h of recovery time was allowed before observing the chromosomal aberrations. Treatments 2 and 3 were performed for 6 h and 24 h, respectively, without S9 mix, followed by 18 h and 0 h of recovery, respectively. In Treatment 1, EAG was used at concentrations of 0 (negative control), 350, 700, 1,300, and 1,400 µg/mL. Treatments 2 and 3 were applied at 0, 300, 600, 1,100, and 1,200 µg/mL and at 0, 225, 450, 800, and 900 µg/mL, respectively. Approximately 22 hours after treatment, 50 μL of colchicine solution was added to each culture (final concentration of 1 μM) and incubated for 2 hours for mitotic arrest. The mitotic cells were detached by gentle shaking. The media containing mitotic cells were centrifuged, and the cell pellets were resuspended in 75 mM potassium chloride solution for hypotonic treatment. The cells were then fixed with fixative (methanol:glacial acetic acid = 3:1 v/v), and slides were prepared by the air-drying method and stained with 5% Giemsa solution. Two slides were prepared for each culture. The results are expressed as the frequency (%) of metaphases with structural or numerical aberrations per 300 metaphases. The relative increase in cell count (RICC, %) was used as an indicator of concurrent cytotoxicity to determine the highest concentration. With the cell counts, RICC (%) was calculated as follows:

$$\text{RICC}\ (\%) = \frac{\text{cell count of treated flask} - \text{initial cell count}}{\text{cell count of control flask} - \text{initial cell count}} \times 100. \tag{1}$$
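A direct transcription of (1), with hypothetical flask counts purely for illustration:

```python
def ricc_percent(treated_count: float, control_count: float, initial_count: float) -> float:
    """Relative increase in cell count, per equation (1)."""
    return 100 * (treated_count - initial_count) / (control_count - initial_count)

# hypothetical cell counts, for illustration only
print(f"RICC = {ricc_percent(4.1e5, 6.0e5, 1.0e5):.0f}%")  # 62% of the control's net growth
```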
### 2.8. In Vivo Micronucleus Test in Mammalian Bone Marrow
Eight-week-old male ICR mice (35.3 ± 1.3 g) were orally administered EAG once a day for two consecutive days at doses of 500, 1,000, and 2,000 mg/kg/day (n = 6 per group) according to OECD Guideline 474 [39]. Sterile distilled water for injection (10 mL/kg) was used as the negative control. Cyclophosphamide monohydrate (CPA), 70 mg/kg, was administered intraperitoneally once on the day of the second administration as the positive control. All mice were observed daily for clinical signs. All animals were sacrificed about 24 h after the final administration, and bone marrow preparations were made for the evaluation of micronuclei and cytotoxicity. The bone marrow cells were fixed with methanol according to the method described by Schmid [40] and stained with acridine orange prepared based on the method of Hayashi [41]. The cells were observed and counted using a fluorescence microscope, and the identification of micronuclei was confirmed by the method of Hayashi [41]. Micronucleated polychromatic erythrocytes (MNPCE) were counted among 4,000 polychromatic erythrocytes (PCE) per animal. The ratio of PCE to total erythrocytes (red blood cells), an indicator of cytotoxicity [42], was determined by counting 500 erythrocytes per animal.
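Both endpoints reduce to simple proportions over the scored cells; a minimal sketch with hypothetical tallies:

```python
def mnpce_frequency_percent(mnpce: int, pce_scored: int = 4000) -> float:
    """MNPCE frequency among the scored polychromatic erythrocytes."""
    return 100 * mnpce / pce_scored

def pce_to_total_ratio(pce: int, erythrocytes_scored: int = 500) -> float:
    """PCE : total erythrocyte ratio, the cytotoxicity indicator."""
    return pce / erythrocytes_scored

# hypothetical tallies for one animal, for illustration only
print(f"MNPCE frequency: {mnpce_frequency_percent(8):.2f}%")  # 0.20%
print(f"PCE ratio: {pce_to_total_ratio(290):.2f}")            # 0.58
```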
### 2.9. Statistical Analysis
SPSS Statistics 22 for Medical Science was used for all statistical analyses, and the level of significance was P < 0.05. Body weights, food and water consumption, urine volume, hematological and clinical biochemistry parameters, and organ weights were assumed to be normally distributed and analyzed by parametric one-way analysis of variance (ANOVA); the assumption of homogeneity of variance was tested using Levene's test. The urinalysis data were rank-transformed and analyzed by the nonparametric Kruskal–Wallis H test. Fisher's exact test was used to compare the frequency of aberrant metaphases between the negative control and test article-treated groups in the chromosomal aberration test. In the micronucleus test, the frequency of micronuclei was analyzed by the nonparametric Kruskal–Wallis H test, and the negative and positive control groups were compared by the Mann–Whitney U test. Dose-responsiveness was tested by the linear-by-linear association of the chi-square test. The PCE:RBC ratio was assumed to be normally distributed and analyzed by one-way ANOVA, with the assumption of homogeneity of variance tested using Levene's test; Student's t-test was used to test for a difference between the means of the negative and positive controls.
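A minimal sketch of the same test battery using SciPy in place of SPSS; the group arrays are synthetic placeholders, not study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# four dose groups of n = 10, e.g. terminal body weights (synthetic)
groups = [rng.normal(300, 15, 10) for _ in range(4)]

print(stats.levene(*groups))    # homogeneity of variance (Levene's test)
print(stats.f_oneway(*groups))  # parametric one-way ANOVA
print(stats.kruskal(*groups))   # nonparametric Kruskal-Wallis H test

# 2x2 table of (aberrant, normal) metaphase counts, control vs treated (synthetic)
print(stats.fisher_exact([[3, 297], [5, 295]]))

# negative vs positive control comparison
print(stats.mannwhitneyu(groups[0], groups[3]))  # Mann-Whitney U test
```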
## 2.1. Preparation of Ethanolic Extract of Aerial Parts ofA. glehni
The aerial parts, leaves, and stems, ofA. glehni were collected from Ulleung Island and dried naturally. The EAG was prepared by the method as previously described [23]. Briefly, the finely chopped sample was extracted in a 15-fold mass of 70% ethanol. The first extract was collected; then the second extract was obtained in a 10-fold mass of 70% ethanol. After mixing with diatomite, it was filtered and concentrated to 10–20 Brix. After adding an equal amount of dextrin to the concentrates, the mixtures were sterilized at 95°C for 30 min. The sterilized samples were spray-dried and filtered through a 60-mesh sieve to obtain a solid extract powder (Specimen Voucher No. AG-D022). For quality assurance, the final A. glehni extract was standardized by 3,5-dicaffeoylquinic acid (3,5-DCQA) based on high-performance liquid chromatography (HPLC) at 330 nm. The content of the marker compound (3,5-DCQA) in the EAG was 2.37 mg/g. The results of the amino acid composition analysis are shown in Table 1. The total protein content in EAG was 1,759.84 mg/100 g. Proline (35.1%), aspartic acid (21.6%), and glutamic acid (11.2%) were the main amino acids existing in EAG.Table 1
Amino acid composition of EAG.
Amino acidaTyrGlySerAlaGluLysLeuMetValArgAspIleThrPheProHisCysTrpAGND225.0183.4161.9355.047.1193.719.3141.153.6292.6117.2154.6146.2203.739.596.913.0EAGND53.978.367.9197.67.184.6ND41.326.7380.146.460.443.0616.916.728.210.9aUnit: mg/100 g, Tyr, tyrosine; Gly, glycine; Ser, serine; Ala, alanine; Glu, glutamic acid; Lys, lysine; Leu, leucine; Met, methionine; Val, valine; Arg, arginine; Asp, aspartic acid; Ile, isoleucine; Thr, threonine; Phe, phenylalanine; Pro, proline; His, histidine; Cys, cysteine; Trp, tryptophan; ND, not detected; AG, ethanolic leaf extract of Aster glehni; and EAG, ethanolic leaf and stem extract of Aster glehni.
## 2.2. Experimental Animals and Maintenance
Specific pathogen-free (SPF) SD rats were obtained from Orient Bio Inc. (Seongnam, Korea). The animals were maintained in the facility with temperature (23 ± 3°C), relative humidity (55 ± 15%), and ventilation (10–20 air changes/hour) at Chemon Inc. in accordance with the Guide for the Care and Use of Laboratory Animals, 8th edition [29]. Food and water were provided, ad libitum, with a 12 hours light:12 hours dark cycle. All procedures and protocols were reviewed and approved by the Institutional Animal Care and Use Committee (IACUC) of Chemon Inc. performed in accordance with the guideline published by the Organization for Economic Cooperation and Development (OECD) as well as the Good Laboratory Practice (GLP) regulations for Nonclinical Laboratory Studies of the Ministry of Food Drug Safety (MFDS) in the Republic of Korea [27, 28, 30, 31].
## 2.3. Thirteen-Week Repeated Oral Toxicity Study
For the 13-week repeat-dose toxicity study, in accordance with OECD Guideline 408 [32], healthy 6-week old male and female SD rats weighing 186.56 ± 8.70 g and 144.41 ± 7.63 g, respectively, were randomly assigned to 4 groups (10/sex/group) under GLP regulations. Vehicle (distilled water for injection) or graded doses of EAG (1,250, 2,500, and 5,000 mg/kg according to body weight) were administered to rats by oral gavage once daily for 13 weeks at a dose of 10 mL/kg of body weight after completion of a 14-day repeated oral toxicity dose range finding (DRF) study where no adverse finding was seen dosing up to 5,000 mg/kg/day. The high dose was selected according to the results of an acute toxicity study in which no significant test article-related changes in mortalities and clinical signs at 5,000 mg/kg/day were observed (data not shown). The rats were observed daily for clinical signs including mortality, general appearance, and behavioral abnormality until terminal sacrifice. Body weights and food/water consumption were recorded weekly throughout the study. Ophthalmological examination was conducted in the last week of observation and anterior parts of the eye, optic media, and fundus were examined with a fundus camera (Vantage Plus Digital LED; Keeler Instruments Inc., Malvern, PA, USA). At study termination, all rats were euthanized by isoflurane (2% to 5%) inhalation for blood sample collection.
## 2.4. Urinalysis, Hematology, and Clinical Biochemistry
Urinalysis, hematological analyses, and serum biochemistry analyses were conducted as described previously [31].
## 2.5. Gross Findings, Organ Weights, and Histopathological Examinations
At necropsy, the animals were sacrificed to analyze both macroscopic and microscopic features of the internal organs. The organ weights were measured as described previously [31]. All tissues from each animal were preserved, and lesions were graded using a five-step scale in the order of increasing severity (minimal, mild, moderate, severe, and massive). Brain, jejunum, peripheral nerve, pituitary gland, ileum, femorotibial joint, lung, cecum, urinary bladder, heart, colon, testis, thymus, rectum, epididymis, spleen, eye with optic nerve, prostate gland, adrenal gland, thyroid gland with parathyroid gland, seminal vesicle with coagulating gland, kidney, Harderian gland, ovary, liver, salivary gland, uterus with cervix, tongue, aorta, vagina, trachea, sternum with bone marrow, skin, esophagus, mandibular lymph node, mammary gland, stomach, mesenteric lymph node, skeletal muscle, pancreas, thoracic spinal cord, gross lesion, and duodenum were processed for histopathological examination using Pristima® (Xybion, Lawrenceville, NJ, USA). Diagnostic terms in the Lexicon of Pristima® were used primarily. Standardized System of Nomenclature and Diagnostic Criteria-Guides for Toxicologic Pathology [33] and Covance Glossary [34] were also utilized.
## 2.6. Bacterial Reverse Mutation Assay
Four histidine auxotroph strains ofSalmonella typhimurium (TA100, TA1535, TA98, and TA1537) [35] and a tryptophan auxotroph strain of Escherichia coli WP2 uvrA [36] were used to confirm mutagenicity of EAG according to OECD Guideline 471 [37] under GLP conditions. The mutagenic activity of EAG was assessed both in the presence and absence of an external metabolic activation system from rat livers (S9 fraction) using the direct plate incorporation method. For the plating assay, 0.5 mL of S9 mix (or sodium phosphate buffer, pH 7.4 for nonactivation plates), 0.1 mL of bacterial culture (containing approximately 108 viable cells), and 0.1 mL of test article were mixed with 2.0 mL of overlay agar. The contents of each tube were mixed and poured over the surface of a minimal agar plate. The overlay agar was allowed to solidify before incubation. After the top layers solidified, plates were inverted and incubated at 37 ± 2°C for 50 ± 2 h and revertant colonies were counted by the unaided eye. EAG was applied at dose levels of 50, 150, 500, 1,500, 3,000, and 5,000 µg/plate. The positive control substances were 2-aminoanthracene (2-AA), benzo[a]pyrene (B[a]P), sodium azide (SA), 2-nitrofluorene (2-NF), acridine mutagen ICR 191 (ICR-191), and 4-nitroquinoline-1-oxide (4NQO). At least three independent experiments were performed using triplicate plates for each concentration. Results are expressed as revertant colonies and mutagenic indexes (MI).
### 2.7. In Vitro Chromosomal Aberration Test
A chromosomal aberration test was performed under OECD Guideline 473 [38] to evaluate the potential of EAG to induce structural and/or numerical chromosomal aberrations in the CHL/IU cell line, which is derived from lung fibroblasts of a female Chinese hamster. Three treatment schedules were used, according to the presence or absence of the metabolic activation system. Treatment 1 was performed for 6 h with the metabolic activation system (S9 mix), followed by an 18 h recovery period before chromosomal aberrations were scored. Treatments 2 and 3 were performed for 6 h and 24 h, respectively, without S9 mix, followed by recovery periods of 18 h and 0 h, respectively. In Treatment 1, EAG was used at concentrations of 0 (negative control), 350, 700, 1,300, and 1,400 µg/mL. Treatments 2 and 3 were applied at 0, 300, 600, 1,100, and 1,200 µg/mL and at 0, 225, 450, 800, and 900 µg/mL, respectively. Approximately 22 hours after treatment, 50 μL of colchicine solution was added to each culture (final concentration of 1 μM) and incubated for 2 hours for mitotic arrest. The mitotic cells were detached by gentle shaking. The media containing mitotic cells were centrifuged, and the cell pellets were resuspended in 75 mM potassium chloride solution for hypotonic treatment. Cells were then fixed with fixative (methanol:glacial acetic acid = 3:1 v/v), and slides were prepared by the air-drying method and stained with 5% Giemsa solution. Two slides were prepared for each culture. The results are expressed as the frequency (%) of metaphases with structural or numerical aberrations per 300 metaphases. The relative increase in cell count (RICC, %) was used as an indicator of concurrent cytotoxicity to determine the high concentration. With the cell counts, RICC (%) was calculated as follows:

$$\text{RICC}\ (\%) = \frac{\text{Cell count of treated flask} - \text{Initial cell count}}{\text{Cell count of control flask} - \text{Initial cell count}} \times 100 \tag{1}$$
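For concreteness, here is a minimal sketch of Equation (1); the worked example reuses mean cell counts reported later in Table 7 (6–18 h, +S9 series), and the function name is ours.

```python
def ricc_percent(treated: float, negative_control: float, initial: float) -> float:
    """Relative increase in cell count (RICC), Equation (1)."""
    return (treated - initial) / (negative_control - initial) * 100.0

# 1,400 ug/mL flask mean = 5,411; negative-control mean = 8,463;
# initial mean cell count = 3,152 (values from Table 7 below).
print(round(ricc_percent(5411, 8463, 3152)))  # -> 43, as reported in Table 7
```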
### 2.8. In Vivo Micronucleus Test in Mammalian Bone Marrow
In accordance with OECD Guideline 474 [39], EAG was administered orally to eight-week-old male ICR mice (35.3 ± 1.3 g) once a day for two consecutive days at doses of 500, 1,000, and 2,000 mg/kg/day (n = 6 per group). Sterile distilled water for injection (10 mL/kg) was used as a negative control. As a positive control, cyclophosphamide monohydrate (CPA) at 70 mg/kg was administered intraperitoneally once, on the day of the second administration. All mice were observed daily for clinical signs. All animals were sacrificed about 24 h after the final administration, and bone marrow preparations were made for the evaluation of micronuclei and cytotoxicity. The bone marrow cells were fixed with methanol according to the method described by Schmid [40] and stained with acridine orange prepared according to the method of Hayashi [41]. The cells were observed and counted using a fluorescence microscope, and the identification of micronuclei was confirmed by the method of Hayashi [41]. Micronucleated polychromatic erythrocytes (MNPCE) were counted among 4,000 polychromatic erythrocytes (PCE) per animal. The ratio of PCE to total erythrocytes (red blood cells), an indicator of cytotoxicity [42], was determined by counting 500 erythrocytes per animal.
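The two endpoints scored here reduce to simple ratios, as the sketch below shows. The function names are ours and the example counts are invented, chosen only to be consistent in magnitude with the values reported later in Table 8.

```python
from statistics import mean, stdev

def pce_rbc_ratio(pce_count: int, total_erythrocytes: int = 500) -> float:
    """PCE : total-erythrocyte ratio from the 500-cell differential count;
    a depressed ratio indicates bone marrow cytotoxicity."""
    return pce_count / total_erythrocytes

# Illustrative MNPCE counts for six animals, each scored among 4,000 PCE,
# chosen to reproduce the negative-control mean +/- SD in Table 8.
mnpce = [0, 1, 1, 2, 1, 3]
print(f"MNPCE: {mean(mnpce):.2f} +/- {stdev(mnpce):.2f}")  # 1.33 +/- 1.03
print(pce_rbc_ratio(285))  # 0.57, the level seen across EAG groups in Table 8
```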
### 2.9. Statistical Analysis
SPSS Statistics 22 for Medical Science was used for all statistical analyses, and the level of significance was P<0.05. Body weights, food and water consumption, urine volume, hematological and clinical biochemistry parameters, and organ weights were assumed to be normally distributed and analyzed by parametric one-way analysis of variance (ANOVA); the assumption of homogeneity of variance was tested using Levene's test. The urinalysis data were rank-transformed and analyzed by the nonparametric Kruskal–Wallis H test. Fisher's exact test was used to compare the frequency of aberrant metaphases between the negative control and test article-treated groups in the chromosomal aberration test. In the micronucleus test, the frequency of micronuclei was analyzed by the nonparametric Kruskal–Wallis H test, and the negative and positive control groups were compared by the Mann–Whitney U test. Dose-responsiveness was tested by the linear-by-linear association of the chi-square test. The PCE:RBC ratio was assumed to be normally distributed and analyzed by one-way ANOVA, with the assumption of homogeneity of variance tested using Levene's test. Student's t-test was used to test for a difference between the means of the negative and positive controls.
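The SciPy sketch below mirrors the battery of tests described above on placeholder data. It is illustrative only: SPSS was the software actually used, the four sample lists are invented, and the linear-by-linear association test is omitted.

```python
# Illustrative reproduction of the statistical battery with SciPy.
# The four lists are placeholder group samples, not the study's raw data.
from scipy import stats

ctrl, low, mid, high = [310, 325, 318], [312, 309, 320], [305, 315, 311], [300, 308, 304]

print(stats.levene(ctrl, low, mid, high))    # homogeneity of variance
print(stats.f_oneway(ctrl, low, mid, high))  # parametric one-way ANOVA
print(stats.kruskal(ctrl, low, mid, high))   # rank-based Kruskal-Wallis H test
print(stats.mannwhitneyu(ctrl, high))        # negative vs. positive control
print(stats.ttest_ind(ctrl, high))           # Student's t-test (PCE:RBC ratio)
```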
## 3. Results
### 3.1. Thirteen-Week Repeated Oral Toxicity Study
There is still insufficient toxicological information on the oral toxicity of EAG after long-term exposure. Therefore, a repeated-dose toxicity DRF study of EAG at doses of 1,250, 2,500, and 5,000 mg/kg/day administered by oral gavage for 14 days was performed to assess initial toxicity. As a result, no EAG-related changes in mortalities, clinical signs, body weights, food and water consumption, ophthalmological examination, urinalysis, hematological and clinical biochemistry tests, organ weights, and gross findings were observed during the 2-week treatment period (body weights as shown in Figure 1; other data not shown).

Figure 1
Effect of ethanolic extract of A. glehni on body weights in SD rats. (a) Mean body weights of male rats and (b) mean body weights of female rats treated with EAG for 2 weeks. (c) Mean body weights of male rats and (d) mean body weights of female rats treated with EAG for 13 weeks. Values are expressed as mean ± SD (n = 9–10 per group). Significant difference at ∗P<0.05 and ∗∗P<0.01 levels compared with the negative control.
In the 13-week repeated-dose toxicity study, although one male rat treated with 1,250 mg/kg/day of EAG died on day 65, there were no clinical signs or any lesions in the histopathological examination. Compound-colored stool was observed at 5,000 mg/kg/day in both sexes from day 10 to the necropsy day, and salivation was sporadically observed in males at 5,000 mg/kg/day. Significant decreases in mean body weight were observed in males at 1,250 and 2,500 mg/kg/day (P<0.05 and P<0.01; Figure 1), but these changes did not occur in a dose-dependent manner, and the values were within the normal physiological ranges [43, 44]. No significant differences in female body weight were found between the treatment and control groups. There were no EAG-related effects on food intake, water intake, organ weights, or the ophthalmological test in either sex (data not shown).

A few mean values of urinalysis parameters differed with statistical significance from the negative control (P<0.05 and P<0.01; Table 2). Ketone bodies in males at 5,000 mg/kg/day and specific gravity at all doses in females were significantly higher than those of the negative control. In addition, pH in females in all EAG groups was significantly higher, and the 24-hour total urine volume in females at 1,250 and 5,000 mg/kg/day was significantly lower, than those of the negative control. However, these changes were within the normal physiological ranges [43, 44]. Therefore, these observations were not considered toxicologically significant.

Table 2
Urinalysis of male and female SD rats in the 13-week repeated oral toxicity study of EAG.
| Tests | Result | Male 0 | Male 1,250 | Male 2,500 | Male 5,000 | Female 0 | Female 1,250 | Female 2,500 | Female 5,000 |
|---|---|---|---|---|---|---|---|---|---|
| No. of animals examined | | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| GLU | Negative | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| BIL | Negative | 5 | 5 | 5 | 4 | 5 | 5 | 5 | 5 |
| | Small | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| KET | Negative | 3 | 1 | 4 | 1 | 5 | 5 | 4 | 4 |
| | Trace | 2 | 4 | 1 | 0 | 0 | 0 | 1 | 1 |
| | 15 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 |
| | 40 | 0 | 0 | 0 | 1∗ | 0 | 0 | 0 | 0 |
| SG | ≤1.005 | 1 | 0 | 0 | 0 | 5 | 1 | 1 | 0 |
| | 1.010 | 4 | 2 | 4 | 2 | 0 | 3 | 2 | 3 |
| | 1.015 | 0 | 2 | 1 | 2 | 0 | 1 | 1 | 1 |
| | 1.020 | 0 | 1 | 0 | 1 | 0 | 0∗ | 1∗ | 1∗∗ |
| pH | 6.5 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| | 7.0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| | 7.5 | 1 | 0 | 0 | 0 | 3 | 1 | 0 | 0 |
| | 8.0 | 2 | 0 | 0 | 1 | 0 | 0 | 1 | 0 |
| | 8.5 | 2 | 5 | 5 | 4 | 0 | 4∗ | 4∗∗ | 5∗∗ |
| Volume (mL) | | 13.0 ± 4.6 | 11.6 ± 1.9 | 15.2 ± 1.1 | 11.4 ± 4.4 | 17.6 ± 5.5 | 10.8 ± 2.8∗ | 12.4 ± 4.6 | 8.8 ± 4.0∗∗ |

EAG doses in column headings are in mg/kg/day. ∗/∗∗Significant difference at P<0.05/P<0.01 levels compared with the negative control by the Mann–Whitney U test. GLU, glucose (mg/dL); BIL, bilirubin (mg/dL); KET, ketone body (mg/dL); and SG, specific gravity.

Hematological evaluation showed that the lymphocyte count at 5,000 mg/kg/day in males was significantly higher, and the prothrombin time at all doses in females significantly lower, compared with the negative control (P<0.05 and P<0.01; Table 3). However, these results were also within the normal physiological ranges [43, 44]. The results of the clinical biochemistry tests are presented in Table 4; EAG-related changes in clinical biochemistry parameters were not found in either sex.

Table 3
Hematological parameters of male and female SD rats in the 13-week repeated oral toxicity study of EAG.
| Tests | 0 mg/kg/day | 1,250 mg/kg/day | 2,500 mg/kg/day | 5,000 mg/kg/day |
|---|---|---|---|---|
| Male | | | | |
| RBC (10⁶/μL) | 8.99 ± 0.55 | 9.01 ± 0.22a | 8.81 ± 0.37 | 8.87 ± 0.42 |
| HGB (g/dL) | 15.3 ± 0.5 | 15.5 ± 0.4a | 15.0 ± 0.5 | 15.0 ± 0.4 |
| HCT (%) | 47.3 ± 1.8 | 47.9 ± 1.4a | 46.7 ± 1.3 | 46.6 ± 1.6 |
| MCV (fL) | 52.7 ± 1.9 | 53.2 ± 1.0a | 53.1 ± 2.6 | 52.5 ± 1.1 |
| MCH (pg) | 17.1 ± 0.8 | 17.2 ± 0.3a | 17.0 ± 1.0 | 16.9 ± 0.5 |
| MCHC (g/dL) | 32.4 ± 0.5 | 32.3 ± 0.4a | 32.1 ± 0.8 | 32.3 ± 0.5 |
| PLT (10³/μL) | 919.2 ± 61.3 | 905.9 ± 93.0a | 890.4 ± 71.7 | 933.3 ± 74.8 |
| WBC (10³/μL) | 6.30 ± 1.37 | 7.22 ± 2.18a | 7.54 ± 1.16 | 7.91 ± 0.97 |
| NEU (10³/μL) | 1.3 ± 0.3 | 1.5 ± 0.6a | 1.6 ± 0.7 | 1.1 ± 0.2 |
| LYM (10³/μL) | 4.6 ± 1.2 | 5.2 ± 1.6a | 5.4 ± 1.0 | 6.3 ± 1.0∗ |
| MONO (10³/μL) | 0.28 ± 0.12 | 0.31 ± 0.10a | 0.32 ± 0.11 | 0.30 ± 0.06 |
| EOS (10³/μL) | 0.11 ± 0.04 | 0.11 ± 0.03a | 0.13 ± 0.02 | 0.10 ± 0.03 |
| BASO (10³/μL) | 0.01 ± 0.01 | 0.01 ± 0.01a | 0.01 ± 0.00 | 0.01 ± 0.00 |
| PT (sec) | 8.0 ± 0.2 | 8.1 ± 0.2a | 8.0 ± 0.2 | 7.8 ± 0.2 |
| Female | | | | |
| RBC (10⁶/μL) | 7.98 ± 0.35 | 7.72 ± 0.30 | 7.86 ± 0.22 | 7.94 ± 0.28 |
| HGB (g/dL) | 14.3 ± 0.3 | 14.0 ± 0.4 | 14.1 ± 0.3 | 14.3 ± 0.4 |
| HCT (%) | 43.5 ± 1.3 | 42.8 ± 1.2 | 43.2 ± 1.9 | 43.7 ± 1.2 |
| MCV (fL) | 54.6 ± 1.8 | 55.5 ± 2.1 | 54.9 ± 0.8 | 55.0 ± 0.8 |
| MCH (pg) | 17.9 ± 0.6 | 18.1 ± 0.7 | 18.0 ± 0.4 | 17.9 ± 0.3 |
| MCHC (g/dL) | 32.8 ± 0.2 | 32.7 ± 0.4 | 32.7 ± 0.4 | 32.6 ± 0.4 |
| PLT (10³/μL) | 969.9 ± 60.9 | 1023.9 ± 89.3 | 977.4 ± 87.8 | 950.3 ± 66.4 |
| WBC (10³/μL) | 3.67 ± 0.95 | 3.75 ± 1.03 | 3.84 ± 1.22 | 4.01 ± 1.18 |
| NEU (10³/μL) | 0.5 ± 0.1 | 0.5 ± 0.1 | 0.5 ± 0.2 | 0.5 ± 0.2 |
| LYM (10³/μL) | 3.0 ± 0.9 | 3.0 ± 0.9 | 3.1 ± 1.0 | 3.3 ± 0.9 |
| MONO (10³/μL) | 0.09 ± 0.04 | 0.11 ± 0.03 | 0.11 ± 0.04 | 0.13 ± 0.05 |
| EOS (10³/μL) | 0.08 ± 0.03 | 0.08 ± 0.02 | 0.08 ± 0.03 | 0.07 ± 0.03 |
| BASO (10³/μL) | 0.01 ± 0.01 | 0.00 ± 0.01 | 0.00 ± 0.00 | 0.00 ± 0.01 |
| PT (sec) | 7.7 ± 0.2 | 7.4 ± 0.2## | 7.3 ± 0.2## | 7.4 ± 0.2## |

Data are expressed as mean ± standard deviation. ∗Significant difference at P<0.05 levels compared with the negative control by Scheffe multiple range test. ##Significant difference at P<0.01 levels compared with the negative control by Duncan multiple range test. aNumber of animals in the group was 9; otherwise mean of 10 animals/sex/group. RBC, red blood cell; HGB, hemoglobin concentration; HCT, hematocrit; MCV, mean corpuscular volume; MCH, mean cell hemoglobin; MCHC, mean cell hemoglobin concentration; PLT, platelet count; WBC, white blood cell; NEU, neutrophil; LYM, lymphocyte; MONO, monocyte; EOS, eosinophil; BASO, basophil; and PT, prothrombin time.

Table 4
Clinical biochemistry parameters of male and female SD rats in the 13-week repeated oral toxicity study of EAG.
| Tests | 0 mg/kg/day | 1,250 mg/kg/day | 2,500 mg/kg/day | 5,000 mg/kg/day |
|---|---|---|---|---|
| Male | | | | |
| AST (U/L) | 83.7 ± 16.7 | 77.2 ± 14.9a | 82.6 ± 15.9 | 70.6 ± 6.6 |
| ALT (U/L) | 33.3 ± 5.8 | 32.6 ± 6.3a | 33.1 ± 4.1 | 31.8 ± 3.1 |
| ALP (U/L) | 88.1 ± 15.4 | 82.1 ± 16.0a | 89.8 ± 17.8 | 93.5 ± 18.0 |
| CPK (U/L) | 160.9 ± 80.9 | 173.8 ± 124.2a | 157.1 ± 94.7 | 118.0 ± 50.8 |
| TBIL (mg/dL) | 0.149 ± 0.030 | 0.145 ± 0.032a | 0.145 ± 0.020 | 0.145 ± 0.020 |
| GLU (mg/dL) | 155.0 ± 19.3 | 149.7 ± 14.8a | 151.1 ± 22.1 | 145.0 ± 17.0 |
| TCHO (mg/dL) | 89.0 ± 21.2 | 101.8 ± 21.3a | 101.4 ± 24.0 | 104.8 ± 24.2 |
| TG (mg/dL) | 56.3 ± 25.8 | 63.2 ± 26.2a | 60.6 ± 19.9 | 65.6 ± 28.2 |
| TP (g/dL) | 6.27 ± 0.16 | 6.37 ± 0.19a | 6.29 ± 0.29 | 6.30 ± 0.26 |
| ALB (g/dL) | 2.90 ± 0.07 | 2.95 ± 0.11a | 2.95 ± 0.11 | 2.93 ± 0.09 |
| BUN (mg/dL) | 13.9 ± 1.6 | 14.7 ± 1.1a | 14.4 ± 2.3 | 13.6 ± 1.9 |
| CRE (mg/dL) | 0.40 ± 0.03 | 0.39 ± 0.02a | 0.39 ± 0.02 | 0.40 ± 0.03 |
| Female | | | | |
| AST (U/L) | 70.1 ± 11.2 | 76.8 ± 13.8 | 76.4 ± 14.2 | 73.0 ± 18.0 |
| ALT (U/L) | 22.1 ± 3.5 | 24.1 ± 6.3 | 25.0 ± 4.6 | 25.5 ± 3.6 |
| ALP (U/L) | 43.5 ± 15.6 | 54.2 ± 14.1 | 46.8 ± 13.4 | 45.8 ± 11.1 |
| CPK (U/L) | 146.6 ± 126.3 | 126.4 ± 84.9 | 149.6 ± 94.6 | 128.1 ± 54.7 |
| TBIL (mg/dL) | 0.169 ± 0.024 | 0.190 ± 0.038 | 0.176 ± 0.020 | 0.174 ± 0.018 |
| GLU (mg/dL) | 121.4 ± 14.5 | 129.3 ± 16.1 | 122.7 ± 14.6 | 122.0 ± 10.7 |
| TCHO (mg/dL) | 86.2 ± 20.0 | 92.5 ± 8.5 | 100.5 ± 17.6 | 85.3 ± 10.9 |
| TG (mg/dL) | 35.6 ± 6.3 | 35.0 ± 4.7 | 36.3 ± 8.4 | 32.9 ± 8.4 |
| TP (g/dL) | 5.89 ± 0.26 | 6.11 ± 0.21 | 6.02 ± 0.20 | 5.97 ± 0.17 |
| ALB (g/dL) | 2.99 ± 0.13 | 3.11 ± 0.15 | 3.05 ± 0.11 | 3.12 ± 0.11 |
| BUN (mg/dL) | 15.8 ± 2.3 | 15.0 ± 1.3 | 15.0 ± 2.4 | 14.3 ± 1.2 |
| CRE (mg/dL) | 0.48 ± 0.04 | 0.47 ± 0.03 | 0.49 ± 0.06 | 0.46 ± 0.02 |

Data are expressed as mean ± standard deviation. aNumber of animals in the group was 9; otherwise mean of 10 animals/sex/group. AST, aspartate aminotransferase; ALT, alanine aminotransferase; ALP, alkaline phosphatase; CPK, creatine phosphokinase; TBIL, total bilirubin; GLU, glucose; TCHO, total cholesterol; TG, triglyceride; TP, total protein; ALB, albumin; BUN, blood urea nitrogen; and CRE, creatinine.

In the histopathological examinations, a notable change was observed in the nonglandular stomach (Table 5). Squamous hyperplasia of the limiting ridge of the stomach was found in the EAG-treated groups in both sexes. The change was observed in seven males at 2,500 mg/kg/day and in all males and eight females at 5,000 mg/kg/day (P<0.01 and P<0.001). However, there were no toxicologically significant changes in the histopathological examinations. Other lesions observed are well known to occur spontaneously in SD rats of the same age [45, 46].

Table 5
Histopathologic findings of male and female SD rats in the 13-week repeated oral toxicity study of EAG.
| Organs | Findings | Male 0 | Male 1,250 | Male 2,500 | Male 5,000 | Female 0 | Female 1,250 | Female 2,500 | Female 5,000 |
|---|---|---|---|---|---|---|---|---|---|
| Nonglandular stomach | Hyperplasia, squamous cells, limiting ridge | 0 | 3 | 7∗∗ | 10∗∗∗ | 0 | 1 | 3 | 8∗∗∗ |
| | No. of animals examined | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 |

EAG doses in column headings are in mg/kg/day. ∗∗/∗∗∗Significant difference at P<0.01/P<0.001 levels compared with the negative control by the Fisher two-tailed test.
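As a cross-check on the significance flags in Table 5, the incidence counts can be fed directly to Fisher's exact test. The sketch below does this with SciPy (rather than the SPSS software actually used) for the male 2,500 mg/kg/day group.

```python
from scipy.stats import fisher_exact

# Males, squamous hyperplasia of the limiting ridge (Table 5):
# 7/10 affected at 2,500 mg/kg/day vs. 0/10 in the negative control.
table = [[7, 3],    # treated: affected, unaffected
         [0, 10]]   # control: affected, unaffected
_, p = fisher_exact(table, alternative="two-sided")
print(f"P = {p:.4f}")  # ~0.003, consistent with the P<0.01 flag in Table 5
```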
### 3.2. Genotoxicity Test
#### 3.2.1. Bacterial Reverse Mutation Test (Ames Test)
No precipitation or other abnormality was observed on the bottom agar at the time of plate scoring. There was a dose-related increase in the number of revertant colonies in TA98, one of the histidine-requiring strains, at 3,000 and 5,000 µg/plate: in the presence of S9 mix, the numbers of revertants were 2.3 and 2.5 times higher, respectively, than that of the negative control (Table 6). However, EAG is composed of various amino acids, including histidine (Table 1). In the other test strains, no substantial increases in the numbers of revertants per plate were observed at any dose level of EAG. Moreover, there were no signs of cytotoxicity at any dose level in any test strain. These results suggest that EAG is not mutagenic in the test strains. The mean revertants in the positive control for each strain showed a clear increase over the mean revertants in the negative control for that strain.

Table 6
Results of bacterial reverse mutation assay.
Colonies/plate [factor]a:

| Test article | Dose (μg/plate) | TA98 −S9 | TA98 +S9 | TA100 −S9 | TA100 +S9 | TA1535 −S9 | TA1535 +S9 | TA1537 −S9 | TA1537 +S9 | WP2 uvrA −S9 | WP2 uvrA +S9 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| EAG | 0 | 26 ± 4 | 25 ± 3 | 114 ± 8 | 107 ± 9 | 14 ± 2 | 15 ± 2 | 15 ± 2 | 15 ± 2 | 22 ± 3 | 29 ± 4 |
| | 50 | 25 ± 6 [1.0] | 28 ± 3 [1.1] | 111 ± 6 [1.0] | 102 ± 9 [0.9] | 12 ± 4 [0.9] | 16 ± 5 [1.1] | 13 ± 2 [0.9] | 13 ± 3 [0.9] | 22 ± 4 [1.0] | 25 ± 4 [0.9] |
| | 150 | 27 ± 4 [1.1] | 30 ± 5 [1.2] | 107 ± 15 [0.9] | 120 ± 16 [1.1] | 10 ± 1 [0.8] | 15 ± 1 [1.0] | 15 ± 1 [1.0] | 12 ± 3 [0.8] | 18 ± 3 [0.8] | 26 ± 7 [0.9] |
| | 500 | 34 ± 5 [1.3] | 34 ± 4 [1.4] | 103 ± 7 [0.9] | 105 ± 5 [1.0] | 13 ± 1 [1.0] | 17 ± 1 [1.1] | 13 ± 1 [0.9] | 14 ± 1 [1.0] | 20 ± 2 [0.9] | 26 ± 6 [0.9] |
| | 1,500 | 26 ± 5 [1.0] | 40 ± 3 [1.6] | 109 ± 6 [1.0] | 120 ± 10 [1.1] | 11 ± 1 [0.8] | 15 ± 2 [1.0] | 12 ± 2 [0.8] | 13 ± 2 [0.9] | 23 ± 4 [1.0] | 26 ± 4 [0.9] |
| | 3,000 | 34 ± 5 [1.3] | 56 ± 5 [2.3] | 121 ± 2 [1.1] | 137 ± 1 [1.3] | 13 ± 2 [0.9] | 13 ± 3 [0.9] | 13 ± 3 [0.9] | 17 ± 3 [1.2] | 22 ± 2 [1.0] | 27 ± 2 [0.9] |
| | 5,000 | 34 ± 2 [1.3] | 63 ± 6 [2.5] | 116 ± 12 [1.0] | 137 ± 3 [1.3] | 13 ± 1 [0.9] | 13 ± 1 [0.8] | 12 ± 2 [0.8] | 20 ± 1 [1.3] | 25 ± 2 [1.1] | 23 ± 3 [0.8] |
| Positive controlb | | 225 ± 16 [8.8] | 118 ± 8 [4.8] | 465 ± 60 [4.1] | 1504 ± 102 [14.0] | 408 ± 12 [29.9] | 142 ± 19 [9.3] | 265 ± 20 [18.1] | 182 ± 22 [12.1] | 227 ± 25 [10.3] | 104 ± 8 [3.6] |

Data are expressed as mean ± standard deviation. aThree plates were used for each dose; factor = no. of colonies on the treated plate/no. of colonies on the negative control plate. bTA98: 2-NF 2 μg/plate (−S9 mix), B[a]P 1 μg/plate (+S9 mix); TA100: SA 0.5 μg/plate (−S9 mix), 2-AA 1 μg/plate (+S9 mix); TA1535: SA 0.5 μg/plate (−S9 mix), 2-AA 2 μg/plate (+S9 mix); TA1537: ICR-191 0.5 μg/plate (−S9 mix), 2-AA 1 μg/plate (+S9 mix); and WP2 uvrA: 4NQO 0.5 μg/plate (−S9 mix), 2-AA 6 μg/plate (+S9 mix). 2-NF, 2-nitrofluorene; B[a]P, benzo[a]pyrene; SA, sodium azide; 2-AA, 2-aminoanthracene; ICR-191, acridine mutagen ICR 191; and 4NQO, 4-nitroquinoline N-oxide.
#### 3.2.2. Chromosome Aberration Test Using CHL Cells
In this experiment, no turbidity or precipitation was observed at any dose level of EAG. As shown in Table 7, there was no statistically significant increase in aberrant metaphases at any dose level of EAG compared to the negative control, and there was no dose-response relationship or increase in the frequency of aberrant metaphases in any treatment series. In the positive controls, there was a statistically significant increase in the mean frequency of aberrant metaphases with structural aberrations in all treatment series (P<0.01).

Table 7
In vitro chromosome aberration test in Chinese hamster lung cells with EAG.
| Treatment schedulea | S9 mix | Dose (µg/mL) | PP + ER (%) | Ratio of aberrant metaphasesb (%) | Cell countc, flask A | Cell countc, flask B | Mean | RICCd (%) |
|---|---|---|---|---|---|---|---|---|
| 6–18 | + | 0 | 0.00 | 0.00 | 8,662 | 8,264 | 8,463 | 100 |
| | | 350 | 0.33 | 0.33 | 8,263 | 8,387 | 8,325 | 97 |
| | | 700 | 0.67 | 0.67 | 8,563 | 8,790 | 8,676 | 104 |
| | | 1,300 | 0.00 | 0.33 | 6,083 | 6,262 | 6,172 | 57 |
| | | 1,400 | 0.00 | 0.00 | 5,440 | 5,382 | 5,411 | 43 |
| | | B[a]P 20 | 0.00 | 15.00∗∗ | 6,000 | 5,850 | 5,925 | 52 |
| 6–18 | − | 0 | 0.00 | 0.00 | 9,162 | 9,348 | 9,255 | 100 |
| | | 300 | 0.33 | 0.00 | 9,528 | 8,722 | 9,124 | 98 |
| | | 600 | 0.67 | 0.00 | 8,670 | 8,912 | 8,791 | 92 |
| | | 1,100 | 0.00 | 0.33 | 6,609 | 6,657 | 6,633 | 57 |
| | | 1,200 | 0.67 | 0.00 | 6,032 | 5,864 | 5,947 | 46 |
| | | 4NQO 0.4 | 0.00 | 10.33∗∗ | 7,309 | 7,192 | 7,250 | 67 |
| 24–0 | − | 0 | 0.33 | 0.33 | 8,900 | 8,910 | 8,905 | 100 |
| | | 225 | 0.67 | 0.00 | 8,996 | 9,273 | 9,134 | 104 |
| | | 450 | 0.33 | 0.00 | 9,589 | 9,273 | 9,431 | 109 |
| | | 800 | 1.00 | 0.67 | 6,245 | 6,552 | 6,398 | 56 |
| | | 900 | 0.00 | 0.33 | 6,031 | 5,889 | 5,960 | 49 |
| | | 4NQO 0.4 | 0.00 | 9.33∗∗ | 7,012 | 6,669 | 6,840 | 64 |
| Initial cell count | | | | | 3,142 | 3,162 | 3,152 | |

∗∗Significant difference at P<0.01 levels compared with the negative control by Fisher's exact test. aTreatment time – recovery time, in hours. bGaps excluded; 150 metaphases were examined per culture. cAfter harvesting mitotic cells, each culture was trypsinized and suspended in 0.5 mL of 0.1% trypsin and 5 mL of culture medium. A 0.4 mL aliquot per culture was diluted 50-fold with 19.6 mL of Isoton® solution, and the cells in 0.5 mL of the dilution were counted twice per culture using a Coulter Counter model Z2. The actual number of cells per flask = mean cell count × 550. dRelative increase in cell count = ((cell count of treated flask − initial cell count)/(cell count of the negative control flask − initial cell count)) × 100. PP, polyploid; ER, endoreduplication; B[a]P, benzo[a]pyrene (positive control); and 4NQO, 4-nitroquinoline-1-oxide (positive control).
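Footnote (c) of Table 7 compresses a dilution calculation into the single factor ×550: under the footnote's stated volumes, it is the 50-fold dilution times the 11:1 ratio of the 5.5 mL cell suspension to the 0.5 mL counted aliquot. A minimal sketch of that scaling, assuming exactly the volumes given in the footnote:

```python
def cells_per_flask(mean_coulter_count: float) -> float:
    """Scale a mean Coulter count (cells per 0.5 mL of the 50-fold dilution)
    up to the total number of cells in the harvested flask."""
    dilution_factor = 50.0                 # 0.4 mL diluted with 19.6 mL Isoton
    suspension_ml, counted_ml = 5.5, 0.5   # 0.5 mL trypsin + 5 mL medium
    return mean_coulter_count * dilution_factor * suspension_ml / counted_ml

print(cells_per_flask(16.0))  # 16 x 550 = 8,800 cells per flask (illustrative)
```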
#### 3.2.3. Micronucleus Test Using Mouse Bone Marrow Cells
One mouse at 500 mg/kg/day died after the first administration, but the death was not considered EAG-related, and there were no noticeable macroscopic signs in any of the other survivors that could be attributed to EAG. There was no statistically significant or dose-related increase in the frequency of MNPCE at any dose level of EAG compared to the negative control (Table 8). The PCE:RBC ratio showed no difference at any dose level of EAG. In contrast, the micronucleus frequency and the PCE:RBC ratio were significantly changed by the positive control (P<0.01) compared to the negative control.

Table 8
Observations of micronucleus and PCE:RBC ratio.
| Test article | Dose (mg/kg/day) | Animals per dose | MNPCEa (mean ± SD) | PCE:RBC ratio (mean ± SD) | % control |
|---|---|---|---|---|---|
| EAG | 0 | 6 | 1.33 ± 1.03 | 0.57 ± 0.01 | 100 |
| | 500 | 5b | 1.20 ± 0.84 | 0.58 ± 0.02 | 101 |
| | 1,000 | 6 | 1.00 ± 1.10 | 0.57 ± 0.02 | 100 |
| | 2,000 | 6 | 1.50 ± 1.38 | 0.57 ± 0.01 | 99 |
| CPA | 70 | 6 | 110.50 ± 29.71∗∗ | 0.39 ± 0.02## | 69 |

∗∗Significant difference at P<0.01 levels compared with the negative control by the Mann–Whitney U test. ##Significant difference at P<0.01 levels compared with the negative control by Student's t-test. aNumber of MNPCE per 4,000 PCE. bOne mouse died. PCE, polychromatic erythrocyte; RBC, red blood cells (polychromatic erythrocytes + normochromatic erythrocytes); MNPCE, micronucleated polychromatic erythrocyte; and CPA, cyclophosphamide monohydrate (positive control).
## 4. Discussion
Regarding the utilization of A. glehni, several studies have concentrated on its pharmacological and therapeutic effects on diverse diseases [17–21]. Recently, it has been reported that ethanolic extract of A. glehni ameliorates scopolamine-induced memory dysfunction, including long-term and working memory, and improves memory function [22]. According to that study, its effects are due to the inhibition of acetylcholinesterase activity and the activation of the ERK-CREB-BDNF and PI3K-Akt-GSK-3β pathways [22, 23]. However, further application of A. glehni in herbal medicine or functional food has been limited because knowledge of its safety is inadequate. The present study evaluated the potential toxicity of EAG after 2- and 13-week repeated oral administration in SD rats. Moreover, genotoxicity studies, including a bacterial reverse mutation assay, a chromosomal aberration test, and a micronucleus test in mammalian bone marrow, were performed to investigate the genotoxicity of EAG.

In the acute toxicity study, EAG was found to be nontoxic in rats, and the approximate lethal dose was higher than 5,000 mg/kg (data not shown). In the 2- and 13-week repeated oral toxicity studies, no test article-related changes were found in body weights, food and water consumption, organ weights, or the ophthalmological, hematological, and clinical biochemistry tests. Although one male rat at 1,250 mg/kg/day died, there were no noticeable clinical signs or findings to ascertain the cause of death; the necropsy findings of retention of a dark brown substance in the lung and the thoracic cavity were determined to be related to an administration error. In addition, compound-colored stool was observed and considered to be related to EAG treatment. However, this clinical sign was regarded as a change attributable to the excretion of EAG. Therefore, these findings were not considered to be adverse effects [47].

Urinalysis, which evaluates renal function, is often affected by test article toxicity, and urinalysis parameters serve as an indirect indicator of kidney damage [48]. In the urinalysis, significant changes were observed for ketone bodies at 5,000 mg/kg/day in males and specific gravity at all doses in females under the present experimental conditions. However, these results were not considered toxic effects of EAG since the degree of change was small and there were no related histopathological findings in the kidney. At necropsy, adhesion of an irregular surface of the middle lobe of the liver to the diaphragm was observed in one female at 1,250 mg/kg/day and in one male of the negative control. An irregular surface of the stomach and weak brown discoloration of the kidney were each observed only once in males at 5,000 mg/kg/day. Retention of clear fluid was observed in all female groups, and one female at 1,250 mg/kg/day exhibited dark red fluid. Other abnormalities, including partly black spots on the glandular region of the stomach and a nodule of the ovary, were each observed only once, in females at 1,250 mg/kg/day and 2,500 mg/kg/day, respectively. All gross findings were microscopically confirmed as corresponding findings. However, these changes were not considered related to EAG because their incidence was low, there was no dose-response relationship, and they have been reported to be spontaneous and incidental [45, 46].

From the histopathological examination, an EAG-related change was observed in the nonglandular region (limiting ridge) of the stomach.
Squamous cell hyperplasia of the limiting ridge of the stomach increased in the EAG-treated groups in both sexes, with a dose-response relationship. However, the change was considered a reversible effect because the lesions were graded around mild in severity and cellular atypia was not observed. Indeed, it has been shown that, when cellular atypia is not observed, squamous cell hyperplasia caused by ethyl acrylate can recover completely [45]. In addition, the forestomach is present only in rodents, and no EAG-related changes were observed in the other digestive organs, including the glandular stomach. Therefore, the toxicological significance was considered minimal. Consequently, the no-observed-adverse-effect level (NOAEL) was 5,000 mg/kg/day in both sexes, and no target organ was observed.

The genotoxicity of EAG was evaluated by the Ames test, the in vitro chromosomal aberration test in CHL cells, and the in vivo mammalian micronucleus test. In the bacterial reverse mutation test, four histidine auxotroph strains of S. typhimurium and a tryptophan auxotroph strain of E. coli were tested both in the presence and absence of an exogenous metabolic activation system. The number of revertants did not increase at any dose level of EAG under the present experimental conditions except in the TA98 strain. In the TA98 strain in the presence of S9 mix, there was a dose-related, reproducible increase in the number of revertants. However, because EAG was confirmed to contain histidine, the results of the Ames test were deemed inconclusive. As an additional test of the effect of histidine content, two test articles were evaluated at 5,000 µg/plate: a leaf extract (histidine content 39.52 mg/100 g) and a leaf and stem extract (histidine content 16.69 mg/100 g) of A. glehni. The number of revertants with the leaf extract of A. glehni was about 1.5-fold that with the leaf and stem extract, indicating that the finding in TA98 was related to the histidine content. It has been reported that increases in revertants in the Ames test can be caused by a test article containing histidine [49]. Histidine compounds can generate additional background growth of S. typhimurium on minimal medium plates, thereby resulting in spontaneous his+ revertants [50, 51]. Moreover, it has been noted that plants and their metabolites can cause false positives because they contain histidine [52–54]. Therefore, the increase in revertants in the TA98 strain was assessed to be a false positive due to the histidine content of EAG.

For the chromosome aberration test, CHL cells were used to investigate the potential to induce chromosomal aberrations both in the presence and absence of an exogenous metabolic activation system. The result is regarded as clearly negative if there is no statistically significant increase in the frequency of aberrant metaphases at any dose level compared to the negative control and there is no dose-response relationship or increase in the frequency of aberrant metaphases in any treatment series. The results met these criteria, so EAG was judged clearly negative.

The results of the micronucleus test using mouse bone marrow cells showed that EAG did not induce any statistically significant or dose-related increase in the frequency of MNPCE per 4,000 PCE at any dose level. In addition, there was no significant difference in the PCE:RBC ratio. These results indicate that EAG did not induce micronuclei in ICR mouse bone marrow cells under the present experimental conditions.
Taken together, these results revealed that EAG was nongenotoxic in both in vitro and in vivo models.
## 5. Conclusion
This study assessed the safety of EAG using different model approaches, including a subchronic oral toxicity study and a battery of genotoxicity studies. When rats were given 2- or 13-week repeated-dose oral administration of EAG at up to 5,000 mg/kg/day, the NOAEL was considered to be 5,000 mg/kg/day, and no target organs were identified in either sex under the experimental conditions of this study. Moreover, EAG was classified as nonmutagenic and nonclastogenic in the genotoxicity testing. Collectively, these results show a lack of general toxicity and genotoxicity for EAG, supporting clinical work for its development as a herbal medicine.
---
*Source: 1018101-2021-12-30.xml* | 1018101-2021-12-30_1018101-2021-12-30.md | 78,562 | Evaluation of Subchronic Toxicity and Genotoxicity of Ethanolic Extract ofAster glehni Leaves and Stems | Mi Kyung Lim; Ju Yeon Kim; Jeongho Jeong; Eun Hye Han; Sang Ho Lee; Soyeon Lee; Sun-Don Kim; Jinu Lee | Evidence-Based Complementary and Alternative Medicine
(2021) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2021/1018101 | 1018101-2021-12-30.xml | ---
## Abstract
Aster glehni, a traditional plant on Ulleung Island in the Republic of Korea, has been recognized for its multiple medicinal properties. However, potential toxicity and safety analyses of A. glehni have not been previously investigated. Therefore, this study aimed to evaluate the safety profile of ethanolic extract of A. glehni leaves and stems (EAG) in terms of genotoxicity and subchronic oral animal toxicity under OECD guidelines and GLP conditions. Toxicological assessments were performed at doses of 1,250, 2,500, and 5,000 mg/kg/day in a 13-week oral repeated-dose toxicity study of EAG in male and female SD rats. In addition, an Ames test, an in vitro mammalian chromosomal aberration test, and a micronucleus test were performed. No toxicological changes in clinical signs, body weights, water and food consumption, urinalysis, hematology, clinical biochemistry, gross findings, and histopathological examinations were observed in subchronic oral animal toxicity. In addition, EAG gave negative results when evaluated using in vitro and in vivo genotoxicity tests. In conclusion, the no-observed-adverse-effect level (NOAEL) of EAG was considered to be 5,000 mg/kg/day, and no target organs were identified in both sexes of rats. EAG was also classified as nonmutagenic and nonclastogenic in genotoxicity testing. Collectively, these results show a lack of general toxicity and genotoxicity for EAG that supports clinical work for development as a herbal medicine.
---
## Body
## 1. Introduction
Medicinal plants have been used traditionally therapeutic agents, but recently, they have seen more and more as substitutes for chemical agents with side effects or drug resistance [1, 2]. Herbal medicine derived from medicinal plants often has anti-oxidant, anti-microbial, and anti-inflammatory properties so they may provide potential options for the treatment of diseases such as COVID-19 that yet has no approved drug [3–7]. Medicinal plants are utilized for the treatment of various diseases based on their unique biological properties such as anti-cancer, thrombolytic, and gastrointestinal function control, as well as for the improvement of neurological diseases by anti-nociceptive, anti-depressant, and anxiolytic activity [1, 2, 8–15]. However, some medicinal plants have reported toxicity like hepatotoxicity and renal toxicity at high doses and long-term use [5, 6, 16]. Therefore, it is essential to evaluate toxicity profiles for human safety.Aster glehni Fr. Schm., widely distributed on Ulleung Island, Republic of Korea, is known to be a traditional edible herb. In a Korean traditional medical encyclopedia known as Dongui Bogam, it is described that A. glehni has anti-pyretic and analgesic effects and suppresses phlegm and coughing [17]. A. glehni has been used for the treatment of a variety of diseases including diabetes mellitus, hypercholesterolemia, and cardiovascular disease [18]. In addition, it has been reported that ethanolic extract of A. glehni shows anti-adipogenic, hypouricemic, and anti-inflammatory effects [18–20].A. glehni contains caffeoylquinic acid (CQ) derivatives such as 3,5-di-O-caffeoylquinic acid (3,5-DCQA), 5-O-caffeoylquinic acid, 3-O-caffeoylquinic acid, and 3-O-p-coumaroylquinic acid and flavonoids such as astragalin and kaempferol [18, 21]. According to recent research, ethanolic extract of A. glehni (EAG) and 3,5-DCQA have ameliorating effects on memory impairment caused by scopolamine in male ICR mice [22, 23]. It has also been reported that 3,5-DCQA inhibits the activity of acetylcholinesterase (AChE) and amyloid-beta (Aβ) induced cytotoxicity in SH-SY5Y neuroblastoma cells [23–25]. These results suggest that 3,5-DCQA might play an important role in the ameliorative effects of EAG on memory dysfunction.Although effects of EAG have been generally extensively perceived to be therapeutic, to date, adverse effects of EAG use in humans have not been reported. Thus, the present study of EAG was designed under Regulation on Approval and Notification of Herbal (crude) Medicinal Preparation, Etc. of the Korea Ministry of Food and Drug Safety (MFDS) [26] to provide safety information for a subsequent clinical trial. The toxicity studies of EAG were conducted as 2-week and 13-week repeated-dose oral toxicity tests in SD rats and genotoxicity tests following the Good Laboratory Practice regulations of the Organization for Economic Cooperation and Development [27] and MFDS [28].
## 2. Materials and Methods
### 2.1. Preparation of Ethanolic Extract of Aerial Parts ofA. glehni
The aerial parts, leaves, and stems, ofA. glehni were collected from Ulleung Island and dried naturally. The EAG was prepared by the method as previously described [23]. Briefly, the finely chopped sample was extracted in a 15-fold mass of 70% ethanol. The first extract was collected; then the second extract was obtained in a 10-fold mass of 70% ethanol. After mixing with diatomite, it was filtered and concentrated to 10–20 Brix. After adding an equal amount of dextrin to the concentrates, the mixtures were sterilized at 95°C for 30 min. The sterilized samples were spray-dried and filtered through a 60-mesh sieve to obtain a solid extract powder (Specimen Voucher No. AG-D022). For quality assurance, the final A. glehni extract was standardized by 3,5-dicaffeoylquinic acid (3,5-DCQA) based on high-performance liquid chromatography (HPLC) at 330 nm. The content of the marker compound (3,5-DCQA) in the EAG was 2.37 mg/g. The results of the amino acid composition analysis are shown in Table 1. The total protein content in EAG was 1,759.84 mg/100 g. Proline (35.1%), aspartic acid (21.6%), and glutamic acid (11.2%) were the main amino acids existing in EAG.Table 1
Amino acid composition of EAG.
Amino acidaTyrGlySerAlaGluLysLeuMetValArgAspIleThrPheProHisCysTrpAGND225.0183.4161.9355.047.1193.719.3141.153.6292.6117.2154.6146.2203.739.596.913.0EAGND53.978.367.9197.67.184.6ND41.326.7380.146.460.443.0616.916.728.210.9aUnit: mg/100 g, Tyr, tyrosine; Gly, glycine; Ser, serine; Ala, alanine; Glu, glutamic acid; Lys, lysine; Leu, leucine; Met, methionine; Val, valine; Arg, arginine; Asp, aspartic acid; Ile, isoleucine; Thr, threonine; Phe, phenylalanine; Pro, proline; His, histidine; Cys, cysteine; Trp, tryptophan; ND, not detected; AG, ethanolic leaf extract of Aster glehni; and EAG, ethanolic leaf and stem extract of Aster glehni.
### 2.2. Experimental Animals and Maintenance
Specific pathogen-free (SPF) SD rats were obtained from Orient Bio Inc. (Seongnam, Korea). The animals were maintained in the facility with temperature (23 ± 3°C), relative humidity (55 ± 15%), and ventilation (10–20 air changes/hour) at Chemon Inc. in accordance with the Guide for the Care and Use of Laboratory Animals, 8th edition [29]. Food and water were provided, ad libitum, with a 12 hours light:12 hours dark cycle. All procedures and protocols were reviewed and approved by the Institutional Animal Care and Use Committee (IACUC) of Chemon Inc. performed in accordance with the guideline published by the Organization for Economic Cooperation and Development (OECD) as well as the Good Laboratory Practice (GLP) regulations for Nonclinical Laboratory Studies of the Ministry of Food Drug Safety (MFDS) in the Republic of Korea [27, 28, 30, 31].
### 2.3. Thirteen-Week Repeated Oral Toxicity Study
For the 13-week repeat-dose toxicity study, conducted in accordance with OECD Guideline 408 [32] under GLP regulations, healthy 6-week-old male and female SD rats weighing 186.56 ± 8.70 g and 144.41 ± 7.63 g, respectively, were randomly assigned to 4 groups (10/sex/group). Vehicle (distilled water for injection) or graded doses of EAG (1,250, 2,500, and 5,000 mg/kg/day, adjusted to body weight) were administered to rats by oral gavage once daily for 13 weeks at a dose volume of 10 mL/kg body weight, following completion of a 14-day repeated oral toxicity dose range finding (DRF) study in which no adverse finding was seen at doses up to 5,000 mg/kg/day. The high dose was selected according to the results of an acute toxicity study in which no significant test article-related changes in mortalities or clinical signs were observed at 5,000 mg/kg/day (data not shown). The rats were observed daily for clinical signs, including mortality, general appearance, and behavioral abnormality, until terminal sacrifice. Body weights and food/water consumption were recorded weekly throughout the study. An ophthalmological examination was conducted in the last week of observation; the anterior parts of the eye, optic media, and fundus were examined with a fundus camera (Vantage Plus Digital LED; Keeler Instruments Inc., Malvern, PA, USA). At study termination, all rats were euthanized by isoflurane (2% to 5%) inhalation for blood sample collection.
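Because each dose group received the same dose volume (10 mL/kg), the required formulation concentration scales linearly with the dose; a minimal illustrative sketch (the helper function is ours, not part of the study protocol):

```python
# Illustrative helper (not from the study protocol): converting a gavage dose
# (mg/kg/day) at a fixed dose volume (mL/kg) into the formulation
# concentration to be prepared (mg/mL).
def gavage_concentration(dose_mg_per_kg: float, volume_ml_per_kg: float = 10.0) -> float:
    return dose_mg_per_kg / volume_ml_per_kg

for dose in (1250, 2500, 5000):
    print(dose, "mg/kg/day ->", gavage_concentration(dose), "mg/mL")
# 1250 -> 125.0, 2500 -> 250.0, 5000 -> 500.0 mg/mL
```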
### 2.4. Urinalysis, Hematology, and Clinical Biochemistry
Urinalysis, hematological analyses, and serum biochemistry analyses were conducted as described previously [31].
### 2.5. Gross Findings, Organ Weights, and Histopathological Examinations
At study termination, the animals were necropsied, and both the macroscopic and microscopic features of the internal organs were analyzed. The organ weights were measured as described previously [31]. All tissues from each animal were preserved, and lesions were graded using a five-step scale of increasing severity (minimal, mild, moderate, severe, and massive). Brain, jejunum, peripheral nerve, pituitary gland, ileum, femorotibial joint, lung, cecum, urinary bladder, heart, colon, testis, thymus, rectum, epididymis, spleen, eye with optic nerve, prostate gland, adrenal gland, thyroid gland with parathyroid gland, seminal vesicle with coagulating gland, kidney, Harderian gland, ovary, liver, salivary gland, uterus with cervix, tongue, aorta, vagina, trachea, sternum with bone marrow, skin, esophagus, mandibular lymph node, mammary gland, stomach, mesenteric lymph node, skeletal muscle, pancreas, thoracic spinal cord, gross lesions, and duodenum were processed for histopathological examination using Pristima® (Xybion, Lawrenceville, NJ, USA). Diagnostic terms in the Lexicon of Pristima® were used primarily. The Standardized System of Nomenclature and Diagnostic Criteria-Guides for Toxicologic Pathology [33] and the Covance Glossary [34] were also utilized.
### 2.6. Bacterial Reverse Mutation Assay
Four histidine auxotroph strains of Salmonella typhimurium (TA100, TA1535, TA98, and TA1537) [35] and a tryptophan auxotroph strain of Escherichia coli, WP2 uvrA [36], were used to assess the mutagenicity of EAG according to OECD Guideline 471 [37] under GLP conditions. The mutagenic activity of EAG was assessed both in the presence and in the absence of an external metabolic activation system from rat livers (S9 fraction) using the direct plate incorporation method. For the plating assay, 0.5 mL of S9 mix (or sodium phosphate buffer, pH 7.4, for nonactivation plates), 0.1 mL of bacterial culture (containing approximately 10⁸ viable cells), and 0.1 mL of test article were mixed with 2.0 mL of overlay agar. The contents of each tube were mixed and poured over the surface of a minimal agar plate. After the top layers solidified, the plates were inverted and incubated at 37 ± 2°C for 50 ± 2 h, and revertant colonies were counted with the unaided eye. EAG was applied at dose levels of 50, 150, 500, 1,500, 3,000, and 5,000 µg/plate. The positive control substances were 2-aminoanthracene (2-AA), benzo[a]pyrene (B[a]P), sodium azide (SA), 2-nitrofluorene (2-NF), acridine mutagen ICR 191 (ICR-191), and 4-nitroquinoline-1-oxide (4NQO). At least three independent experiments were performed using triplicate plates for each concentration. Results are expressed as revertant colonies and mutagenic indexes (MI).
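The mutagenic index used here is defined in the footnote of Table 6 as the ratio of revertant colonies on a treated plate to those on the negative control plate. A minimal sketch of that calculation follows; the 2-fold screening threshold in the sketch is a common convention for this assay, not a criterion stated in this paper:

```python
# Minimal illustrative sketch: mutagenic index (MI) = treated revertants /
# negative-control revertants, per the factor definition under Table 6.
# The >= 2-fold flag is a common screening convention assumed here,
# not a criterion stated in the paper.
def mutagenic_index(treated_mean: float, control_mean: float) -> float:
    return treated_mean / control_mean

# TA98 with S9 mix; mean colony counts from Table 6 (negative control: 25).
for dose, revertants in [(1500, 40), (3000, 56), (5000, 63)]:
    mi = mutagenic_index(revertants, 25)
    flag = " (>= 2-fold)" if mi >= 2.0 else ""
    print(f"{dose} ug/plate: MI = {mi:.2f}{flag}")
# MIs of 1.60, 2.24, 2.52; Table 6 reports 1.6, 2.3, and 2.5, the small
# differences reflecting rounding of the tabulated means.
```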
### 2.7. In Vitro Chromosomal Aberration Test
A chromosomal aberration test was performed to evaluate the potential of EAG to induce structural and/or numerical chromosomal aberrations in the CHL/IU cell line, derived from lung fibroblasts of a female Chinese hamster, under OECD Guideline 473 [38]. The treatments were of three types according to the presence or absence of the metabolic activation system. Treatment 1 was performed for 6 h with the metabolic activation system (S9 mix), and 18 h of recovery time was allowed before the chromosomal aberrations were observed. Treatments 2 and 3 were performed for 6 h and 24 h, respectively, without S9 mix, followed by 18 h and 0 h of recovery, respectively. In Treatment 1, EAG was used at concentrations of 0 (negative control), 350, 700, 1,300, and 1,400 µg/mL. Treatments 2 and 3 were applied at 0, 300, 600, 1,100, and 1,200 µg/mL and at 0, 225, 450, 800, and 900 µg/mL, respectively. Approximately 22 hours after treatment, 50 μL of colchicine solution was added to each culture (final concentration of 1 μM) and incubated for 2 hours for mitotic arrest. The mitotic cells were detached by gentle shaking. The media containing mitotic cells were centrifuged, and the cell pellets were resuspended in 75 mM potassium chloride solution for hypotonic treatment. The cells were then fixed with fixative (methanol:glacial acetic acid = 3:1 v/v), and slides were prepared by the air-drying method and stained with 5% Giemsa solution. Two slides were prepared for each culture. The results are expressed as the frequency (%) of metaphases with structural or numerical aberrations per 300 metaphases. The relative increase in cell count (RICC, %) was used as an indicator of concurrent cytotoxicity to determine the highest concentration. From the cell counts, RICC (%) was calculated as

$$\mathrm{RICC}\ (\%) = \frac{\text{cell count of treated flask} - \text{initial cell count}}{\text{cell count of control flask} - \text{initial cell count}} \times 100. \tag{1}$$
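Equation (1) is straightforward to apply to the flask counts reported later in Table 7; a minimal sketch:

```python
# Minimal sketch: the RICC cytotoxicity index of Eq. (1).
def ricc(treated: float, control: float, initial: float) -> float:
    """Relative increase in cell count, in percent."""
    return (treated - initial) / (control - initial) * 100

# Example with values from Table 7 (24-0 h series, 800 ug/mL):
# mean treated count 6,398; negative control 8,905; initial count 3,152.
print(round(ricc(6_398, 8_905, 3_152)))  # -> 56, as reported in Table 7
```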
### 2.8. In Vivo Micronucleus Test in Mammalian Bone Marrow
Eight-week-old male ICR mice (35.3 ± 1.3 g) were administered EAG orally once a day for two consecutive days at doses of 500, 1,000, and 2,000 mg/kg/day (n = 6 in each group) according to OECD Guideline 474 [39]. Sterile distilled water for injection (10 mL/kg) was used as the negative control. Cyclophosphamide monohydrate (CPA), 70 mg/kg, was administered once intraperitoneally on the day of the second administration as the positive control. All mice were observed daily for clinical signs. All animals were sacrificed about 24 h after the final administration, and bone marrow preparations were made for the evaluation of micronuclei and cytotoxicity. The bone marrow cells were fixed with methanol according to the method described by Schmid [40] and stained with acridine orange prepared based on the method of Hayashi [41]. The cells were observed and counted using a fluorescence microscope, and the identification of micronuclei was confirmed by the method of Hayashi [41]. Micronucleated polychromatic erythrocytes (MNPCE) were counted among 4,000 polychromatic erythrocytes (PCE) per animal. The ratio of PCE to total erythrocytes (red blood cells), an indicator of cytotoxicity [42], was determined by counting 500 erythrocytes per animal.
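The two endpoints scored in this test reduce to simple proportions; a minimal sketch (the example counts are hypothetical, and the per-1,000-PCE normalization is one common way to express MNPCE frequency, not necessarily the study's):

```python
# Minimal illustrative sketch of the micronucleus-test endpoints.
# Example counts are hypothetical; the normalizations are common conventions.
def mnpce_per_1000(mnpce_count: int, pce_scored: int = 4_000) -> float:
    """Micronucleated PCE per 1,000 PCE scored."""
    return 1_000 * mnpce_count / pce_scored

def pce_rbc_ratio(pce: int, nce: int) -> float:
    """PCE / (PCE + NCE); RBC = polychromatic + normochromatic erythrocytes."""
    return pce / (pce + nce)

print(mnpce_per_1000(2))        # 0.5 MNPCE per 1,000 PCE
print(pce_rbc_ratio(285, 215))  # 0.57, the negative-control ratio in Table 8
```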
### 2.9. Statistical Analysis
SPSS Statistics 22 for Medical Science was used for all statistical analyses, and the level of significance was P < 0.05. Body weights, food and water consumption, urine volume, hematological and clinical biochemistry parameters, and organ weights were assumed to be normally distributed and were analyzed by parametric one-way analysis of variance (ANOVA); the assumption of homogeneity of variance was tested using Levene's test. The urinalysis data were rank-transformed and analyzed by the nonparametric Kruskal–Wallis H test. Fisher's exact test was used to compare the frequency of aberrant metaphases between the negative control and the test article-treated groups in the chromosomal aberration test. In the micronucleus test, the frequency of micronuclei was analyzed by the nonparametric Kruskal–Wallis H test, and the negative and positive control groups were compared by the Mann–Whitney U test. Dose-responsiveness was tested by the linear-by-linear association chi-square test. The PCE:RBC ratio was assumed to be normally distributed and analyzed by one-way ANOVA, with the assumption of homogeneity of variance tested using Levene's test. Student's t-test was used to test for a difference between the means of the negative and positive controls.
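As an illustration of the parametric branch of this workflow, a minimal sketch using SciPy (the study itself used SPSS 22; the data below are hypothetical):

```python
# Minimal illustrative sketch (assumes SciPy; the study itself used SPSS 22).
# Parametric branch: Levene's test for homogeneity of variance, then a
# one-way ANOVA across the four dose groups; the Kruskal-Wallis H test is
# the nonparametric route used for rank-transformed urinalysis data.
from scipy import stats

# Hypothetical per-animal values for one endpoint (e.g., RBC, 10^6/uL).
control   = [8.99, 9.10, 8.85, 9.05, 8.92]
low_dose  = [9.01, 8.95, 9.12, 8.88, 9.04]
mid_dose  = [8.81, 8.90, 8.75, 8.95, 8.84]
high_dose = [8.87, 8.80, 8.99, 8.91, 8.78]
groups = [control, low_dose, mid_dose, high_dose]

lev_stat, lev_p = stats.levene(*groups)
f_stat, anova_p = stats.f_oneway(*groups)
kw_stat, kw_p = stats.kruskal(*groups)
print(f"Levene p={lev_p:.3f}, ANOVA p={anova_p:.3f}, Kruskal-Wallis p={kw_p:.3f}")
```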
## 3. Results
### 3.1. Thirteen-Week Repeated Oral Toxicity Study
There is still insufficient toxicological information on the oral toxicity of EAG after long-term exposure. Therefore, a repeated-dose toxicity DRF study of EAG at doses of 1,250, 2,500, and 5,000 mg/kg/day, administered by oral gavage for 14 days, was performed to assess initial toxicity. As a result, no EAG-related changes in mortalities, clinical signs, body weights, food and water consumption, ophthalmological examination, urinalysis, hematological and clinical biochemistry tests, organ weights, or gross findings were observed during the 2-week treatment period (body weights are shown in Figure 1; other data not shown).

Figure 1. Effect of ethanolic extract of A. glehni on body weights in SD rats. (a) Mean body weights of male rats and (b) of female rats treated with EAG for 2 weeks. (c) Mean body weights of male rats and (d) of female rats treated with EAG for 13 weeks. Values are expressed as mean ± SD (n = 9–10 per group). Significant difference at ∗P < 0.05 and ∗∗P < 0.01 compared with the negative control.

In the 13-week repeated-dose toxicity study, although one male rat treated with 1,250 mg/kg/day of EAG died on day 65, there were no clinical signs or lesions in the histopathological examination. Compound-colored stool was observed at 5,000 mg/kg/day in both sexes from day 10 to the necropsy day, and salivation was sporadically observed in males at 5,000 mg/kg/day. Significant decreases in mean body weight were observed in males at 1,250 and 2,500 mg/kg/day (P < 0.05 and P < 0.01; Figure 1), but these changes did not occur in a dose-dependent manner, and the values were within the normal physiological ranges [43, 44]. No significant differences in female body weight were found between the treatment and control groups. There were no EAG-related effects on food intake, water intake, organ weights, or the ophthalmological test in either sex (data not shown).

A few mean values of urinalysis parameters differed with statistical significance from the negative control (P < 0.05 and P < 0.01; Table 2). The ketone body level in males at 5,000 mg/kg/day and the specific gravity at all doses in females were significantly higher than those of the negative control. In addition, the pH in females in all EAG groups was significantly higher, and the 24-hour total urine volume in females at 1,250 and 5,000 mg/kg/day was significantly lower, than those of the negative control. However, these changes were within the normal physiological ranges [43, 44]. Therefore, these observations were not considered toxicologically significant.
Table 2. Urinalysis of male and female SD rats in the 13-week repeated oral toxicity study of EAG.

| Tests | Result | Male 0 | Male 1,250 | Male 2,500 | Male 5,000 | Female 0 | Female 1,250 | Female 2,500 | Female 5,000 |
|---|---|---|---|---|---|---|---|---|---|
| No. of animals examined | | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| GLU | Negative | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| BIL | Negative | 5 | 5 | 5 | 4 | 5 | 5 | 5 | 5 |
| | Small | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| KET | Negative | 3 | 1 | 4 | 1 | 5 | 5 | 4 | 4 |
| | Trace | 2 | 4 | 1 | 0 | 0 | 0 | 1 | 1 |
| | 15 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 |
| | 40 | 0 | 0 | 0 | 1∗ | 0 | 0 | 0 | 0 |
| SG | ≤1.005 | 1 | 0 | 0 | 0 | 5 | 1 | 1 | 0 |
| | 1.010 | 4 | 2 | 4 | 2 | 0 | 3 | 2 | 3 |
| | 1.015 | 0 | 2 | 1 | 2 | 0 | 1 | 1 | 1 |
| | 1.020 | 0 | 1 | 0 | 1 | 0 | 0∗ | 1∗ | 1∗∗ |
| pH | 6.5 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| | 7.0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| | 7.5 | 1 | 0 | 0 | 0 | 3 | 1 | 0 | 0 |
| | 8.0 | 2 | 0 | 0 | 1 | 0 | 0 | 1 | 0 |
| | 8.5 | 2 | 5 | 5 | 4 | 0 | 4∗ | 4∗∗ | 5∗∗ |
| Volume (mL) | | 13.0 ± 4.6 | 11.6 ± 1.9 | 15.2 ± 1.1 | 11.4 ± 4.4 | 17.6 ± 5.5 | 10.8 ± 2.8∗ | 12.4 ± 4.6 | 8.8 ± 4.0∗∗ |

EAG doses in mg/kg/day. ∗/∗∗Significant difference at P < 0.05/P < 0.01 compared with the negative control by the Mann–Whitney U test. GLU, glucose (mg/dL); BIL, bilirubin (mg/dL); KET, ketone body (mg/dL); SG, specific gravity.

Hematology evaluation showed that the lymphocyte count at 5,000 mg/kg/day in males was significantly higher and that the prothrombin time at all doses in females was significantly lower than in the negative control (P < 0.05 and P < 0.01; Table 3). However, these results were also within the normal physiological ranges [43, 44]. The results of the clinical biochemistry tests are presented in Table 4. No EAG-related changes in clinical biochemistry parameters were found in either sex.
Table 3. Hematological parameters of male and female SD rats in the 13-week repeated oral toxicity study of EAG.

| Tests | 0 mg/kg/day | 1,250 mg/kg/day | 2,500 mg/kg/day | 5,000 mg/kg/day |
|---|---|---|---|---|
| **Male** | | | | |
| RBC (10⁶/μL) | 8.99 ± 0.55 | 9.01 ± 0.22ᵃ | 8.81 ± 0.37 | 8.87 ± 0.42 |
| HGB (g/dL) | 15.3 ± 0.5 | 15.5 ± 0.4ᵃ | 15.0 ± 0.5 | 15.0 ± 0.4 |
| HCT (%) | 47.3 ± 1.8 | 47.9 ± 1.4ᵃ | 46.7 ± 1.3 | 46.6 ± 1.6 |
| MCV (fL) | 52.7 ± 1.9 | 53.2 ± 1.0ᵃ | 53.1 ± 2.6 | 52.5 ± 1.1 |
| MCH (pg) | 17.1 ± 0.8 | 17.2 ± 0.3ᵃ | 17.0 ± 1.0 | 16.9 ± 0.5 |
| MCHC (g/dL) | 32.4 ± 0.5 | 32.3 ± 0.4ᵃ | 32.1 ± 0.8 | 32.3 ± 0.5 |
| PLT (10³/μL) | 919.2 ± 61.3 | 905.9 ± 93.0ᵃ | 890.4 ± 71.7 | 933.3 ± 74.8 |
| WBC (10³/μL) | 6.30 ± 1.37 | 7.22 ± 2.18ᵃ | 7.54 ± 1.16 | 7.91 ± 0.97 |
| NEU (10³/μL) | 1.3 ± 0.3 | 1.5 ± 0.6ᵃ | 1.6 ± 0.7 | 1.1 ± 0.2 |
| LYM (10³/μL) | 4.6 ± 1.2 | 5.2 ± 1.6ᵃ | 5.4 ± 1.0 | 6.3 ± 1.0∗ |
| MONO (10³/μL) | 0.28 ± 0.12 | 0.31 ± 0.10ᵃ | 0.32 ± 0.11 | 0.30 ± 0.06 |
| EOS (10³/μL) | 0.11 ± 0.04 | 0.11 ± 0.03ᵃ | 0.13 ± 0.02 | 0.10 ± 0.03 |
| BASO (10³/μL) | 0.01 ± 0.01 | 0.01 ± 0.01ᵃ | 0.01 ± 0.00 | 0.01 ± 0.00 |
| PT (sec) | 8.0 ± 0.2 | 8.1 ± 0.2ᵃ | 8.0 ± 0.2 | 7.8 ± 0.2 |
| **Female** | | | | |
| RBC (10⁶/μL) | 7.98 ± 0.35 | 7.72 ± 0.30 | 7.86 ± 0.22 | 7.94 ± 0.28 |
| HGB (g/dL) | 14.3 ± 0.3 | 14.0 ± 0.4 | 14.1 ± 0.3 | 14.3 ± 0.4 |
| HCT (%) | 43.5 ± 1.3 | 42.8 ± 1.2 | 43.2 ± 1.9 | 43.7 ± 1.2 |
| MCV (fL) | 54.6 ± 1.8 | 55.5 ± 2.1 | 54.9 ± 0.8 | 55.0 ± 0.8 |
| MCH (pg) | 17.9 ± 0.6 | 18.1 ± 0.7 | 18.0 ± 0.4 | 17.9 ± 0.3 |
| MCHC (g/dL) | 32.8 ± 0.2 | 32.7 ± 0.4 | 32.7 ± 0.4 | 32.6 ± 0.4 |
| PLT (10³/μL) | 969.9 ± 60.9 | 1023.9 ± 89.3 | 977.4 ± 87.8 | 950.3 ± 66.4 |
| WBC (10³/μL) | 3.67 ± 0.95 | 3.75 ± 1.03 | 3.84 ± 1.22 | 4.01 ± 1.18 |
| NEU (10³/μL) | 0.5 ± 0.1 | 0.5 ± 0.1 | 0.5 ± 0.2 | 0.5 ± 0.2 |
| LYM (10³/μL) | 3.0 ± 0.9 | 3.0 ± 0.9 | 3.1 ± 1.0 | 3.3 ± 0.9 |
| MONO (10³/μL) | 0.09 ± 0.04 | 0.11 ± 0.03 | 0.11 ± 0.04 | 0.13 ± 0.05 |
| EOS (10³/μL) | 0.08 ± 0.03 | 0.08 ± 0.02 | 0.08 ± 0.03 | 0.07 ± 0.03 |
| BASO (10³/μL) | 0.01 ± 0.01 | 0.00 ± 0.01 | 0.00 ± 0.00 | 0.00 ± 0.01 |
| PT (sec) | 7.7 ± 0.2 | 7.4 ± 0.2## | 7.3 ± 0.2## | 7.4 ± 0.2## |

Data are expressed as mean ± standard deviation. ∗Significant difference at P < 0.05 compared with the negative control by Scheffé's multiple range test. ##Significant difference at P < 0.01 compared with the negative control by Duncan's multiple range test. ᵃNumber of animals in the group was 9; otherwise mean of 10 animals/sex/group. RBC, red blood cell; HGB, hemoglobin concentration; HCT, hematocrit; MCV, mean corpuscular volume; MCH, mean cell hemoglobin; MCHC, mean cell hemoglobin concentration; PLT, platelet count; WBC, white blood cell; NEU, neutrophil; LYM, lymphocyte; MONO, monocyte; EOS, eosinophil; BASO, basophil; PT, prothrombin time.
Table 4. Clinical biochemistry parameters of male and female SD rats in the 13-week repeated oral toxicity study of EAG.

| Tests | 0 mg/kg/day | 1,250 mg/kg/day | 2,500 mg/kg/day | 5,000 mg/kg/day |
|---|---|---|---|---|
| **Male** | | | | |
| AST (U/L) | 83.7 ± 16.7 | 77.2 ± 14.9ᵃ | 82.6 ± 15.9 | 70.6 ± 6.6 |
| ALT (U/L) | 33.3 ± 5.8 | 32.6 ± 6.3ᵃ | 33.1 ± 4.1 | 31.8 ± 3.1 |
| ALP (U/L) | 88.1 ± 15.4 | 82.1 ± 16.0ᵃ | 89.8 ± 17.8 | 93.5 ± 18.0 |
| CPK (U/L) | 160.9 ± 80.9 | 173.8 ± 124.2ᵃ | 157.1 ± 94.7 | 118.0 ± 50.8 |
| TBIL (mg/dL) | 0.149 ± 0.030 | 0.145 ± 0.032ᵃ | 0.145 ± 0.020 | 0.145 ± 0.020 |
| GLU (mg/dL) | 155.0 ± 19.3 | 149.7 ± 14.8ᵃ | 151.1 ± 22.1 | 145.0 ± 17.0 |
| TCHO (mg/dL) | 89.0 ± 21.2 | 101.8 ± 21.3ᵃ | 101.4 ± 24.0 | 104.8 ± 24.2 |
| TG (mg/dL) | 56.3 ± 25.8 | 63.2 ± 26.2ᵃ | 60.6 ± 19.9 | 65.6 ± 28.2 |
| TP (g/dL) | 6.27 ± 0.16 | 6.37 ± 0.19ᵃ | 6.29 ± 0.29 | 6.30 ± 0.26 |
| ALB (g/dL) | 2.90 ± 0.07 | 2.95 ± 0.11ᵃ | 2.95 ± 0.11 | 2.93 ± 0.09 |
| BUN (mg/dL) | 13.9 ± 1.6 | 14.7 ± 1.1ᵃ | 14.4 ± 2.3 | 13.6 ± 1.9 |
| CRE (mg/dL) | 0.40 ± 0.03 | 0.39 ± 0.02ᵃ | 0.39 ± 0.02 | 0.40 ± 0.03 |
| **Female** | | | | |
| AST (U/L) | 70.1 ± 11.2 | 76.8 ± 13.8 | 76.4 ± 14.2 | 73.0 ± 18.0 |
| ALT (U/L) | 22.1 ± 3.5 | 24.1 ± 6.3 | 25.0 ± 4.6 | 25.5 ± 3.6 |
| ALP (U/L) | 43.5 ± 15.6 | 54.2 ± 14.1 | 46.8 ± 13.4 | 45.8 ± 11.1 |
| CPK (U/L) | 146.6 ± 126.3 | 126.4 ± 84.9 | 149.6 ± 94.6 | 128.1 ± 54.7 |
| TBIL (mg/dL) | 0.169 ± 0.024 | 0.190 ± 0.038 | 0.176 ± 0.020 | 0.174 ± 0.018 |
| GLU (mg/dL) | 121.4 ± 14.5 | 129.3 ± 16.1 | 122.7 ± 14.6 | 122.0 ± 10.7 |
| TCHO (mg/dL) | 86.2 ± 20.0 | 92.5 ± 8.5 | 100.5 ± 17.6 | 85.3 ± 10.9 |
| TG (mg/dL) | 35.6 ± 6.3 | 35.0 ± 4.7 | 36.3 ± 8.4 | 32.9 ± 8.4 |
| TP (g/dL) | 5.89 ± 0.26 | 6.11 ± 0.21 | 6.02 ± 0.20 | 5.97 ± 0.17 |
| ALB (g/dL) | 2.99 ± 0.13 | 3.11 ± 0.15 | 3.05 ± 0.11 | 3.12 ± 0.11 |
| BUN (mg/dL) | 15.8 ± 2.3 | 15.0 ± 1.3 | 15.0 ± 2.4 | 14.3 ± 1.2 |
| CRE (mg/dL) | 0.48 ± 0.04 | 0.47 ± 0.03 | 0.49 ± 0.06 | 0.46 ± 0.02 |

Data are expressed as mean ± standard deviation. ᵃNumber of animals in the group was 9; otherwise mean of 10 animals/sex/group. AST, aspartate aminotransferase; ALT, alanine aminotransferase; ALP, alkaline phosphatase; CPK, creatine phosphokinase; TBIL, total bilirubin; GLU, glucose; TCHO, total cholesterol; TG, triglyceride; TP, total protein; ALB, albumin; BUN, blood urea nitrogen; CRE, creatinine.

In the histopathological examinations, a notable change was observed in the nonglandular stomach (Table 5). Squamous hyperplasia of the limiting ridge of the stomach was found in the EAG-treated groups of both sexes. The changes were observed in seven males at 2,500 mg/kg/day and in all males and eight females at 5,000 mg/kg/day (P < 0.01 and P < 0.001). However, there were no toxicologically significant changes in the histopathological examinations. Other lesions well known to occur spontaneously in SD rats of the same age were also observed [45, 46].
Table 5. Histopathologic findings of male and female SD rats in the 13-week repeated oral toxicity study of EAG.

| Organ | Finding | Male 0 | Male 1,250 | Male 2,500 | Male 5,000 | Female 0 | Female 1,250 | Female 2,500 | Female 5,000 |
|---|---|---|---|---|---|---|---|---|---|
| Nonglandular stomach | Hyperplasia, squamous cells, limiting ridge | 0 | 3 | 7∗∗ | 10∗∗∗ | 0 | 1 | 3 | 8∗∗∗ |
| No. of animals examined | | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 |

EAG doses in mg/kg/day. ∗∗/∗∗∗Significant difference at P < 0.01/P < 0.001 compared with the negative control by Fisher's two-tailed exact test.
### 3.2. Genotoxicity Test
#### 3.2.1. Bacterial Reverse Mutation Test (Ames Test)
No precipitation or other abnormality was observed on the bottom agar at the time of plate scoring. There was a dose-related increase in the number of colonies in TA98, one of the histidine-requiring strains, at 3,000 and 5,000 µg/plate: in the presence of S9 mix, the number of revertants was 2.3 and 2.5 times that of the negative control, respectively (Table 6). However, EAG is composed of various amino acids, including histidine (Table 1), and free histidine in a test article can support additional background growth of histidine-requiring strains and thereby inflate apparent revertant counts. In the other test strains, no substantial increases in the numbers of revertants per plate were observed at any dose level of EAG. Moreover, there were no signs of cytotoxicity at any dose level in any test strain. These results suggest that EAG is not mutagenic in the test strains. The mean revertants in the positive control for each strain showed a clear increase over the mean revertants in the negative control for that strain.
Table 6. Results of bacterial reverse mutation assay (colonies/plate [factor]ᵃ).

| Dose (μg/plate) | TA98 −S9 | TA98 +S9 | TA100 −S9 | TA100 +S9 | TA1535 −S9 | TA1535 +S9 | TA1537 −S9 | TA1537 +S9 | WP2 uvrA −S9 | WP2 uvrA +S9 |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 26 ± 4 | 25 ± 3 | 114 ± 8 | 107 ± 9 | 14 ± 2 | 15 ± 2 | 15 ± 2 | 15 ± 2 | 22 ± 3 | 29 ± 4 |
| 50 | 25 ± 6 [1.0] | 28 ± 3 [1.1] | 111 ± 6 [1.0] | 102 ± 9 [0.9] | 12 ± 4 [0.9] | 16 ± 5 [1.1] | 13 ± 2 [0.9] | 13 ± 3 [0.9] | 22 ± 4 [1.0] | 25 ± 4 [0.9] |
| 150 | 27 ± 4 [1.1] | 30 ± 5 [1.2] | 107 ± 15 [0.9] | 120 ± 16 [1.1] | 10 ± 1 [0.8] | 15 ± 1 [1.0] | 15 ± 1 [1.0] | 12 ± 3 [0.8] | 18 ± 3 [0.8] | 26 ± 7 [0.9] |
| 500 | 34 ± 5 [1.3] | 34 ± 4 [1.4] | 103 ± 7 [0.9] | 105 ± 5 [1.0] | 13 ± 1 [1.0] | 17 ± 1 [1.1] | 13 ± 1 [0.9] | 14 ± 1 [1.0] | 20 ± 2 [0.9] | 26 ± 6 [0.9] |
| 1,500 | 26 ± 5 [1.0] | 40 ± 3 [1.6] | 109 ± 6 [1.0] | 120 ± 10 [1.1] | 11 ± 1 [0.8] | 15 ± 2 [1.0] | 12 ± 2 [0.8] | 13 ± 2 [0.9] | 23 ± 4 [1.0] | 26 ± 4 [0.9] |
| 3,000 | 34 ± 5 [1.3] | 56 ± 5 [2.3] | 121 ± 2 [1.1] | 137 ± 1 [1.3] | 13 ± 2 [0.9] | 13 ± 3 [0.9] | 13 ± 3 [0.9] | 17 ± 3 [1.2] | 22 ± 2 [1.0] | 27 ± 2 [0.9] |
| 5,000 | 34 ± 2 [1.3] | 63 ± 6 [2.5] | 116 ± 12 [1.0] | 137 ± 3 [1.3] | 13 ± 1 [0.9] | 13 ± 1 [0.8] | 12 ± 2 [0.8] | 20 ± 1 [1.3] | 25 ± 2 [1.1] | 23 ± 3 [0.8] |
| Positive controlᵇ | 225 ± 16 [8.8] | 118 ± 8 [4.8] | 465 ± 60 [4.1] | 1504 ± 102 [14.0] | 408 ± 12 [29.9] | 142 ± 19 [9.3] | 265 ± 20 [18.1] | 182 ± 22 [12.1] | 227 ± 25 [10.3] | 104 ± 8 [3.6] |

Data are expressed as mean ± standard deviation; EAG was the test article. ᵃThree plates were used per dose. Factor = no. of colonies of treated plate/no. of colonies of negative control plate. ᵇTA98: 2-NF 2 μg/plate (−S9 mix), B[a]P 1 μg/plate (+S9 mix); TA100: SA 0.5 μg/plate (−S9 mix), 2-AA 1 μg/plate (+S9 mix); TA1535: SA 0.5 μg/plate (−S9 mix), 2-AA 2 μg/plate (+S9 mix); TA1537: ICR-191 0.5 μg/plate (−S9 mix), 2-AA 1 μg/plate (+S9 mix); WP2 uvrA: 4NQO 0.5 μg/plate (−S9 mix), 2-AA 6 μg/plate (+S9 mix). 2-NF, 2-nitrofluorene; B[a]P, benzo[a]pyrene; SA, sodium azide; 2-AA, 2-aminoanthracene; ICR-191, acridine mutagen ICR 191; 4NQO, 4-nitroquinoline N-oxide.
#### 3.2.2. Chromosome Aberration Test Using CHL Cells
In this experiment, no turbidity or precipitation was observed at any dose level of EAG. As shown in Table 7, there was no statistically significant increase in aberrant metaphases at any dose level of EAG compared to the negative control, and there was no dose-response relationship or increase in the frequency of aberrant metaphases in any treatment series. In the positive control, there was a statistically significant increase in the mean frequency of aberrant metaphases with structural aberrations in all treatment series (P < 0.01).
Table 7. In vitro chromosome aberration test in Chinese hamster lung cells with EAG.

| Treatment scheduleᵃ | S9 mix | Dose (µg/mL) | PP + ER (%) | Aberrant metaphasesᵇ (%) | Cell countᶜ, flask A | Cell countᶜ, flask B | Mean | RICCᵈ (%) |
|---|---|---|---|---|---|---|---|---|
| 6–18 | + | 0 | 0.00 | 0.00 | 8,662 | 8,264 | 8,463 | 100 |
| | | 350 | 0.33 | 0.33 | 8,263 | 8,387 | 8,325 | 97 |
| | | 700 | 0.67 | 0.67 | 8,563 | 8,790 | 8,676 | 104 |
| | | 1,300 | 0.00 | 0.33 | 6,083 | 6,262 | 6,172 | 57 |
| | | 1,400 | 0.00 | 0.00 | 5,440 | 5,382 | 5,411 | 43 |
| | | B[a]P 20 | 0.00 | 15.00∗∗ | 6,000 | 5,850 | 5,925 | 52 |
| 6–18 | − | 0 | 0.00 | 0.00 | 9,162 | 9,348 | 9,255 | 100 |
| | | 300 | 0.33 | 0.00 | 9,528 | 8,722 | 9,124 | 98 |
| | | 600 | 0.67 | 0.00 | 8,670 | 8,912 | 8,791 | 92 |
| | | 1,100 | 0.00 | 0.33 | 6,609 | 6,657 | 6,633 | 57 |
| | | 1,200 | 0.67 | 0.00 | 6,032 | 5,864 | 5,947 | 46 |
| | | 4NQO 0.4 | 0.00 | 10.33∗∗ | 7,309 | 7,192 | 7,250 | 67 |
| 24–0 | − | 0 | 0.33 | 0.33 | 8,900 | 8,910 | 8,905 | 100 |
| | | 225 | 0.67 | 0.00 | 8,996 | 9,273 | 9,134 | 104 |
| | | 450 | 0.33 | 0.00 | 9,589 | 9,273 | 9,431 | 109 |
| | | 800 | 1.00 | 0.67 | 6,245 | 6,552 | 6,398 | 56 |
| | | 900 | 0.00 | 0.33 | 6,031 | 5,889 | 5,960 | 49 |
| | | 4NQO 0.4 | 0.00 | 9.33∗∗ | 7,012 | 6,669 | 6,840 | 64 |
| Initial cell count | | | | | 3,142 | 3,162 | 3,152 | |

∗∗Significant difference at P < 0.01 compared with the negative control by Fisher's exact test. ᵃTreatment time – recovery time, in hours. ᵇGaps excluded; 150 metaphases were examined per culture. ᶜAfter harvesting mitotic cells, each culture was trypsinized and suspended with 0.5 mL of 0.1% trypsin and 5 mL of culture medium. Cell suspensions of 0.4 mL/culture were diluted 50 times with 19.6 mL of Isoton® solution, and the cells in 0.5 mL of Isoton® solution were counted twice per culture using a Coulter Counter model Z2. Actual number of cells per flask = mean cell count × 550. ᵈRelative increase in cell count = ((cell count of treated flask − initial cell count)/(cell count of the negative control flask − initial cell count)) × 100. PP, polyploid; ER, endoreduplication; B[a]P, benzo[a]pyrene (positive control); 4NQO, 4-nitroquinoline-1-oxide (positive control).
#### 3.2.3. Micronucleus Test Using Mouse Bone Marrow Cells
One mouse at 500 mg/kg/day died after the first administration, but the death was not considered EAG-related, and there were no noticeable macroscopic signs in any of the other survivors that could be attributed to EAG. There was no statistically significant or dose-related increase in the frequency of MNPCE at any dose level of EAG compared to the negative control (Table 8). The PCE:RBC ratio showed no difference at any dose level of EAG. In contrast, the micronucleus frequency and the PCE:RBC ratio were significantly changed by the positive control (P < 0.01) compared to the negative control.
Table 8. Observations of micronuclei and PCE:RBC ratio.

| Test article | Dose (mg/kg/day) | Animals per dose | MNPCEᵃ (mean ± SD) | PCE:RBC ratio (mean ± SD) | % control |
|---|---|---|---|---|---|
| EAG | 0 | 6 | 1.33 ± 1.03 | 0.57 ± 0.01 | 100 |
| | 500 | 5ᵇ | 1.20 ± 0.84 | 0.58 ± 0.02 | 101 |
| | 1,000 | 6 | 1.00 ± 1.10 | 0.57 ± 0.02 | 100 |
| | 2,000 | 6 | 1.50 ± 1.38 | 0.57 ± 0.01 | 99 |
| CPA | 70 | 6 | 110.50 ± 29.71∗∗ | 0.39 ± 0.02## | 69 |

∗∗Significant difference at P < 0.01 compared with the negative control by the Mann–Whitney U test. ##Significant difference at P < 0.01 compared with the negative control by Student's t-test. ᵃNumber of MNPCE per 4,000 PCE. ᵇOne of the mice died. PCE, polychromatic erythrocyte; RBC, red blood cells (polychromatic + normochromatic erythrocytes); MNPCE, micronucleated polychromatic erythrocyte; CPA, cyclophosphamide monohydrate (positive control).
## 3.1. Thirteen-Week Repeated Oral Toxicity Study
There is still insufficient toxicological information on the oral toxicity of EAG after long-term exposure. Therefore, a repeated-dose toxicity DRF study of EAG at doses of 1,250, 2,500, and 5,000 mg/kg/day administered by oral gavage for 14 days was performed to assess initial toxicity. As a result, no EAG-related changes in mortalities, clinical signs, body weights, food and water consumption, ophthalmological examination, urinalysis, hematological and clinical biochemistry tests, organ weight, and gross findings were observed during the 2-week treatment period (body weights as shown in Figure1 and other data not shown).Figure 1
Effect of ethanolic extract ofA. glehni on body weights in SD rats. (a) Mean body weights of male rats and (b) mean body weight of female rats treated with EAG for 2 weeks. (c) Mean body weights of male rats and (d) mean body weight of female rats treated with EAG for 13 weeks. Values are expressed as mean ± SD (n = 9–10 per group). Significant difference at ∗P<0.05 and ∗∗P<0.01 levels compared with the negative control.
(a)(b)(c)(d)In the 13-week repeated-dose toxicity study, although one male rat treated with 1,250 mg/kg/day of EAG died on day 65, there were no clinical signs or any lesions in histopathological examination. The compound-colored stool was observed at 5,000 mg/kg/day in both sexes from day 10 to necropsy day, and salivation was sporadically observed in males at 5,000 mg/kg/day. Significant decreases in mean body weight were observed in males at 1,250 and 2,500 mg/kg/day (P<0.05 and P<0.01; Figure 1), but these changes did not occur in a dose-dependent manner, and the values were within the normal physiological ranges [43, 44]. No significant changes were found in female body weight between the treatment and control groups. There were no EAG-related effects in food intake, water intake, organ weights, and ophthalmological test in both sexes (data not shown).A few instances of mean values of urinalysis parameters differing with statistical significance from the negative control were observed (P<0.05 and P<0.01; Table 2). Ketone body in males at 5,000 mg/kg/day and specific gravity at all doses in females was significantly higher than that of the negative control. In addition, pH in females at all EAG groups was significantly higher and 24 hours total volume of urine in females at 1,250 and 5,000 mg/kg/day were significantly lower than those of negative control. However, these changes were within the normal physiological ranges [43, 44]. Therefore, these observations were not considered to be toxicologically significant.Table 2
Urinalysis of male and female SD rats in the 13-week repeated oral toxicity study of EAG.
TestsResultEAG (mg/kg/day)MaleFemale01,2502,5005,00001,2502,5005,000No. of animals examined55555555GLUNegative55555555BILNegative55545555Small00010000KETNegative31415544Trace241000111500030000400001∗0000SG≤1.005100051101.010424203231.015021201111.020010100∗1∗1∗∗pH6.5000010007.0000010007.5100031008.0200100108.5255404∗4∗∗5∗∗Volume (mL)13.0 ± 4.611.6 ± 1.915.2 ± 1.111.4 ± 4.417.6 ± 5.510.8 ± 2.8∗12.4 ± 4.68.8 ± 4.0∗∗∗/∗∗Significant difference at P<0.05/P<0.01 levels compared with the negative control by the Mann–Whitney U test. GLU, glucose (mg/dL); BIL, bilirubin (mg/dL); KET, ketone body (mg/dL); and SG, specific gravity.Hematology evaluation showed lymphocyte count at 5,000 mg/kg/day in males was significantly higher, and prothrombin time at all doses in females was significantly lower compared with the negative control (P<0.05 and P<0.01; Table 3). However, these results were also within the normal physiological ranges [43, 44]. The results of the clinical biochemistry test were presented in Table 4. EAG-related changes in clinical biochemistry parameters were not found in both sexes.Table 3
Hematological parameters of male and female SD rats in the 13-week repeated oral toxicity study of EAG.
TestsEAG (mg/kg/day)01,2502,5005,000MaleRBC (106/μL)8.99 ± 0.559.01 ± 0.22a8.81 ± 0.378.87 ± 0.42HGB (g/dL)15.3 ± 0.515.5 ± 0.4a15.0 ± 0.515.0 ± 0.4HCT (%)47.3 ± 1.847.9 ± 1.4a46.7 ± 1.346.6 ± 1.6MCV (fL)52.7 ± 1.953.2 ± 1.0a53.1 ± 2.652.5 ± 1.1MCH (pg)17.1 ± 0.817.2 ± 0.3a17.0 ± 1.016.9 ± 0.5MCHC (g/dL)32.4 ± 0.532.3 ± 0.4a32.1 ± 0.832.3 ± 0.5PLT (103/μL)919.2 ± 61.3905.9 ± 93.0a890.4 ± 71.7933.3 ± 74.8WBC (103/μL)6.30 ± 1.377.22 ± 2.18a7.54 ± 1.167.91 ± 0.97NEU (103/μL)1.3 ± 0.31.5 ± 0.6a1.6 ± 0.71.1 ± 0.2LYM (103/μL)4.6 ± 1.25.2 ± 1.6a5.4 ± 1.06.3 ± 1.0∗MONO (103/μL)0.28 ± 0.120.31 ± 0.10a0.32 ± 0.110.30 ± 0.06EOS (103/μL)0.11 ± 0.040.11 ± 0.03a0.13 ± 0.020.10 ± 0.03BASO (103/μL)0.01 ± 0.010.01 ± 0.01a0.01 ± 0.000.01 ± 0.00PT (sec)8.0 ± 0.28.1 ± 0.2a8.0 ± 0.27.8 ± 0.2FemaleRBC (106/μL)7.98 ± 0.357.72 ± 0.307.86 ± 0.227.94 ± 0.28HGB (g/dL)14.3 ± 0.314.0 ± 0.414.1 ± 0.314.3 ± 0.4HCT (%)43.5 ± 1.342.8 ± 1.243.2 ± 1.943.7 ± 1.2MCV (fL)54.6 ± 1.855.5 ± 2.154.9 ± 0.855.0 ± 0.8MCH (pg)17.9 ± 0.618.1 ± 0.718.0 ± 0.417.9 ± 0.3MCHC (g/dL)32.8 ± 0.232.7 ± 0.432.7 ± 0.432.6 ± 0.4PLT (103/μL)969.9 ± 60.91023.9 ± 89.3977.4 ± 87.8950.3 ± 66.4WBC (103/μL)3.67 ± 0.953.75 ± 1.033.84 ± 1.224.01 ± 1.18NEU (103/μL)0.5 ± 0.10.5 ± 0.10.5 ± 0.20.5 ± 0.2LYM (103/μL)3.0 ± 0.93.0 ± 0.93.1 ± 1.03.3 ± 0.9MONO (103/μL)0.09 ± 0.040.11 ± 0.030.11 ± 0.040.13 ± 0.05EOS (103/μL)0.08 ± 0.030.08 ± 0.020.08 ± 0.030.07 ± 0.03BASO (103/μL)0.01 ± 0.010.00 ± 0.010.00 ± 0.000.00 ± 0.01PT (sec)7.7 ± 0.27.4 ± 0.2##7.3 ± 0.2##7.4 ± 0.2##Data are expressed as mean ± standard deviation.∗Significant difference at P<0.05 levels compared with the negative control by Scheffe multiple range test. ##Significant difference at P<0.01 levels compared with the negative control by Duncan multiple range test. aNumber of animals in the group was 9; otherwise mean of 10 animals/sex/group. RBC, red blood cell; HGB, hemoglobin concentration; HCT, hematocrit; MCV, mean corpuscular volume; MCH, mean cell hemoglobin; MCHC, mean cell hemoglobin concentration; PLT, platelet count; WBC, white blood cell; NEU, neutrophil; LYM, lymphocyte; MONO, monocyte; EOS, eosinophil; BASO, basophil; and PT, prothrombin time.Table 4
Clinical biochemistry parameters of male and female SD rats in the 13-week repeated oral toxicity study of EAG.
TestsEAG (mg/kg/day)01,2502,5005,000MaleAST (U/L)83.7 ± 16.777.2 ± 14.9a82.6 ± 15.970.6 ± 6.6ALT (U/L)33.3 ± 5.832.6 ± 6.3a33.1 ± 4.131.8 ± 3.1ALP (U/L)88.1 ± 15.482.1 ± 16.0a89.8 ± 17.893.5 ± 18.0CPK (U/L)160.9 ± 80.9173.8 ± 124.2a157.1 ± 94.7118.0 ± 50.8TBIL (mg/dL)0.149 ± 0.0300.145 ± 0.032a0.145 ± 0.0200.145 ± 0.020GLU (mg/dL)155.0 ± 19.3149.7 ± 14.8a151.1 ± 22.1145.0 ± 17.0TCHO (mg/dL)89.0 ± 21.2101.8 ± 21.3a101.4 ± 24.0104.8 ± 24.2TG (mg/dL)56.3 ± 25.863.2 ± 26.2a60.6 ± 19.965.6 ± 28.2TP (g/dL)6.27 ± 0.166.37 ± 0.19a6.29 ± 0.296.30 ± 0.26ALB (g/dL)2.90 ± 0.072.95 ± 0.11a2.95 ± 0.112.93 ± 0.09BUN (mg/dL)13.9 ± 1.614.7 ± 1.1a14.4 ± 2.313.6 ± 1.9CRE (mg/dL)0.40 ± 0.030.39 ± 0.02a0.39 ± 0.020.40 ± 0.03FemaleAST (U/L)70.1 ± 11.276.8 ± 13.876.4 ± 14.273.0 ± 18.0ALT (U/L)22.1 ± 3.524.1 ± 6.325.0 ± 4.625.5 ± 3.6ALP (U/L)43.5 ± 15.654.2 ± 14.146.8 ± 13.445.8 ± 11.1CPK (U/L)146.6 ± 126.3126.4 ± 84.9149.6 ± 94.6128.1 ± 54.7TBIL (mg/dL)0.169 ± 0.0240.190 ± 0.0380.176 ± 0.0200.174 ± 0.018GLU (mg/dL)121.4 ± 14.5129.3 ± 16.1122.7 ± 14.6122.0 ± 10.7TCHO (mg/dL)86.2 ± 20.092.5 ± 8.5100.5 ± 17.685.3 ± 10.9TG (mg/dL)35.6 ± 6.335.0 ± 4.736.3 ± 8.432.9 ± 8.4TP (g/dL)5.89 ± 0.266.11 ± 0.216.02 ± 0.205.97 ± 0.17ALB (g/dL)2.99 ± 0.133.11 ± 0.153.05 ± 0.113.12 ± 0.11BUN (mg/dL)15.8 ± 2.315.0 ± 1.315.0 ± 2.414.3 ± 1.2CRE (mg/dL)0.48 ± 0.040.47 ± 0.030.49 ± 0.060.46 ± 0.02Data are expressed as mean ± standard deviation.aNumber of animals in group was 9; otherwise mean of 10 animals/sex/group. AST, aspartate aminotransferase; ALT, alanine aminotransferase; ALP, alkaline phosphatase; CPK, creatine phosphokinase; TBIL, total bilirubin; GLU, glucose; TCHO, total cholesterol; TG, triglyceride; TP, total protein; ALB, albumin; BUN, blood urea nitrogen; and CRE, creatinine.In histopathological examinations, notable change was observed in the nonglandular stomach (Table5). Squamous hyperplasia of the limiting ridge in the stomach was found in the EAG-treated groups in both sexes. The changes were observed in seven males at 2,500 mg/kg/day and in all males and eight females at 5,000 mg/kg/day (P<0.01 and P<0.001). However, there were no toxicologically significant changes in histopathological examinations. Other lesions that have been well known to occur spontaneously in the same age of SD rats were observed [45, 46].Table 5
Histopathologic findings of male and female SD rats in the 13-week repeated oral toxicity study of EAG.
OrgansFindingsEAG (mg/kg/day)MaleFemale01,2502,5005,00001,2502,5005,000Nonglandular stomachHyperplasia, squamous cells, limiting ridge037∗∗10∗∗∗0138∗∗∗No. of animals examined1010101010101010∗∗/∗∗∗Significant difference at P<0.01/P<0.001 levels compared with the negative control by Fisher two-tailed test.
## 3.2. Genotoxicity Test
### 3.2.1. Bacterial Reverse Mutation Test (Ames Test)
No precipitation or other abnormality was observed on the bottom agar at the time of plate scoring. There was a dose-related increase in a number of colonies in TA98 that is one of the histidine-requiring strains at 3,000 and 5,000µg/plate, and the number of revertants was 2.3 and 2.5 times higher than that of the negative control in the presence of S9 mix, respectively (Table 6). However, EAG is composed of various amino acids, including histidine (Table 1). In other test strains, no substantial increases in numbers of revertants per plate were observed in any dose level of EAG. Moreover, there were no signs of cytotoxicity at any dose level in all test strains. The results suggest that EAG is not mutagenic in the test strains. The mean revertants in the positive control for each strain showed a clear increase over the mean revertants in the negative control for that strain.Table 6
Results of bacterial reverse mutation assay.
Test articleDose (μg/plate)Colonies/plate [factor]aTA98TA100TA1535TA1537WP2uvrA−S9 mix+S9 mix−S9 mix+S9 mix−S9 mix+S9 mix−S9 mix+S9 mix−S9 mix+S9 mixEAG026 ± 425 ± 3114 ± 8107 ± 914 ± 215 ± 215 ± 215 ± 222 ± 329 ± 4——————————5025 ± 628 ± 3111 ± 6102 ± 912 ± 416 ± 513 ± 213 ± 322 ± 425 ± 4[1.0][1.1][1.0][0.9][0.9][1.1][0.9][0.9][1.0][0.9]15027 ± 430 ± 5107 ± 15120 ± 1610 ± 115 ± 115 ± 112 ± 318 ± 326 ± 7[1.1][1.2][0.9][1.1][0.8][1.0][1.0][0.8][0.8][0.9]50034 ± 534 ± 4103 ± 7105 ± 513 ± 117 ± 113 ± 114 ± 120 ± 226 ± 6[1.3][1.4][0.9][1.0][1.0][1.1][0.9][1.0][0.9][0.9]1,50026 ± 540 ± 3109 ± 6120 ± 1011 ± 115 ± 212 ± 213 ± 223 ± 426 ± 4[1.0][1.6][1.0][1.1][0.8][1.0][0.8][0.9][1.0][0.9]3,00034 ± 556 ± 5121 ± 2137 ± 113 ± 213 ± 313 ± 317 ± 322 ± 227 ± 2[1.3][2.3][1.1][1.3][0.9][0.9][0.9][1.2][1.0][0.9]5,00034 ± 263 ± 6116 ± 12137 ± 313 ± 113 ± 112 ± 220 ± 125 ± 223 ± 3[1.3][2.5][1.0][1.3][0.9][0.8][0.8][1.3][1.1][0.8]Positive controlb225 ± 16118 ± 8465 ± 601504 ± 102408 ± 12142 ± 19265 ± 20182 ± 22227 ± 25104 ± 8[8.8][4.8][4.1][14.0][29.9][9.3][18.1][12.1][10.3][3.6]Data are expressed as mean ± standard deviation.aThree plates were used each dose. Factor = no. of colonies of treated plate/no. of colonies of negative control plate. bTA98: 2-NF 2 μg/plate (−S9 mix), B[a]P 1 μg/plate (+S9 mix); TA100:SA 0.5 μg/plate (−S9 mix), 2-AA 1 μg/plate (+S9 mix); TA1535:SA 0.5 μg/plate (−S9 mix), 2-AA 2 μg/plate (+S9 mix); TA1537:ICR-191 0.5 μg/plate (−S9 mix), 2-AA 1 μg/plate (+S9 mix); and WP2 uvrA:4NQO 0.5 μg/plate (−S9 mix), 2-AA 6 μg/plate (+S9 mix). 2-NF, 2-nitrofluorene; B[a]P, benzo[a]pyrene; SA, sodium azide; 2-AA, 2-aminoanthracene; ICR-191, acridine mutagen ICR 191; and 4NQO, 4-nitroquinoline N-oxide.
### 3.2.2. Chromosome Aberration Test Using CHL Cells
In this experiment, no turbidity or precipitation was observed at all dose levels of EAG. As shown in Table7, there was no statistically significant increase at any dose level of EAG compared to the negative control, and there was no dose-response relationship or increase in the frequency of aberrant metaphases in all treatment series. In the positive control, there was a statistically significant increase in the mean frequency of aberrant metaphases with structural aberrations in all treatment series (P<0.01).Table 7
In vitro chromosome aberration test in Chinese hamster lung cells with EAG.
Treatment scheduleaS9 mixDose (µg/mL)PP + ER (%)Ratio of aberrant metaphaseb (%)Cell countscMeanRICCd (%)Flask AFlask B06–18+00.000.008,6628,2648,4631003500.330.338,2638,3878,325977000.670.678,5638,7908,6761041,3000.000.336,0836,2626,172571,4000.000.005,4405,3825,41143B[a]P 200.0015.00∗∗6,0005,8505,9255206–18−00.000.009,1629,3489,2551003000.330.009,5288,7229,124986000.670.008,6708,9128,791921,1000.000.336,6096,6576,633571,2000.670.006,0325,8645,947464NQO 0.40.0010.33∗∗7,3097,1927,2506724–0−00.330.338,9008,9108,9051002250.670.008,9969,2739,1341044500.330.009,5899,2739,4311098001.000.676,2456,5526,398569000.000.336,0315,8895,960494NQO 0.40.009.33∗∗7,0126,6696,84064Initial cell count3,1423,1623,152∗∗Significant difference at P<0.01 levels compared with the negative control by Fisher’s exact test. aTreatment time – recovery time, hours, bGap excludes, 150 metaphases were examined per culture. cAfter harvesting mitotic cells, each culture was trypsinized and suspended with 0.5 mL of 0.1% trypsin and 5 mL of culture medium. The cell suspensions of 0.4 mL/culture were diluted 50 times with 19.6 mL of Isoton® sol. The cells in 0.5 mL of Isoton® sol. were counted twice/culture using Coulter Counter model Z2. The actual number of cells per flask = mean cell count × 550. dRelative increase in cell count = ((cell count of treated flask – initial cell count)/(cell count of the negative control flask – initial cell count)) × 100, PP, polyploid; ER, endoreduplication; B[a]P, benzo[a]pyrene (positive control); and 4NQO, 4-nitroquinoline-1-oxide (positive control).
### 3.2.3. Micronucleus Test Using Mouse Bone Marrow Cells
One mouse at 500 mg/kg/day died after the first administration, but it was not considered to be EAG-related, and there were no noticeable macroscopic signs in all other survivors that could be attributed to EAG. There was no statistically significant increase or a dose-related increase in the frequencies of MNPCE at any dose level of EAG compared to the negative control (Table8). The PCE:RBC ratio showed no difference at any dose level of EAG. In contrast, the micronucleus and PCE:RBC ratio were significantly changed by the positive control (P<0.01) when compared to the negative control.Table 8
Observations of micronucleus and PCE:RBC ratio.
Test articleDose (mg/kg/day)Animals per doseMNPCEa (mean ± SD)PCE:RBC ratio (mean ± SD)% controlEAG061.33 ± 1.030.57 ± 0.011005005b1.20 ± 0.840.58 ± 0.021011,00061.00 ± 1.100.57 ± 0.021002,00061.50 ± 1.380.57 ± 0.0199CPA706110.50 ± 29.71∗∗0.39 ± 0.02##69∗∗Significant difference at P<0.01 levels compared with the negative control by the Mann–Whitney. ##Significant difference at P<0.01 levels compared with the control by Student’s t-test. aRatio of MNPCE with 4,000 PCE, bOne of the mice was died. PCE, polychromatic erythrocyte; RBC, red blood cells (polychromatic erythrocyte + normochromatic erythrocyte); MNPCE, micronucleated polychromatic erythrocyte; and CPA, cyclophosphamide monohydrate (positive control).
## 3.2.1. Bacterial Reverse Mutation Test (Ames Test)
No precipitation or other abnormality was observed on the bottom agar at the time of plate scoring. There was a dose-related increase in a number of colonies in TA98 that is one of the histidine-requiring strains at 3,000 and 5,000µg/plate, and the number of revertants was 2.3 and 2.5 times higher than that of the negative control in the presence of S9 mix, respectively (Table 6). However, EAG is composed of various amino acids, including histidine (Table 1). In other test strains, no substantial increases in numbers of revertants per plate were observed in any dose level of EAG. Moreover, there were no signs of cytotoxicity at any dose level in all test strains. The results suggest that EAG is not mutagenic in the test strains. The mean revertants in the positive control for each strain showed a clear increase over the mean revertants in the negative control for that strain.Table 6
Results of bacterial reverse mutation assay.
Test articleDose (μg/plate)Colonies/plate [factor]aTA98TA100TA1535TA1537WP2uvrA−S9 mix+S9 mix−S9 mix+S9 mix−S9 mix+S9 mix−S9 mix+S9 mix−S9 mix+S9 mixEAG026 ± 425 ± 3114 ± 8107 ± 914 ± 215 ± 215 ± 215 ± 222 ± 329 ± 4——————————5025 ± 628 ± 3111 ± 6102 ± 912 ± 416 ± 513 ± 213 ± 322 ± 425 ± 4[1.0][1.1][1.0][0.9][0.9][1.1][0.9][0.9][1.0][0.9]15027 ± 430 ± 5107 ± 15120 ± 1610 ± 115 ± 115 ± 112 ± 318 ± 326 ± 7[1.1][1.2][0.9][1.1][0.8][1.0][1.0][0.8][0.8][0.9]50034 ± 534 ± 4103 ± 7105 ± 513 ± 117 ± 113 ± 114 ± 120 ± 226 ± 6[1.3][1.4][0.9][1.0][1.0][1.1][0.9][1.0][0.9][0.9]1,50026 ± 540 ± 3109 ± 6120 ± 1011 ± 115 ± 212 ± 213 ± 223 ± 426 ± 4[1.0][1.6][1.0][1.1][0.8][1.0][0.8][0.9][1.0][0.9]3,00034 ± 556 ± 5121 ± 2137 ± 113 ± 213 ± 313 ± 317 ± 322 ± 227 ± 2[1.3][2.3][1.1][1.3][0.9][0.9][0.9][1.2][1.0][0.9]5,00034 ± 263 ± 6116 ± 12137 ± 313 ± 113 ± 112 ± 220 ± 125 ± 223 ± 3[1.3][2.5][1.0][1.3][0.9][0.8][0.8][1.3][1.1][0.8]Positive controlb225 ± 16118 ± 8465 ± 601504 ± 102408 ± 12142 ± 19265 ± 20182 ± 22227 ± 25104 ± 8[8.8][4.8][4.1][14.0][29.9][9.3][18.1][12.1][10.3][3.6]Data are expressed as mean ± standard deviation.aThree plates were used each dose. Factor = no. of colonies of treated plate/no. of colonies of negative control plate. bTA98: 2-NF 2 μg/plate (−S9 mix), B[a]P 1 μg/plate (+S9 mix); TA100:SA 0.5 μg/plate (−S9 mix), 2-AA 1 μg/plate (+S9 mix); TA1535:SA 0.5 μg/plate (−S9 mix), 2-AA 2 μg/plate (+S9 mix); TA1537:ICR-191 0.5 μg/plate (−S9 mix), 2-AA 1 μg/plate (+S9 mix); and WP2 uvrA:4NQO 0.5 μg/plate (−S9 mix), 2-AA 6 μg/plate (+S9 mix). 2-NF, 2-nitrofluorene; B[a]P, benzo[a]pyrene; SA, sodium azide; 2-AA, 2-aminoanthracene; ICR-191, acridine mutagen ICR 191; and 4NQO, 4-nitroquinoline N-oxide.
## 3.2.2. Chromosome Aberration Test Using CHL Cells
In this experiment, no turbidity or precipitation was observed at all dose levels of EAG. As shown in Table7, there was no statistically significant increase at any dose level of EAG compared to the negative control, and there was no dose-response relationship or increase in the frequency of aberrant metaphases in all treatment series. In the positive control, there was a statistically significant increase in the mean frequency of aberrant metaphases with structural aberrations in all treatment series (P<0.01).Table 7
In vitro chromosome aberration test in Chinese hamster lung cells with EAG.
Treatment scheduleaS9 mixDose (µg/mL)PP + ER (%)Ratio of aberrant metaphaseb (%)Cell countscMeanRICCd (%)Flask AFlask B06–18+00.000.008,6628,2648,4631003500.330.338,2638,3878,325977000.670.678,5638,7908,6761041,3000.000.336,0836,2626,172571,4000.000.005,4405,3825,41143B[a]P 200.0015.00∗∗6,0005,8505,9255206–18−00.000.009,1629,3489,2551003000.330.009,5288,7229,124986000.670.008,6708,9128,791921,1000.000.336,6096,6576,633571,2000.670.006,0325,8645,947464NQO 0.40.0010.33∗∗7,3097,1927,2506724–0−00.330.338,9008,9108,9051002250.670.008,9969,2739,1341044500.330.009,5899,2739,4311098001.000.676,2456,5526,398569000.000.336,0315,8895,960494NQO 0.40.009.33∗∗7,0126,6696,84064Initial cell count3,1423,1623,152∗∗Significant difference at P<0.01 levels compared with the negative control by Fisher’s exact test. aTreatment time – recovery time, hours, bGap excludes, 150 metaphases were examined per culture. cAfter harvesting mitotic cells, each culture was trypsinized and suspended with 0.5 mL of 0.1% trypsin and 5 mL of culture medium. The cell suspensions of 0.4 mL/culture were diluted 50 times with 19.6 mL of Isoton® sol. The cells in 0.5 mL of Isoton® sol. were counted twice/culture using Coulter Counter model Z2. The actual number of cells per flask = mean cell count × 550. dRelative increase in cell count = ((cell count of treated flask – initial cell count)/(cell count of the negative control flask – initial cell count)) × 100, PP, polyploid; ER, endoreduplication; B[a]P, benzo[a]pyrene (positive control); and 4NQO, 4-nitroquinoline-1-oxide (positive control).
## 3.2.3. Micronucleus Test Using Mouse Bone Marrow Cells
One mouse at 500 mg/kg/day died after the first administration, but it was not considered to be EAG-related, and there were no noticeable macroscopic signs in all other survivors that could be attributed to EAG. There was no statistically significant increase or a dose-related increase in the frequencies of MNPCE at any dose level of EAG compared to the negative control (Table8). The PCE:RBC ratio showed no difference at any dose level of EAG. In contrast, the micronucleus and PCE:RBC ratio were significantly changed by the positive control (P<0.01) when compared to the negative control.Table 8
Observations of micronucleus and PCE:RBC ratio.
| Test article | Dose (mg/kg/day) | Animals per dose | MNPCEᵃ (mean ± SD) | PCE:RBC ratio (mean ± SD) | % control |
| --- | --- | --- | --- | --- | --- |
| EAG | 0 | 6 | 1.33 ± 1.03 | 0.57 ± 0.01 | 100 |
| EAG | 500 | 5ᵇ | 1.20 ± 0.84 | 0.58 ± 0.02 | 101 |
| EAG | 1,000 | 6 | 1.00 ± 1.10 | 0.57 ± 0.02 | 100 |
| EAG | 2,000 | 6 | 1.50 ± 1.38 | 0.57 ± 0.01 | 99 |
| CPA | 70 | 6 | 110.50 ± 29.71** | 0.39 ± 0.02## | 69 |

**Significant difference at the P < 0.01 level compared with the negative control by the Mann–Whitney test. ##Significant difference at the P < 0.01 level compared with the control by Student's t-test. ᵃRatio of MNPCE per 4,000 PCE. ᵇOne mouse died. PCE, polychromatic erythrocyte; RBC, red blood cells (polychromatic + normochromatic erythrocytes); MNPCE, micronucleated polychromatic erythrocyte; CPA, cyclophosphamide monohydrate (positive control).
## 4. Discussion
Regarding the utilization of A. glehni, several studies have concentrated on its pharmacological and therapeutic effects on diverse diseases [17–21]. Recently, it has been reported that an ethanolic extract of A. glehni ameliorates scopolamine-induced memory dysfunction, including long-term and working memory, and improves memory function [22]. According to that study, these effects are due to the inhibition of acetylcholinesterase activity and the activation of the ERK-CREB-BDNF and PI3K-Akt-GSK-3β pathways [22, 23]. However, further application of A. glehni in herbal medicine or functional food has been limited because knowledge of its safety is inadequate. The present study evaluated the potential toxicity of EAG after 2- and 13-week repeated oral administration in SD rats. Moreover, genotoxicity studies, including a bacterial reverse mutation assay, a chromosomal aberration test, and a micronucleus test in mammalian bone marrow, were performed to investigate the genotoxicity of EAG.

In the acute toxicity study, EAG was found to be nontoxic in rats, and the approximate lethal dose was higher than 5,000 mg/kg (data not shown). In the 2- and 13-week repeated oral toxicity studies, no test article-related changes were found in body weights, food and water consumption, organ weights, or the ophthalmological, hematological, and clinical biochemistry tests. Although one male rat at 1,250 mg/kg/day died, there were no noticeable clinical signs or findings to ascertain the cause of death. Necropsy findings of retention of a dark brown substance in the lung and the thoracic cavity were determined to be related to an administration error. In addition, compound-colored stool was observed and considered to be related to EAG treatment. However, this clinical sign was regarded as a change attributable to the excretion of EAG. Therefore, these findings were not considered to be adverse effects [47].

Urinalysis, an evaluation of renal function, is often affected by test article toxicity, and urinalysis parameters serve as an indirect indicator of kidney damage [48]. In urinalysis, significant changes were observed for the ketone body at 5,000 mg/kg/day in males and for specific gravity at all doses in females under the present experimental conditions. However, these results were not considered toxic effects of EAG, since the degree of change was small and there were no related histopathological findings in the kidney. At necropsy, adhesion of an irregular surface on the middle lobe of the liver to the diaphragm was observed in one female at 1,250 mg/kg/day and in one male negative control. An irregular surface of the stomach and weak brown discoloration of the kidney were each observed only once in males at 5,000 mg/kg/day. Retention of clear fluid was observed in all female groups, and one female at 1,250 mg/kg/day exhibited dark red fluid. Other abnormalities, including partially black spots on the glandular region of the stomach and a nodule of the ovary, were observed only once in females at 1,250 mg/kg/day and 2,500 mg/kg/day, respectively. All gross findings were microscopically confirmed as corresponding findings. However, these changes were not considered to be related to EAG because the incidence was low, there was no dose-response relationship, and they have been reported to be spontaneous and incidental [45, 46].

From the histopathological examination, an EAG-related change was observed in the nonglandular region (limiting ridge) of the stomach.
Squamous cell hyperplasia of the limiting ridge in the stomach increased in the EAG-treated groups of both sexes, with a dose-response relationship. However, the change was considered a reversible effect because the degree of the lesion was graded near mild and cellular atypia was not observed. It has been shown that, when cellular atypia is absent, squamous cell hyperplasia caused by ethyl acrylate can recover completely [45]. In addition, the forestomach is present only in rodents, and no EAG-related changes were observed in the other digestive organs, including the glandular stomach. Therefore, the toxicological significance was considered minimal. Consequently, the no-observed-adverse-effect level (NOAEL) was 5,000 mg/kg/day in both sexes, and no target organ was identified.

The genotoxic potential of EAG was tested by the Ames test, an in vitro chromosomal aberration test in CHL cells, and an in vivo mammalian micronucleus test. In the bacterial reverse mutation test, four histidine-auxotroph strains of S. typhimurium and a tryptophan-auxotroph strain of E. coli were tested both in the presence and absence of an exogenous metabolic activation system. The number of revertants did not increase at any dose level of EAG under the present experimental conditions, except for the TA98 strain. In the TA98 strain in the presence of S9 mix, there was a dose-related, reproducible increase in the number of revertants. However, it was confirmed that EAG contains histidine, so the results of the Ames test were deemed inconclusive. As an additional test for histidine content, two test articles were evaluated at 5,000 µg/plate: a leaf extract (histidine content 39.52 mg/100 g) and a leaf and stem extract (histidine content 16.69 mg/100 g) of A. glehni. The number of revertants for the leaf extract of A. glehni increased about 1.5-fold compared to the leaf and stem extract, so the finding in TA98 was related to the amount of histidine. It has been reported that increases in revertants in the Ames test can be caused by a test article containing histidine [49]. Histidine compounds can generate additional background growth of S. typhimurium on minimal medium plates, thereby producing spontaneous his+ revertants [50, 51]. Moreover, it has been noted that plants and their metabolites can cause false positives because they contain histidine [52–54]. Therefore, the increase in revertants in the TA98 strain was assessed to be a false positive due to the histidine content of EAG.

For the chromosome aberration test, CHL cells were used to investigate the potential to induce chromosomal aberrations both in the presence and absence of an exogenous metabolic activation system. The result is regarded as clearly negative if there is no statistically significant increase in the frequency of aberrant metaphases at any dose level compared to the negative control and no dose-response relationship or increase in the frequency of aberrant metaphases in any treatment series. The results met these criteria, so EAG was clearly negative.

The results of the micronucleus test using mouse bone marrow cells showed that EAG did not induce any statistically significant or dose-related increase in the frequency of MNPCE per 4,000 PCE at any dose level. In addition, there was no significant difference in the PCE:RBC ratio. These results indicate that EAG did not induce micronuclei in ICR mouse bone marrow cells under the present experimental conditions.
Taken together, these results revealed that EAG was nongenotoxic in both in vitro and in vivo models.
## 5. Conclusion
This study assessed the safety of EAG using different model approaches, including subchronic oral toxicity studies and a battery of genotoxicity studies. When rats were given 2- or 13-week repeated-dose oral administration of EAG at up to 5,000 mg/kg/day, the NOAEL was considered to be 5,000 mg/kg/day, and no target organs were identified in either sex under the present experimental conditions. Moreover, EAG was classified as nonmutagenic and nonclastogenic in genotoxicity testing. Collectively, these results show a lack of general toxicity and genotoxicity for EAG, supporting clinical work on its development as a herbal medicine.
---
*Source: 1018101-2021-12-30.xml* | 2021 |
# Robust Modeling of Low-Cost MEMS Sensor Errors in Mobile Devices Using Fast Orthogonal Search
**Authors:** M. Tamazin; A. Noureldin; M. J. Korenberg
**Journal:** Journal of Sensors
(2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101820
---
## Abstract
Accessibility to inertial navigation systems (INS) has been severely limited by cost in the past. The introduction of low-cost microelectromechanical system (MEMS)-based INS, integrated with GPS to provide a reliable positioning solution, has enabled more widespread use in mobile devices. The random errors of MEMS inertial sensors may deteriorate the overall system accuracy in mobile devices. These errors are modeled stochastically and included in the error model of the estimation techniques used, such as the Kalman filter or the particle filter. A first-order Gauss-Markov model is usually used to describe the stochastic nature of these errors. However, if the autocorrelation sequences of these random components are examined, it can be determined that a first-order Gauss-Markov model is not adequate to describe such stochastic behavior. A robust modeling technique based on fast orthogonal search is introduced to model the errors of MEMS-based inertial sensors inside mobile devices that are used for several location-based services. The proposed method is applied to MEMS-based gyroscopes and accelerometers. Results show that the proposed method models low-cost MEMS sensor errors with no need for denoising techniques, using a smaller model order and less computation, and outperforms traditional methods by two orders of magnitude.
---
## Body
## 1. Introduction
Presently, GPS-enabled mobile devices offer various positioning capabilities to pedestrians, drivers, and cyclists. GPS provides absolute positioning information, but when signal reception is attenuated and becomes unreliable due to multipath, interference, and signal blockage, augmentation of GPS with inertial navigation systems (INS) or the like is needed. INS is inherently immune to the signal jamming, spoofing, and blockage vulnerabilities of GPS, but the accuracy of INS is significantly affected by the error characteristics of the inertial sensors it employs [1].

GPS/INS integrated navigation systems are extensively used [2], for example, in mobile devices, which require low-cost microelectromechanical system (MEMS) inertial sensors (gyroscopes and accelerometers) because of their low cost, low power consumption, small size, and portability. The inadequate long-term performance of most commercially available MEMS-based INS limits their usefulness in providing reliable navigation solutions. MEMS sensors are challenging in any consumer navigation system because of their large errors, extreme stochastic variance, and quickly changing error characteristics.

According to [3], the inertial sensor errors of a low-cost INS consist of two parts: a deterministic part and a random part. The deterministic part includes biases and scale factors, which are determined by calibration and then removed from the raw measurements. The random part is correlated over time and is basically due to the variations in the INS sensor bias terms. These errors are mathematically integrated during the INS mechanization process, which results in increasingly inaccurate position and attitude over time. Therefore, these errors must be modeled.

The fusion of INS and GPS data is a highly synergistic coupling, as INS can provide reliable short-term positioning information during GPS outages, while GPS can correct for longer-term INS errors [1]. INS and GPS integration (i.e., data fusion) is typically achieved through an optimal estimation technique, such as the Kalman filter or particle filter [4].

Despite having an INS/GPS integration algorithm to correct for INS errors, it is still advantageous to have an accurate INS solution before the data fusion process. This requires preprocessing (i.e., prefiltering or denoising) each of the inertial sensor (gyroscope and accelerometer) signals before they are used to compute position, velocity, and attitude. This paper offers a robust method based on fast orthogonal search (FOS) to model the stochastic errors of low-cost MEMS sensors for smart mobile phones.

Orthogonal search [5] is a technique developed for identifying difference equation and functional expansion models by orthogonalizing over the actual data record. It mainly utilizes Gram-Schmidt orthogonalization to create a series of orthogonal functions from a given set of arbitrary functions. This enables signal representation by a functional expansion of arbitrary functions and therefore provides a wider selection of candidate functions that can be used to represent the signal. FOS is a variant of orthogonal search [6]; one major difference is that FOS achieves orthogonal identification without creating orthogonal functions at any stage of the process. As a result, FOS is many times faster and less memory-intensive than the earlier technique, while equally accurate and robust [5–7].

Many techniques have been used previously to denoise and stochastically model the inertial sensor errors [3, 8, 9].
For example, several levels of wavelet decomposition have been used to denoise the raw INS data and eliminate high-frequency disturbances [3, 8, 9]. Modeling inertial sensor errors using autoregressive (AR) models was performed in [3], where the Yule-Walker, covariance, and Burg AR methods were used. The AR model parameters were estimated after reducing the INS sensor measurement noise using wavelet denoising techniques.

FOS has been applied before in several applications [5–7, 10–12]. In [13], FOS was used to augment a Kalman filter (KF) to enhance the accuracy of a low-cost 2D MEMS-based navigation system by modeling only the azimuth error. FOS is used in this paper to model the raw MEMS gyroscope and accelerometer measurement errors in the time domain. The performance of FOS is compared to linear modeling techniques, namely, the Yule-Walker, covariance, and Burg AR methods, in terms of mean-square error (MSE) and computational time.
## 2. Problem Statement
It is generally accepted that the long-term errors are modeled as correlated random noise, which is typically characterized by an exponentially decaying autocorrelation function with a finite correlation time. When the autocorrelation function of some of the noise sequences of MEMS measurements is studied, it has been shown that a first-order Gauss-Markov (GM) process may not be adequate in all cases to model such noise behavior. The shape of the autocorrelation sequence is often different from that of a first-order GM process, which is represented by a decaying exponential as shown in Figure 1. The GM process is characterized by an autocorrelation function of the form $R_{xx}(\tau)=\sigma^2 e^{-\beta|\tau|}$, where $\sigma^2$ is the variance of the process and the correlation time (the $1/e$ point) is given by $1/\beta$. The autocorrelation function approaches zero as $\tau\to\infty$, as depicted in Figure 1, indicating that the process gradually becomes less and less correlated as the time separation between samples increases [1].

Figure 1
The autocorrelation sequence of a first-order Gauss-Markov (GM) process.

Most of the computed autocorrelation sequences follow higher-order GM processes. An example of such a computed autocorrelation sequence, for one hour of static data from a MEMS accelerometer, is shown in Figure 2. It clearly represents a higher-order GM process. These higher-order GM processes can be modeled using an autoregressive (AR) process of an appropriate order. In [3], it was decided to model the randomness of the inertial sensor measurements using an AR process of order higher than one. With the present computational efficiency of microprocessor systems, efficient modeling of MEMS residual biases can be realized, and thus accurate prediction and estimation of such errors can be provided.

Figure 2
The computed autocorrelation sequence for MEMS accelerometer data.
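As a quick numerical cross-check (not part of the paper), a first-order GM process can be simulated and its empirical autocorrelation compared against $\sigma^2 e^{-\beta|\tau|}$; all parameter values in this minimal Python sketch are illustrative assumptions:

```python
import numpy as np

# A minimal sketch: simulate a first-order Gauss-Markov process and compare
# its empirical autocorrelation with the theoretical sigma^2 * exp(-beta*|tau|).
rng = np.random.default_rng(0)
sigma2, beta, dt, n = 1.0, 0.5, 0.1, 100_000   # illustrative values
phi = np.exp(-beta * dt)                       # discrete-time transition factor
q = sigma2 * (1.0 - phi**2)                    # driving white-noise variance
x = np.zeros(n)
for k in range(1, n):
    x[k] = phi * x[k - 1] + rng.normal(scale=np.sqrt(q))

lags = np.arange(200)
empirical = np.array([np.mean(x[: n - l] * x[l:]) for l in lags])
theoretical = sigma2 * np.exp(-beta * lags * dt)
print(np.abs(empirical - theoretical).max())   # small for a long record
```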
## 3. Modeling Methods of AR Processes
Autoregressive moving average (ARMA) modeling describes a time series of measurements by assuming that each value of the series depends on (a) a weighted sum of the previous values of the same series (the AR part) and (b) a weighted sum of the present and previous values of a different time series (the MA part) [14]. The ARMA process can be described using a pole-zero (AR-MA) transfer function system $H(z)$ as follows:
$$H(z)=\frac{Y(z)}{W(z)}=\frac{B(z)}{A(z)}=\frac{\sum_{k=0}^{q} b_k z^{-k}}{1+\sum_{k=1}^{p} a_k z^{-k}} \tag{1}$$
where $W(z)$ is the z-transform of the input $w(n)$, $Y(z)$ is the z-transform of the output $y(n)$, $p$ is the order of the AR process, $q$ is the order of the MA process, and $a_1,a_2,\ldots,a_p$ and $b_1,b_2,\ldots,b_q$ are the AR and MA process parameters (weights), respectively. The AR process is a special case of an ARMA process, where $q$ in (1) is zero and thus $H(z)$ is an all-pole transfer function of the form
$$H(z)=\frac{Y(z)}{W(z)}=\frac{B(z)}{A(z)}=\frac{b_0}{1+\sum_{k=1}^{p} a_k z^{-k}} \tag{2}$$
Therefore, the name “autoregressive” comes from the fact that each signal sample is regressed on (or predicted from) the previous values of itself [3]. In the time domain, the AR transfer function relationship is obtained by applying the inverse z-transform to (2). The resultant equation is written as [14]
$$y(n)=-\sum_{k=1}^{p} a_k\,y(n-k)+b_0 w(n)=-a_1 y(n-1)-a_2 y(n-2)-\cdots-a_p y(n-p)+b_0 w(n) \tag{3}$$
The previous input-output relationship in both the frequency and time domains is shown in Figure 3.

Figure 3
The input-output relationship of an autoregressive (AR) process.

The problem in this case is to determine the values of the AR model parameters (predictor coefficients) $a_k$ that optimally represent the random part of the inertial sensor biases. This is performed by minimizing the error $e(n)$ between the original signal $y(n)$, represented by the “AR process” of (3), and the estimated signal $\hat{y}(n)$, which is estimated by an “AR model” of the form [8]
$$\hat{y}(n)=-\sum_{k=1}^{p} a_k\,y(n-k) \tag{4}$$
The cost function for this minimization problem is the energy $E$ of $e(n)$, which is given as
$$E=\sum_{n=1}^{N}e^2(n)=\sum_{n=1}^{N}\bigl[y(n)-\hat{y}(n)\bigr]^2=\sum_{n=1}^{N}\Bigl[-\sum_{k=1}^{p}a_k y(n-k)+b_0w(n)+\sum_{k=1}^{p}a_k y(n-k)\Bigr]^2=\sum_{n=1}^{N}b_0^2w^2(n)=\min \tag{5}$$
where $N$ is the total number of data samples and $w(n)$ is a stationary uncorrelated (white-noise) sequence with zero mean and unit variance.

Therefore, the resultant energy from (5), $\sum_{n=1}^{N}b_0^2w^2(n)$, averages to $b_0^2$ per sample, so $b_0^2$ represents the estimated variance $\sigma_w^2$ of the white-noise input to the AR model or, more generally, the prediction mean-square error $\sigma_e^2$. This holds because the AR model order $p$ is completely negligible with respect to the MEMS data sample size $N$.

Several methods have been reported to estimate the $a_k$ parameter values by fitting an AR model to the input data. Three AR methods are considered in this paper, namely, the Yule-Walker method, the covariance method, and Burg's method. In principle, all of these estimation techniques should lead to approximately the same parameter values if fairly large data samples are used [3].
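Before turning to the estimation methods, a minimal simulation sketch of (3) may help fix ideas; the AR(2) coefficients below are hypothetical, chosen only so that the all-pole filter is stable:

```python
import numpy as np

# A minimal simulation of the AR process (3): each sample regresses on its
# own past values plus scaled white noise b0*w(n).
rng = np.random.default_rng(1)
a = [-1.5, 0.7]                    # hypothetical a_1, a_2 in (3)
b0, n = 0.1, 10_000
p = len(a)
w = rng.standard_normal(n)         # zero-mean, unit-variance white noise
y = np.zeros(n)
for i in range(p, n):
    y[i] = -sum(a[k] * y[i - 1 - k] for k in range(p)) + b0 * w[i]
```

A synthetic series generated this way gives each estimator below a known ground truth to check against.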
### 3.1. The Yule-Walker Method
The Yule-Walker method, which is also known as the autocorrelation method, first determines the autocorrelation sequence $R(\tau)$ of the input signal (the inertial sensor residual bias in our case). Then, the AR model parameters are optimally computed by solving a set of linear normal equations. These normal equations are obtained using the formula [15]
$$\frac{\partial E}{\partial a_k}=0 \tag{6}$$
which leads to the following set of normal equations:
$$\mathbf{R}\,\mathbf{a}=-\mathbf{r}\;\Longleftrightarrow\;\mathbf{a}=-\mathbf{R}^{-1}\mathbf{r} \tag{7}$$
where
$$\mathbf{a}=\begin{bmatrix}a_1\\a_2\\\vdots\\a_p\end{bmatrix}\tag{8a}$$
$$\mathbf{r}=\begin{bmatrix}R(1)\\R(2)\\\vdots\\R(p)\end{bmatrix}\tag{8b}$$
$$\mathbf{R}=\begin{bmatrix}R(0)&R(1)&\cdots&R(p-1)\\R(1)&R(0)&\cdots&R(p-2)\\\vdots&\vdots&\ddots&\vdots\\R(p-1)&R(p-2)&\cdots&R(0)\end{bmatrix}\tag{8c}$$

If the mean-square error $\sigma_e^2$ is also required, it can be determined from the augmented system
$$\begin{bmatrix}R(0)&R(1)&\cdots&R(p)\\R(1)&R(0)&\cdots&R(p-1)\\\vdots&\vdots&\ddots&\vdots\\R(p)&R(p-1)&\cdots&R(0)\end{bmatrix}\begin{bmatrix}1\\a_1\\\vdots\\a_p\end{bmatrix}=\begin{bmatrix}\sigma_e^2\\0\\\vdots\\0\end{bmatrix}\tag{9a}$$

Equations (7) and (9a) are known as the Yule-Walker equations [7–9, 13]. Instead of solving (9a) directly (i.e., by first computing $\mathbf{R}^{-1}$), it can be solved efficiently using the Levinson-Durbin (LD) algorithm, which proceeds recursively to compute $a_1,a_2,\ldots,a_p$ and $\sigma_e^2$. The LD algorithm is an iterative technique that computes the next prediction coefficient (AR parameter) from the previous ones. The LD recursive procedure can be summarized as follows [9]:
$$E_0=R(0)\tag{9b}$$
$$\gamma_k=-\frac{R(k)+\sum_{i=1}^{k-1}a_{i,k-1}R(k-i)}{E_{k-1}},\quad 1\le k\le p\tag{9c}$$
$$a_{k,k}=\gamma_k\tag{9d}$$
$$a_{i,k}=a_{i,k-1}+\gamma_k\,a_{k-i,k-1},\quad 1\le i\le k-1\tag{9e}$$
$$E_k=\bigl(1-\gamma_k^2\bigr)E_{k-1}\tag{9f}$$
Equations (9b)–(9f) are solved recursively for $k=1,2,\ldots,p$, and the final solution for the AR parameters is provided by
$$a_i=a_{i,p},\quad 1\le i\le p\tag{9g}$$

Therefore, the values of the AR prediction coefficients in the Yule-Walker method are obtained directly by minimizing the forward prediction error $e_f(n)$ in the least-squares sense. The intermediate quantities $\gamma_k$ in (9c) are known as the reflection coefficients. In (9f), both energies $E_k$ and $E_{k-1}$ are positive; thus, the magnitude of $\gamma_k$ must be less than one to guarantee the stability of the all-pole filter.

However, the Yule-Walker method performs adequately only for long data records [15]. Its inadequate performance on short data records is usually due to the data windowing applied by the Yule-Walker algorithm. Moreover, the Yule-Walker method may introduce a large bias in the estimated AR coefficients, since it does not guarantee a stable solution of the model [16].
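The recursion (9b)–(9g) translates directly into code. The following is a minimal sketch; the function and variable names are our own, and the biased autocorrelation estimate in the demo is a common choice rather than something prescribed by the paper:

```python
import numpy as np

def levinson_durbin(R, p):
    """Solve the Yule-Walker system (9a) via the recursion (9b)-(9g).
    R: autocorrelation sequence R(0)..R(p). Returns a_1..a_p and sigma_e^2."""
    a = np.zeros(p + 1)                              # a[i] holds a_{i,k}
    E = R[0]                                         # (9b)
    for k in range(1, p + 1):
        gamma = -(R[k] + np.dot(a[1:k], R[k - 1:0:-1])) / E   # (9c)
        prev = a.copy()
        a[k] = gamma                                 # (9d)
        for i in range(1, k):
            a[i] = prev[i] + gamma * prev[k - i]     # (9e)
        E *= 1.0 - gamma ** 2                        # (9f)
    return a[1:], E                                  # (9g)

# Synthetic check: biased autocorrelation of a known AR(2) series.
rng = np.random.default_rng(0)
y = rng.standard_normal(5000)
for n in range(2, len(y)):
    y[n] += 1.2 * y[n - 1] - 0.5 * y[n - 2]
N, p = len(y), 2
R = np.array([np.dot(y[:N - t], y[t:]) / N for t in range(p + 1)])
print(levinson_durbin(R, p))                         # roughly [-1.2, 0.5]
```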
### 3.2. The Covariance Method
The covariance method is similar to the Yule-Walker method in that it minimizes the forward prediction error in the least-squares sense, but it does not apply any windowing to the data. Instead, the windowing is performed with respect to the prediction error to be minimized. Therefore, the AR model obtained by this method is typically more accurate than the one obtained from the Yule-Walker method [17].

Furthermore, it uses the covariance $C(\tau_i,\tau_j)$ instead of $R(\tau)$. In this case, the Toeplitz structure of the normal equations used in the autocorrelation method is lost, and hence the LD algorithm cannot be used for the computations. To compute $C^{-1}$ efficiently in this case, Cholesky factorization is usually utilized [15].

The method provides more accurate estimates than the Yule-Walker method, especially for short data records. However, the covariance method may lead to unstable AR models, since the LD algorithm is not used for solving the covariance normal equations [18].
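Because the covariance normal equations lose the Toeplitz structure, a generic least-squares solve is the simplest way to sketch the method. In this minimal illustration, NumPy's general solver stands in for the Cholesky factorization mentioned above, and all names are our own:

```python
import numpy as np

def covariance_ar(y, p):
    """Fit the a_k of (4) by unwindowed least squares over n = p..N-1."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    # Column k-1 holds y(n-k) for n = p..N-1 (no data windowing).
    X = np.column_stack([y[p - k:N - k] for k in range(1, p + 1)])
    target = y[p:]
    a, *_ = np.linalg.lstsq(X, -target, rcond=None)  # y(n) ~ -sum_k a_k y(n-k)
    e = target + X @ a                               # forward prediction error
    return a, float(np.mean(e ** 2))                 # a_k and sigma_e^2

# On the synthetic AR(2) series used above, this also returns roughly
# a = [-1.2, 0.5].
```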
### 3.3. Burg’s Method
Burg's method was introduced in 1967 to overcome most of the drawbacks of the other AR modeling techniques by providing both stable models and high resolution (i.e., more accurate estimates) for short data records [19]. Burg's method tries to make maximum use of the data by defining both forward and backward prediction error terms, $e_f(n)$ and $e_b(n)$. The energy to be minimized in this case, $E_{\mathrm{Burg}}$, is the sum of the forward and backward prediction error energies; that is,
$$E_{\mathrm{Burg}}=\sum_{n=1}^{N}\bigl[e_f^2(n)+e_b^2(n)\bigr]=\min\tag{10}$$
where $e_f$ and $e_b$ are defined as
$$e_f(n)=y(n)+a_1y(n-1)+a_2y(n-2)+\cdots+a_py(n-p)\tag{11a}$$
$$e_b(n)=y(n-p)+a_1y(n-p+1)+a_2y(n-p+2)+\cdots+a_py(n)\tag{11b}$$

The forward and backward prediction error criteria are the same and hence have the same optimal solution for the model coefficients [20]. Taking the energies in (9f) to be $E_{\mathrm{Burg}}$, the forward and backward prediction errors can be expressed recursively as
$$e_f^{k}(n)=e_f^{k-1}(n)+\gamma_k\,e_b^{k-1}(n-1)\tag{12a}$$
$$e_b^{k}(n)=e_b^{k-1}(n-1)+\gamma_k\,e_f^{k-1}(n)\tag{12b}$$

These recursion formulas form the basis of the so-called lattice (or ladder) realization of prediction error filtering (see Figure 4).

Figure 4
The general structure of the forward-backward prediction error lattice filter.

As was shown for the Yule-Walker method, the accuracy of the estimated parameters $a_1,a_2,\ldots,a_p$ and $\sigma_e^2$ depends mainly on accurate estimates of the autocorrelation sequence $R(\tau)$. However, this can rarely be achieved, due to the prewindowing of the data [17] or the existence of large measurement noise [21]. To avoid the difficulties of computing the autocorrelation sequences, Burg's method first estimates the reflection coefficients $\gamma_k$ using another formula in place of (9c). This formula is derived by substituting (12a) and (12b) into (10) and setting the derivative of $E_{\mathrm{Burg}}$ with respect to $\gamma_k$ (instead of $a_k$ as in the Yule-Walker and covariance methods) to zero. This leads to the form
$$\gamma_k=\frac{-2\sum_{n=1}^{N}e_f^{k-1}(n)\,e_b^{k-1}(n-1)}{\sum_{n=1}^{N}\bigl[e_f^{k-1}(n)\bigr]^2+\sum_{n=1}^{N}\bigl[e_b^{k-1}(n-1)\bigr]^2}\tag{13}$$
which shows clearly that the magnitude of $\gamma_k$ is forced (guaranteed) to be less than one, and thus the obtained model is guaranteed to be stable. Equations (12a), (12b), and (13) form the recursive structure of Burg's lattice filter, which is shown in Figure 4 with the initial conditions $e_f^0(n)=e_b^0(n)=y(n)$. Finally, the prediction coefficients $a_k$ are obtained by constraining them to satisfy (9e) in the LD algorithm.

Therefore, using (9e) and (13) together always ensures the stability of Burg's method solution [22]. Moreover, minimizing both the forward and backward prediction errors usually yields better estimation results than the forward-prediction approach used in the previous two methods. Finally, it has been reported in [23] that Burg's method generally provides better residual estimates than the Yule-Walker method [19].
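A minimal sketch of Burg's recursion follows, combining (12a), (12b), and (13) with the Levinson update (9e); the array alignment and the synthetic check are our own choices:

```python
import numpy as np

def burg_ar(y, p):
    """Burg's method: reflection coefficients via (13), errors via
    (12a)-(12b), and coefficients via the Levinson update (9e)."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    f = y.copy()                       # ef_{k-1}(n), indexed by n
    b = y.copy()                       # eb_{k-1}(n), indexed by n
    a = np.zeros(0)
    E = np.dot(y, y) / N               # E_0, an estimate of R(0)
    for k in range(1, p + 1):
        fk, bk = f[k:], b[k - 1:N - 1]            # ef_{k-1}(n), eb_{k-1}(n-1)
        gamma = -2.0 * np.dot(fk, bk) / (np.dot(fk, fk) + np.dot(bk, bk))  # (13)
        a = np.concatenate([a + gamma * a[::-1], [gamma]])                 # (9d)-(9e)
        f[k:], b[k:] = fk + gamma * bk, bk + gamma * fk                    # (12a)-(12b)
        E *= 1.0 - gamma ** 2                                              # (9f)
    return a, E                        # a_1..a_p and prediction MSE

# Synthetic check with a stable AR(2) series.
rng = np.random.default_rng(0)
y = rng.standard_normal(5000)
for n in range(2, len(y)):
    y[n] += 1.2 * y[n - 1] - 0.5 * y[n - 2]
print(burg_ar(y, 2))                   # roughly [-1.2, 0.5]
```

Since $|\gamma_k|<1$ is guaranteed by (13), every model produced this way is stable, which is the main practical advantage over the covariance method.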
## 4. Fast Orthogonal Search (FOS) Method
FOS [5–7, 10–12] is a general-purpose modeling technique which can be applied to spectral estimation and time-frequency analysis. The algorithm uses an arbitrary set of nonorthogonal candidate functions $p_m(n)$ and finds a functional expansion of an input $y(n)$ that minimizes the mean-square error (MSE) between the input and the functional expansion.

The functional expansion of the input $y(n)$ in terms of the arbitrary candidate functions $p_m(n)$ is given by
$$y(n)=\sum_{m=0}^{M}a_m P_m(n)+e(n)\tag{14}$$
where $a_m$ are the weights of the functional expansion, $P_0(n)=1$, the $P_m(n)$ are the model terms selected from the set of candidate functions, and $e(n)$ is the modeling error. These model terms can involve the system input $x$ and output $y$ and cross-products and powers thereof:
$$P_m(n)=y(n-l_1)\cdots y(n-l_i)\,x(n-k_1)\cdots x(n-k_j),\quad m\ge1,\ i\ge0,\ j\ge0,\ \forall i{:}\ 1\le l_i\le L,\ \forall j{:}\ 0\le k_j\le K\tag{15}$$
Because the candidate functions are nonorthogonal, there is no unique solution to (14). However, FOS may model the input with fewer model terms than an orthogonal functional expansion [11]. When the FFT models a frequency that does not have an integral number of periods in the record length, energy is spread into all the other frequencies, a phenomenon known as spectral leakage [24]. By using nonorthogonal candidate functions, FOS may be able to model such a frequency between two FFT bins with a single term, resulting in many fewer weighting terms in the model [5, 25].

FOS begins by creating a functional expansion using orthogonal basis functions such that
$$y(n)=\sum_{m=0}^{M}g_m w_m(n)+e(n)\tag{16}$$
where $w_m(n)$ is a set of orthogonal functions derived from the candidate functions $p_m(n)$, $g_m$ are the weights, and $e(n)$ is an error term. The orthogonal functions $w_m(n)$ are derived from the candidate functions $p_m(n)$ using the Gram-Schmidt (GS) orthogonalization algorithm; they are implicitly defined by the Gram-Schmidt coefficients $\alpha_{mr}$ and do not need to be computed point-by-point.

The Gram-Schmidt coefficients $\alpha_{mr}$ and the orthogonal weights $g_m$ can be found recursively using the equations [11]
$$w_0(n)=p_0(n)\tag{17}$$
$$D(m,0)=\overline{p_m(n)p_0(n)}\tag{18}$$
$$D(m,r)=\overline{p_m(n)p_r(n)}-\sum_{i=0}^{r-1}\alpha_{ri}\,D(m,i)\tag{19}$$
$$\alpha_{mr}=\frac{\overline{p_m(n)w_r(n)}}{\overline{w_r^2(n)}}=\frac{D(m,r)}{D(r,r)}\tag{20}$$
$$C(0)=\overline{y(n)p_0(n)}\tag{21}$$
$$C(m)=\overline{y(n)p_m(n)}-\sum_{r=0}^{m-1}\alpha_{mr}\,C(r)\tag{22}$$
$$g_m=\frac{C(m)}{D(m,m)}\tag{23}$$

where the overbar denotes the time average over the data record.
In its last stage, FOS calculates the weights $a_m$ of the original functional expansion (14) from the weights $g_m$ of the orthogonal series expansion and the Gram-Schmidt coefficients $\alpha_{mr}$. The value of $a_m$ can be found recursively using
$$a_m=\sum_{i=m}^{M}g_i v_i\tag{24}$$
where $v_m=1$ and
$$v_i=-\sum_{r=m}^{i-1}\alpha_{ir}v_r,\quad i=m+1,m+2,\ldots,M\tag{25}$$
FOS requires the calculation of the correlations between the candidate functions and of the correlations between the input and the candidate functions. The correlation between the input and a candidate function, $\overline{y(n)p_m(n)}$, is typically calculated point-by-point once at the start of the algorithm and then stored for quick retrieval.

The MSE of the orthogonal functional expansion has been shown to be [5, 6, 11]
$$\overline{\varepsilon^2(n)}=\overline{y^2(n)}-\sum_{m=0}^{M}g_m^2\,\overline{w_m^2(n)}\tag{26}$$
It then follows that the MSE reduction given by the mth candidate function is given by
$$Q_m=g_m^2\,\overline{w_m^2(n)}=g_m^2\,D(m,m)\tag{27}$$
The candidate with the greatest value for Q is selected as the model term, but optionally its addition to the model may be subject to its Q value exceeding a threshold level [5, 6, 11]. The residual MSE after the addition of each term can be computed by
$$\mathrm{MSE}_m=\mathrm{MSE}_{m-1}-Q_m\tag{28}$$
The search algorithm may be stopped when an acceptably small residual MSE has been achieved (i.e., a ratio of the MSE over the mean-squared value of the input [12] or an acceptably small percentage of the variance of the time series being modeled [5]). The search may also stop when a certain number of terms have been fitted. Another stopping criterion is when none of the remaining candidates can yield a sufficient MSE reduction value (this criterion would be representative of not having any candidates that would yield an MSE reduction value greater than the addition of a white Gaussian noise series).
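Pulling (17)–(28) together, the following minimal sketch performs greedy FOS term selection on a linear dictionary of lagged outputs. The synthetic data, the names, and the stopping rule (a fixed number of terms) are illustrative assumptions; the paper's own implementation may differ:

```python
import numpy as np

def fos(y, candidates, n_terms):
    """Greedy FOS term selection per (17)-(28); weights via (24)-(25)."""
    P = [np.ones_like(y)] + list(candidates)      # p_0(n) = 1, per (17)
    sel = [0]                                     # indices of chosen terms
    D = [[float(np.mean(P[0] * P[0]))]]           # D(r, i) rows, i <= r
    C = [float(np.mean(y * P[0]))]                # C(0), per (21)
    for _ in range(n_terms):
        best = (0.0, None, None, None)
        for j in range(1, len(P)):
            if j in sel:
                continue
            p, d = P[j], []
            for r in range(len(sel)):             # D(cand, r) via (18)-(19)
                d.append(float(np.mean(p * P[sel[r]]))
                         - sum(D[r][i] / D[i][i] * d[i] for i in range(r)))
            dmm = float(np.mean(p * p)) - sum(d[r] ** 2 / D[r][r]
                                              for r in range(len(sel)))
            if dmm <= 1e-12:                      # numerically degenerate
                continue
            cm = float(np.mean(y * p)) - sum(d[r] / D[r][r] * C[r]
                                             for r in range(len(sel)))  # (22)
            Q = cm ** 2 / dmm                     # MSE reduction, (27)
            if Q > best[0]:
                best = (Q, j, d + [dmm], cm)
        if best[1] is None:
            break
        _, j, row, cm = best
        sel.append(j); D.append(row); C.append(cm)
    M = len(sel) - 1
    g = [C[m] / D[m][m] for m in range(M + 1)]    # (23)
    alpha = [[D[m][r] / D[r][r] for r in range(m)] for m in range(M + 1)]
    a = np.zeros(M + 1)
    for m in range(M + 1):                        # recover a_m via (24)-(25)
        v = np.zeros(M + 1)
        v[m] = 1.0
        for i in range(m + 1, M + 1):
            v[i] = -sum(alpha[i][r] * v[r] for r in range(m, i))
        a[m] = sum(g[i] * v[i] for i in range(m, M + 1))
    return sel, a

# Synthetic check: FOS should pick the lag-1 and lag-2 candidates first,
# with weights near 1.2 and -0.5.
rng = np.random.default_rng(0)
y = rng.standard_normal(2000)
for n in range(2, len(y)):
    y[n] += 1.2 * y[n - 1] - 0.5 * y[n - 2]
cands = [np.concatenate([np.zeros(l), y[:-l]]) for l in range(1, 11)]
print(fos(y, cands, n_terms=3))
```

Note how the Gram-Schmidt functions $w_m(n)$ are never formed explicitly; only the $D$, $C$, and $\alpha$ quantities are carried, which is precisely what makes FOS fast.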
## 5. Experimental Results
The data were collected by a low-cost MEMS-based inertial measurement unit (IMU CC-300, Crossbow). These measurements were collected during a one-hour experiment to obtain stochastic error models of both gyroscopes and accelerometers. To illustrate the performance, two sensors were selected as examples (accelerometer-Y and Gyro-Y); the other inertial sensors gave similar results. Figure 5 shows one hour of sampled accelerometer-Y and Gyro-Y data acquired at 200 Hz.

Figure 5
Accelerometer-Y specific force and Gyro-Y angular rate measurements.

FOS is applied directly to the raw 200 Hz inertial sensor data without any preprocessing or denoising. Traditional methods like Yule-Walker, covariance, and Burg perform poorly on the raw data, so we first applied wavelet denoising of up to 4 levels of decomposition, which band-limited the spectrum of the raw inertial sensor data to 12.5 Hz. Therefore, unlike FOS, the other three methods operate on the denoised version of the same data. After denoising, the AR model parameters were estimated, together with the corresponding prediction MSE, for all sensors using the Yule-Walker, covariance, and Burg methods.

For FOS, the raw INS data were divided into three datasets for the model training, evaluation, and prediction stages. The first 3 minutes of the raw INS data were used for model training, in which the FOS algorithm identifies several possibly nonlinear AR equations. Different models can be obtained by changing the maximum delay $L$ in the output and the degree of the output cross-products (CP). The next 3 minutes of the data were used for the evaluation stage, in which the models are compared and the one fitting the real output with minimum MSE is chosen. As an example, the FOS model (CP = 1) of the accelerometer-Y is
$$\begin{split}Y[n]={}&6.03\times10^{-6}+2.25\,y[n-1]-1.2\,y[n-2]+1.02\,y[n-4]-0.67\,y[n-3]\\&-0.26\,y[n-5]+0.62\,y[n-7]-0.51\,y[n-6]-0.32\,y[n-8]+0.08\,y[n-9]\end{split}\tag{29}$$
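For illustration only, a model of the form (29) can be run recursively in the prediction stage; the helper below is hypothetical, with the coefficients transcribed from (29) and reordered by lag:

```python
# Hypothetical helper (not from the paper): predict the next sample from
# the previous nine outputs, with coefficients from (29) ordered by lag 1..9.
C0 = 6.03e-6
COEFFS = [2.25, -1.2, -0.67, 1.02, -0.26, -0.51, 0.62, -0.32, 0.08]

def predict_next(history):
    """history[-k] is y[n-k]; needs at least 9 past samples."""
    return C0 + sum(c * history[-k] for k, c in enumerate(COEFFS, start=1))
```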
In the prediction stage, the output and MSE of the chosen model are computed over the remaining (novel) raw INS data. Figure 6 shows the prediction MSE of accelerometer-Y samples for the Yule-Walker, covariance, Burg, and FOS methods. For FOS, an AR model of order 3 or 4 suffices, and the MSE decreases when the degree of the cross-product terms is raised to 2 (a nonlinear model). An AR model of order 7 or 8 is required for the Burg or covariance method, and of order 9 or 10 for Yule-Walker. A large AR model order complicates the estimation technique (such as the KF) used for the INS/GPS integration.

Figure 6
Accelerometer-Y prediction MSE using the Yule-Walker, covariance, Burg, and FOS methods. For CP degree = 1, only linear candidate terms up to a maximum output lag L = 10 were allowed. For CP degree = 2, both linear and $y(n-l_1)y(n-l_2)$ candidates were allowed, up to a maximum output lag L = 10.

Table 1 summarizes the performance of the three conventional stochastic modeling methods (Yule-Walker, covariance, and Burg) and the proposed FOS-based method with cross-product degree set to 1 (linear model) and 2 (nonlinear model), for model orders 1 and 10, over one hour of MEMS accelerometer-Y measurements. The FOS model is capable of denoising the accelerometer-Y measurements without appreciable degradation of the original signal. FOS achieves better performance in terms of lower MSE and less computational time than the traditional methods, with no need for any denoising techniques. Increasing the cross-product degree to 2 for FOS improves model accuracy and lessens position error by an order of magnitude (for L = 10) but increases computation time.

Table 1
Performance summary of AR models obtained by Yule-Walker, Burg, and FOS over one hour of accelerometer-Y measurements: first-order FOS: only linear model terms; second-order FOS: linear and cross-product model terms.
| Modeling technique | Model MSE (m/s²)² | Corresponding position error (m) | Computational time (s) |
| --- | --- | --- | --- |
| **Model order (maximum output lag L) = 1** | | | |
| Yule-Walker | 5 × 10⁻⁹ | 458 | 0.25 |
| Covariance/Burg | 3 × 10⁻¹¹ | 35 | 0.23 |
| CP degree = 1 FOS | 1 × 10⁻¹¹ | 20 | 0.13 |
| CP degree = 2 FOS | 1 × 10⁻¹¹ | 20 | 0.36 |
| **Model order (maximum output lag L) = 10** | | | |
| Yule-Walker | 5 × 10⁻⁹ | 458 | 0.69 |
| Covariance/Burg | 2 × 10⁻¹² | 9 | 0.45 |
| CP degree = 1 FOS | 3 × 10⁻¹⁴ | 1 | 0.30 |
| CP degree = 2 FOS | 2 × 10⁻¹⁶ | 0.09 | 0.60 |

A similar procedure was performed for the Gyro-Y sensor measurements. Figure 7 shows the prediction MSE of Gyro-Y samples using the Yule-Walker, covariance, Burg, and FOS methods. It is clear that the FOS method achieves the minimum MSE with a smaller model order than the other AR methods. Table 2 summarizes the performance of the three conventional stochastic modeling methods (Yule-Walker, covariance, and Burg) and the proposed FOS-based method with cross-product degree set to 1 (linear model) and 2 (nonlinear model), for model orders 1 and 10, over one hour of Gyro-Y measurements.

Table 2
Performance summary of both AR Yule-Walker and Burg models and FOS model over one hour of Gyro-Y measurements: first-order FOS: only linear model terms; second-order FOS: linear and cross-product model terms.
| Modeling technique | Model MSE (deg/h)² | Corresponding position error (m) | Computational time (s) |
| --- | --- | --- | --- |
| **Model order (maximum output lag L) = 1** | | | |
| Yule-Walker | 7 × 10⁻⁶ | 978 | 0.40 |
| Covariance/Burg | 5 × 10⁻⁶ | 826 | 0.23 |
| CP degree = 1 FOS | 1 × 10⁻⁶ | 369 | 0.09 |
| CP degree = 2 FOS | 1 × 10⁻⁶ | 369 | 0.35 |
| **Model order (maximum output lag L) = 10** | | | |
| Yule-Walker | 2 × 10⁻⁶ | 523 | 0.44 |
| Covariance/Burg | 2 × 10⁻⁷ | 165 | 0.28 |
| CP degree = 1 FOS | 4 × 10⁻¹⁰ | 7 | 0.18 |
| CP degree = 2 FOS | 3 × 10⁻¹¹ | 2 | 0.6 |

Figure 7
Gyro-Y prediction MSE using the Yule-Walker, covariance, Burg, and FOS methods. For CP degree = 1, only linear candidate terms up to a maximum output lag L = 10 were allowed. For CP degree = 2, both linear and $y(n-l_1)y(n-l_2)$ candidates were allowed, up to a maximum output lag L = 10.

Similar to the accelerometer case, the stochastic model obtained for Gyro-Y using FOS surpasses the models obtained by the other methods in MSE, in the corresponding position error that would result from the residual errors, and in computation time.
## 6. Conclusions
Inertial sensor errors are the most significant contributors to INS errors; thus, techniques to model these sensor errors are of interest to researchers. The current state of the art in modeling inertial sensor signals includes low-pass filtering and wavelet denoising techniques, which have had limited success in removing long-term inertial sensor errors.

This paper suggested using FOS to model MEMS sensor errors in the time domain. The FOS MSE and computational time were compared with those of the Yule-Walker, covariance, and Burg methods. FOS was applied directly to the one-hour raw 200 Hz inertial sensor data without any preprocessing or denoising, whereas the three traditional methods operated on a denoised version of the same data obtained by wavelet denoising of up to 4 levels of decomposition.

For both the gyroscope and accelerometer cases, the FOS model surpasses those obtained by the traditional methods. The results demonstrate the advantages of the proposed FOS-based method, including no need for preprocessing or denoising, lower computation time and MSE, and better performance with a smaller model order. Increasing the cross-product degree for FOS improves model accuracy and lessens position error but increases computation time.
---
*Source: 101820-2013-07-10.xml* | 101820-2013-07-10_101820-2013-07-10.md | 35,247 | Robust Modeling of Low-Cost MEMS Sensor Errors in Mobile Devices Using Fast Orthogonal Search | M. Tamazin; A. Noureldin; M. J. Korenberg | Journal of Sensors
(2013) | Engineering & Technology | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2013/101820 | 101820-2013-07-10.xml | ---
## Abstract
Accessibility to inertial navigation systems (INS) has been severely limited by cost in the past. The introduction of low-cost microelectromechanical system-based INS to be integrated with GPS in order to provide a reliable positioning solution has provided more wide spread use in mobile devices. The random errors of the MEMS inertial sensors may deteriorate the overall system accuracy in mobile devices. These errors are modeled stochastically and are included in the error model of the estimated techniques used such as Kalman filter or Particle filter. First-order Gauss-Markov model is usually used to describe the stochastic nature of these errors. However, if the autocorrelation sequences of these random components are examined, it can be determined that first-order Gauss-Markov model is not adequate to describe such stochastic behavior. A robust modeling technique based on fast orthogonal search is introduced to remove MEMS-based inertial sensor errors inside mobile devices that are used for several location-based services. The proposed method is applied to MEMS-based gyroscopes and accelerometers. Results show that the proposed method models low-cost MEMS sensors errors with no need for denoising techniques and using smaller model order and less computation, outperforming traditional methods by two orders of magnitude.
---
## Body
## 1. Introduction
Presently, GPS-enabled mobile devices offer various positioning capabilities to pedestrians, drivers, and cyclists. GPS provides absolute positioning information, but when signal reception is attenuated and becomes unreliable due to multipath, interference, and signal blockage, augmentation of GPS with inertial navigation systems (INS) or the like is needed. INS is inherently immune to the signal jamming, spoofing, and blockage vulnerabilities of GPS, but the accuracy of INS is significantly affected by the error characteristics of the inertial sensors it employs [1].GPS/INS integrated navigation systems are extensively used [2], for example, in mobile devices that require low-cost microelectromechanical System (MEMS) inertial sensors (gyroscopes and accelerometers) due to their low cost, low power consumption, small size, and portability. The inadequate long-term performance of most commercially available MEMS-based INS limits their usefulness in providing reliable navigation solutions. MEMSs are challenging in any consumer navigation system because of their large errors, extreme stochastic variance, and quickly changing error characteristics.According to [3], the inertial sensor errors of a low-cost INS consist of two parts: a deterministic part and a random part. The deterministic part includes biases and scale factors, which are determined by calibration and then removed from the raw measurements. The random part is correlated over time and is basically due to the variations in the INS sensor bias terms. These errors are mathematically integrated during the INS mechanization process, which results in increasingly inaccurate position and attitude over time. Therefore, these errors must be modeled.The fusion of INS and GPS data is a highly synergistic coupling as INS can provide reliable short-term positioning information during GPS outages, while GPS can correct for longer-term INS errors [1]. INS and GPS integration (i.e., data fusion) is typically achieved through an optimal estimation technique, such as the Kalman filter or Particle filter [4].Despite having an INS/GPS integration algorithm to correct for INS errors, it is still advantageous to have an accurate INS solution before the data fusion process. This requires preprocessing (i.e., prefiltering or denoising) each of the inertial sensor (gyroscope and accelerometer) signals before they are used to compute position, velocity, and attitude. This paper offers a robust method based on fast orthogonal search (FOS) to model the stochastic errors of low-cost MEMS sensors for smart mobile phones.Orthogonal search [5] is a technique developed for identifying difference equation and functional expansion models by orthogonalizing over the actual data record. It mainly utilizes Gram-Schmidt orthogonalization to create a series of orthogonal functions from a given set of arbitrary functions. This enables signal representation by a functional expansion of arbitrary functions and therefore provides a wider selection of candidate functions that can be used to represent the signal. FOS is a variant of orthogonal search [6] where one major difference is that FOS achieves orthogonal identification without creating orthogonal functions at any stage of the process. As a result FOS is many times faster and less memory storage intensive than the earlier technique, while equally as accurate and robust [5–7].Many techniques have been used previously to denoise and stochastically model the inertial sensor errors [3, 8, 9]. 
For example, several levels of wavelet decomposition have been used to denoise the raw INS data and eliminate high-frequency disturbances [3, 8, 9]. Modeling inertial sensor errors using autoregressive (AR) models was performed in [3], where the Yule-Walker, the covariance, and Burg AR methods were used. The AR model parameters were estimated after reducing the INS sensor measurements noise using wavelet denoising techniques.FOS has been applied before in several applications [5–7, 10–12]. In [13], FOS was used to augment a Kalman filter (KF) to enhance the accuracy of a low-cost 2D MEMS-based navigation system by modeling only the azimuth error. FOS is used in this paper to model the raw MEMS gyroscope and accelerometer measurement errors in the time domain. In this paper, the performance of FOS is compared to linear modeling techniques such as Yule-Walker, the covariance and Burg AR methods in terms of mean-square errors (MSEs) and computational time.
## 2. Problem Statement
It is generally accepted that the long-term errors are modeled as correlated random noise. Correlated random noise is typically characterized by an exponentially decaying autocorrelation function with a finite correlation time. When the autocorrelation function of some of the noise sequences of MEMS measurements is studied, it has been shown that a first-order Gauss-Markov (GM) process may not be adequate in all cases to model such noise behavior. The shape of the autocorrelation sequence is often different from that of a first-order GM process, which is represented by a decaying exponential as shown in Figure1. The GM process is characterized by an autocorrelation function of the form Rxx(τ)=σ2e-β|τ|, where σ2 is variance of the process and the correlation time (1/e point) is given by 1/β. The autocorrelation function approaches zero as τ→∞, as depicted in Figure 1, indicating that the process gradually becomes less and less correlated as the time separation between samples increases [1].Figure 1
The autocorrelation sequence of a first-order Gauss-Markov (GM) process.Most of the computed autocorrelation sequences follow higher-order GM processes. An example of such computed autocorrelation sequences for one hour of static data of an MEMS accelerometer is shown in Figure2. It clearly represents a higher-order GM process. These higher-order GM processes can be modeled using an autoregressive (AR) process of an appropriate order. In [3] it has been decided to model the randomness of the inertial sensor measurements using an AR process of order higher than one. With the present computational efficiency of microprocessor systems, efficient modeling of MEMS residual biases can be realized, and, thus, accurate prediction and estimation of such errors can be provided.Figure 2
The computed autocorrelation sequence for an MEMS accelerometer data.
## 3. Modeling Methods of AR Processes
The autoregressive moving average (ARMA) modeling is based on the mathematical modeling of a time series of measurements assuming that each value of such series is dependent on (a) a weighted sum of the “previous” values of the same series (AR part) and (b) a weighted sum of the “present and previous” values of a different time series (MA part) [14]. The ARMA process can be described using a pole-zero (AR-MA) transfer function system H(z) as follows:
(1)H(z)=Y(z)W(z)=B(z)A(z)=∑k=0qbkz-k1+∑k=1pakz-k,
where W(z) is the z-transform of the input w(n), Y(z) is the z-transform of the output y(n), p is the order of the AR process, q is the order of the MA process, and a1,a2,…,ap and b1,b2,…,bq are the AR and MA process parameters (weights), respectively. The AR process is a special case of an ARMA process, where q in (1) will be zero and thus H(z) will be only an all-pole transfer function of the form
(2)H(z)=Y(z)W(z)=B(z)A(z)=b01+∑k=1pakz-k.
Therefore, the name “Autoregressive” comes from the fact that each signal sample is regressed on (or predicted from) the previous values of itself [3]. In the time domain, the previous AR transfer function relationship can be obtained after applying the inverse z-transform for (2).The resultant equation is written as [14]
(3)y(n)=-∑k=1paky(n-k)+b0w(n)y(n)=-a1y(n-1)-a2y(n-2)-⋯-apy(n-p)+b0w(n).
The previous input-output relationship in both frequency and time domains is shown in Figure 3.Figure 3
The input-output relationship of an autoregressive (AR) process.The problem in this case is to determine the values of the AR model parameters (predictor coefficients)ak that optimally represent the random part of the inertial sensor biases. This is performed by minimizing the error e(n) between the original signal y(n) represented by the “AR process” of (3) and the estimated signal y^(n), which is estimated by an “AR model” of the form [8]
(4)y^(n)=-∑k=1paky(n-k).
The cost function for this minimization problem is the energy E of e(n), which is given as
(5)E=∑n=1Ne2(n)=∑n=1N[y(n)-y^(n)]2=∑n=1N[-∑k=1paky(n-k)+b0w(n)+∑k=1paky(n-k)]2=∑n=1Nb02w2(n)=min,
where N is the total number of data samples. In this case, w(n) is a sequence of stationary uncorrelated sequences (white noise) with zero mean and unity variance.Therefore, the resultant energy from (5) [∑n=1Nb02w2(n)] will be b02. Therefore, b02 represents the estimated variance σw2 of the white noise input to the AR model or, more generally, the prediction mean-square errorσe2. This is due to the fact that the AR model order p is completely negligible with respect to the MEMS data sample size N.Several methods have been reported to estimate theak parameter values by fitting an AR model to the input data. Three AR methods are considered in this paper, namely, the Yule-Walker method, the covariance method, and Burg’s method. In principle, all of these estimation techniques should lead to approximately the same parameter values if fairly large data samples are used [3].
### 3.1. The Yule-Walker Method
The Yule-Walker method, which is also known as the autocorrelation method, determines first the autocorrelation sequenceR(τ) of the input signal (inertial sensor residual bias in our case). Then, the AR model parameters are optimally computed by solving a set of linear normal equations. These normal equations are obtained using the formula [15]
(6)∂E∂ak=0,
which leads to the following set of normal equations:
(7)Ra=-r⟷a=-R-1r,
where(8a)a=[a1a2⋮ap](8b)r=[R(1)R(2)⋮R(p)],(8c)R=[R(0)R(1)…R(p-1)R(1)R(0)⋯R(p-2)⋮⋮…⋮R(p-1)R(p-2)…R(0)].If the mean-square errorσe2 is also required, it can be determined by(9a)[R(0)R(1)…R(p-1)R(1)R(0)⋯R(p-2)⋮⋮…⋮R(p-1)R(p-2)…R(0)][1a1⋮ap]=[σe20⋮0].Equations (7) and (9a) are known as the Yule-Walker equations [7–9, 13]. Instead of solving (9a) directly (i.e., by first computing R-1), it can efficiently be solved using the Levinson-Durbin (LD) algorithm which proceeds recursively to compute a1,a2,…,ap, and σe2. The LD algorithm is an iterative technique that computes the next prediction coefficient (AR parameter) from the previous one. This LD recursive procedure can be summarized in the following [9]:
(9b)E0=R(0)(9c)γk=-R(k)+∑i=1k-1ai,k-1R(k-i)Ek-1,1≤k≤p(9d)ak,k=γk(9e)ai,k=ai,k-1+γkak-i,k-1,1≤i≤k-1(9f)Ek=(1-γk2)Ek-1.
Equations (9b)–(9f) are solved recursively for k=1,2,…,p and the final solution for the AR parameters is provided by
(9g)ai=ai,p,1≤i≤p.Therefore, the values of the AR prediction coefficients in the Yule-Walker method are provided directly based on minimizing the forward prediction erroref(n) in the least-squares sense. The intermediate quantities γk represented by (9c) are known as the reflection coefficients. In (9f), both energies Ek and Ek-1 are positive, and, thus, the magnitude of γk should be less than one to guarantee the stability of the all-pole filter.However, the Yule-Walker method performs adequately only for long data records [15]. The inadequate performance in case of short data records is usually due to the data windowing applied by the Yule-Walker algorithm. Moreover, the Yule-Walker method may introduce a large bias in the AR-estimated coefficients since it does not guarantee a stable solution of the model [16].
### 3.2. The Covariance Method
The covariance method is similar to the Yule-Walker method in that it minimizes the forward prediction error in the least-squares sense, but it does not consider any windowing of the data. Instead, the windowing is performed with respect to the prediction error to be minimized. Therefore, the AR model obtained by this method is typically more accurate than the one obtained from the Yule-Walker method [17].Furthermore, it uses the covarianceC(τi,τj) instead of R(τ). In this case, the Toeplitz structure of the normal equations used in the autocorrelation method is lost, and hence the LD algorithm cannot be used for the computations. To achieve an efficient C-1 in this case, Cholesky factorization is usually utilized [15].The method provides more accurate estimates than the Yule-Walker method especially for short data records. However, the covariance method may lead to unstable AR models since the LD algorithm is not used for solving the covariance normal equations [18].
### 3.3. Burg’s Method
Burg’s method was introduced in 1967 to overcome most of the drawbacks of the other AR modeling techniques by providing both stable models and high resolution (i.e., more accurate estimates) for short data records [19]. Burg’s method tries to make maximum use of the data by defining both forward and backward prediction error terms, $e_f(n)$ and $e_b(n)$. The energy to be minimized in this case, $E_{\text{Burg}}$, is the sum of the forward and backward prediction error energies; that is,
$$E_{\text{Burg}} = \sum_{n=1}^{N} \left[ e_f^2(n) + e_b^2(n) \right] = \min, \tag{10}$$
where $e_f$ and $e_b$ are defined as

$$e_f(n) = y(n) + a_1 y(n-1) + a_2 y(n-2) + \cdots + a_p y(n-p), \tag{11a}$$

$$e_b(n) = y(n-p) + a_1 y(n-p+1) + a_2 y(n-p+2) + \cdots + a_p y(n). \tag{11b}$$

The forward and backward prediction error criteria are the same and, hence, have the same optimal solution for the model coefficients [20]. Taking the energies in (9f) to be $E_{\text{Burg}}$, the forward and backward prediction errors can therefore be expressed recursively as

$$e_f^k(n) = e_f^{k-1}(n) + \gamma_k\, e_b^{k-1}(n-1), \tag{12a}$$

$$e_b^k(n) = e_b^{k-1}(n-1) + \gamma_k\, e_f^{k-1}(n). \tag{12b}$$

These recursion formulas form the basis of the so-called lattice (or ladder) realization of prediction error filtering (see Figure 4).

Figure 4
The forward-backward prediction error lattice filter general structure.

As shown for the Yule-Walker method, the accuracy of the estimated parameters $a_1, a_2, \ldots, a_p$ and $\sigma_e^2$ depends mainly on accurate estimates of the autocorrelation sequence R(τ). However, this can rarely be achieved, due to the prewindowing of the data [17] or the existence of large measurement noise [21]. To avoid the difficulties of computing the autocorrelation sequences, Burg's method first estimates the reflection coefficients $\gamma_k$ using another formula instead of (9c). This formula is derived by substituting (12a) and (12b) into (10) and setting the derivative of $E_{\text{Burg}}$ with respect to $\gamma_k$ (instead of $a_k$, as in the Yule-Walker and covariance methods) to zero. This leads to the form

$$\gamma_k = \frac{-2 \sum_{n=1}^{N} e_f^{k-1}(n)\, e_b^{k-1}(n-1)}{\sum_{n=1}^{N} \left[e_f^{k-1}(n)\right]^2 + \sum_{n=1}^{N} \left[e_b^{k-1}(n-1)\right]^2}, \tag{13}$$
which shows clearly that the magnitude of $\gamma_k$ is forced (guaranteed) to be less than one, and thus the obtained model is guaranteed to be stable. Equations (12a), (12b), and (13) form the recursive structure of Burg's lattice filter, shown in Figure 4, with the initial conditions $e_f^0(n) = e_b^0(n) = y(n)$. Finally, the prediction coefficients $a_k$ are obtained by constraining them to satisfy (9e) in the LD algorithm. Therefore, using (9e) and (13) together always ensures the stability of Burg's method solution [22]. Moreover, minimizing both the forward and backward prediction errors usually yields better estimates than the forward-only approach used in the previous two methods. Finally, it has been reported [23] that Burg's method generally provides better residual estimates than the Yule-Walker method [19].
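A minimal sketch of Burg's recursion, assembled from (12a), (12b), (13), and the order update (9e), follows; the test data are the same hypothetical AR(2) series used above:

```python
# Burg's method: estimate reflection coefficients from (13), update the
# forward/backward errors by (12a)-(12b), and grow the AR model via (9e).
import numpy as np
from scipy.signal import lfilter

def burg(y, p):
    """Return the AR parameters a_1..a_p and the final error energy E_p."""
    ef = np.asarray(y, dtype=float).copy()   # ef_0(n) = y(n)
    eb = ef.copy()                           # eb_0(n) = y(n)
    a = np.zeros(0)
    E = np.dot(ef, ef) / len(ef)
    for _ in range(p):
        f, b = ef[1:], eb[:-1]               # ef_{k-1}(n) and eb_{k-1}(n-1)
        gamma = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))  # (13)
        a = np.concatenate([a + gamma * a[::-1], [gamma]])           # (9d)-(9e)
        ef, eb = f + gamma * b, b + gamma * f                        # (12a)-(12b)
        E *= (1.0 - gamma ** 2)                                      # (9f)
    return a, E

rng = np.random.default_rng(0)
y = lfilter([0.1], [1.0, -1.5, 0.7], rng.standard_normal(20000))
a_burg, E_burg = burg(y, p=2)                # |gamma_k| < 1 by construction
```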
## 4. Fast Orthogonal Search (FOS) Method
FOS [5–7, 10–12] is a general-purpose modeling technique which can be applied to spectral estimation and time-frequency analysis. The algorithm uses an arbitrary set of nonorthogonal candidate functions $p_m(n)$ and finds a functional expansion of an input y(n) that minimizes the mean square error (MSE) between the input and the functional expansion. The functional expansion of the input y(n) in terms of the arbitrary candidate functions $p_m(n)$ is given by
$$y(n) = \sum_{m=0}^{M} a_m P_m(n) + e(n), \tag{14}$$
where $a_m$ is the set of weights of the functional expansion, $P_0(n) = 1$, $P_m(n)$ are the model terms selected from the set of candidate functions, and e(n) is the modeling error. These model terms can involve the system input x and output y and cross-products and powers thereof:
$$P_m(n) = y(n-l_1) \cdots y(n-l_i)\, x(n-k_1) \cdots x(n-k_j), \quad m \ge 1,\ i \ge 0,\ j \ge 0, \tag{15}$$

with $1 \le l_i \le L$ for all i and $0 \le k_j \le K$ for all j.
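For concreteness, the sketch below (the helper name and the restriction to output-only terms are our choices, not the paper's code) builds such a candidate pool for a purely autoregressive model: a constant term, lagged outputs up to L, and, for a second-order model, pairwise cross-products:

```python
# Build a FOS candidate pool in the spirit of (15), using only output lags
# (no exogenous input x): constant, y(n-l), and y(n-l1)*y(n-l2) terms.
import numpy as np

def build_candidates(y, L, cp_degree=1):
    """Return targets y(n) for n >= L and a dict of aligned candidate series."""
    N = len(y)
    lagged = {l: y[L - l:N - l] for l in range(1, L + 1)}   # y(n-l)
    cands = {"const": np.ones(N - L)}
    cands.update({f"y[n-{l}]": v for l, v in lagged.items()})
    if cp_degree >= 2:                                      # cross-products
        for l1 in range(1, L + 1):
            for l2 in range(l1, L + 1):
                cands[f"y[n-{l1}]*y[n-{l2}]"] = lagged[l1] * lagged[l2]
    return y[L:], cands
```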
Because the candidate functions are nonorthogonal, there is no unique solution for (14). However, FOS may model the input with fewer model terms than an orthogonal functional expansion [11]. For the FFT to model a frequency that does not have an integral number of periods in the record length, energy is spread into all the other frequencies, a phenomenon known as spectral leakage [24]. By using nonorthogonal candidate functions, FOS may be able to model such a frequency between two FFT bins with a single term, resulting in many fewer weighting terms in the model [5, 25]. FOS begins by creating a functional expansion using orthogonal basis functions such that

$$y(n) = \sum_{m=0}^{M} g_m w_m(n) + e(n), \tag{16}$$
where $w_m(n)$ is a set of orthogonal functions derived from the candidate functions $p_m(n)$, $g_m$ is the corresponding weight, and e(n) is an error term. The orthogonal functions $w_m(n)$ are derived from the candidate functions $p_m(n)$ using the Gram-Schmidt (GS) orthogonalization algorithm; they are implicitly defined by the Gram-Schmidt coefficients $\alpha_{mr}$ and do not need to be computed point-by-point. The Gram-Schmidt coefficients $\alpha_{mr}$ and the orthogonal weights $g_m$ can be found recursively using the equations [11]
$$w_0(n) = p_0(n) \tag{17}$$

$$D(m,0) = \overline{p_m(n)\, p_0(n)} \tag{18}$$

$$D(m,r) = \overline{p_m(n)\, p_r(n)} - \sum_{i=0}^{r-1} \alpha_{ri}\, D(m,i) \tag{19}$$

$$\alpha_{mr} = \frac{\overline{p_m(n)\, w_r(n)}}{\overline{w_r^2(n)}} = \frac{D(m,r)}{D(r,r)} \tag{20}$$

$$C(0) = \overline{y(n)\, p_0(n)} \tag{21}$$

$$C(m) = \overline{y(n)\, p_m(n)} - \sum_{r=0}^{m-1} \alpha_{mr}\, C(r) \tag{22}$$

$$g_m = \frac{C(m)}{D(m,m)}, \tag{23}$$

where the overbar denotes the time average over the data record.
In its last stage, FOS calculates the weights $a_m$ of the original functional expansion (14) from the weights $g_m$ of the orthogonal series expansion and the Gram-Schmidt coefficients $\alpha_{mr}$. The value of $a_m$ can be found recursively using
$$a_m = \sum_{i=m}^{M} g_i v_i, \tag{24}$$
where $v_m = 1$ and
$$v_i = -\sum_{r=m}^{i-1} \alpha_{ir} v_r, \quad i = m+1, m+2, \ldots, M. \tag{25}$$
FOS requires the calculation of the correlations between the candidate functions and between the input and the candidate functions. The input-candidate correlation $\overline{y(n)\, p_m(n)}$ is typically calculated point-by-point once at the start of the algorithm and then stored for later quick retrieval. The MSE of the orthogonal functional expansion has been shown to be [5, 6, 11]
$$\overline{\varepsilon^2(n)} = \overline{y^2(n)} - \sum_{m=0}^{M} g_m^2\, \overline{w_m^2(n)}. \tag{26}$$
It then follows that the MSE reduction given by the mth candidate function is given by
$$Q_m = g_m^2\, \overline{w_m^2(n)} = g_m^2\, D(m,m). \tag{27}$$
The candidate with the greatest value for Q is selected as the model term, but optionally its addition to the model may be subject to its Q value exceeding a threshold level [5, 6, 11]. The residual MSE after the addition of each term can be computed by
$$\text{MSE}_m = \text{MSE}_{m-1} - Q_m. \tag{28}$$
The search algorithm may be stopped when an acceptably small residual MSE has been achieved (i.e., a ratio of the MSE over the mean-squared value of the input [12] or an acceptably small percentage of the variance of the time series being modeled [5]). The search may also stop when a certain number of terms have been fitted. Another stopping criterion is when none of the remaining candidates can yield a sufficient MSE reduction value (this criterion would be representative of not having any candidates that would yield an MSE reduction value greater than the addition of a white Gaussian noise series).
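The selection loop can be sketched as follows. This is a minimal illustration, not the paper's implementation: for clarity it orthogonalizes each candidate explicitly (point-by-point Gram-Schmidt), whereas the paper's FOS reaches the same selections through the implicit recursions (17)–(23) without forming $w_m(n)$; the final weights are recovered by least squares instead of the back-substitution (24)–(25), which yields the same $a_m$ for the chosen terms:

```python
# Greedy FOS term selection: at each step, pick the candidate with the
# largest MSE reduction Q_m (27), and subtract it from the running MSE (28).
import numpy as np

def fos(y, candidates, max_terms, min_q_fraction=1e-4):
    names = list(candidates)
    P = np.column_stack([np.asarray(candidates[k], float) for k in names])
    mse = np.mean(y ** 2)
    chosen, W = [], []
    for _ in range(max_terms):
        best = None
        for j in range(P.shape[1]):
            if j in chosen:
                continue
            w = P[:, j].copy()
            for wr in W:                     # orthogonalize vs. chosen terms
                w -= (w @ wr) / (wr @ wr) * wr
            d = (w @ w) / len(y)             # mean of w^2, i.e., D(m, m)
            if d <= 1e-12:                   # candidate is (nearly) redundant
                continue
            g = (y @ w) / (w @ w)            # orthogonal weight g_m
            q = g * g * d                    # MSE reduction Q_m, eq. (27)
            if best is None or q > best[0]:
                best = (q, j, w)
        # Stop when no candidate reduces the MSE enough -- one of the
        # stopping criteria described in the text.
        if best is None or best[0] < min_q_fraction * np.mean(y ** 2):
            break
        q, j, w = best
        chosen.append(j)
        W.append(w)
        mse -= q                             # eq. (28)
    A = P[:, chosen]
    a, *_ = np.linalg.lstsq(A, y, rcond=None)  # weights in the original basis
    return [names[j] for j in chosen], a, mse
```

Paired with the candidate-builder sketch above, `terms, a, mse = fos(*build_candidates(y, L=10, cp_degree=2), max_terms=5)` would emulate the CP degree = 2 experiments reported in the next section.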
## 5. Experimental Results
The data were collected by a low-cost MEMS-based inertial measurement unit (IMU CC-300, Crossbow). These measurements were collected during a one-hour experiment to obtain stochastic error models of both gyroscopes and accelerometers. To illustrate the performance, two sensors were selected as an example (accelerometer-Y, Gyro-Y), while the other inertial sensors gave similar results. Figure 5 shows one hour of sampled accelerometer-Y and Gyro-Y data acquired at 200 Hz.

Figure 5
Accelerometer-Y and Gyro-Y specific force measurements.

FOS is applied directly on the raw inertial sensor 200 Hz data without any preprocessing or denoising. Traditional methods like Yule-Walker, covariance, and Burg perform poorly on the raw data, so we first applied wavelet denoising of up to 4 levels of decomposition, which band-limited the spectrum of the raw inertial sensor data to 12.5 Hz. Therefore, unlike FOS, the other three methods operate on the denoised version of the same data. After denoising, AR model parameters were estimated, as well as the corresponding prediction MSE, for all sensors using the Yule-Walker, covariance, and Burg methods.

For FOS, the raw INS data were divided into three datasets for the model training, evaluation, and prediction stages. The first 3 minutes of the INS raw data were utilized for model training, which uses the FOS algorithm to identify several possibly nonlinear AR equations. Different models can be obtained by changing the maximum delay L in the output and the degree of output cross-products (CP). The next 3 minutes of the data were used for the evaluation stage. Here, models are compared and the best one, fitting the real output with minimum MSE, is chosen. As an example, the FOS model (CP = 1) of the accelerometer-Y is shown as follows:
$$\begin{aligned} Y[n] = {}& 6.03 \times 10^{-6} + 2.25\, y[n-1] - 1.2\, y[n-2] + 1.02\, y[n-4] - 0.67\, y[n-3] \\ & - 0.26\, y[n-5] + 0.62\, y[n-7] - 0.51\, y[n-6] - 0.32\, y[n-8] + 0.08\, y[n-9]. \end{aligned} \tag{29}$$
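Read as a difference equation, (29) is a one-step-ahead predictor. A minimal sketch of its evaluation (the helper and its calling convention are ours, not the paper's):

```python
# One-step-ahead prediction with the identified accelerometer-Y model (29).
coeffs = {1: 2.25, 2: -1.2, 3: -0.67, 4: 1.02, 5: -0.26,
          6: -0.51, 7: 0.62, 8: -0.32, 9: 0.08}      # lag -> coefficient

def predict_next(y_past):
    """y_past[-k] must hold y[n-k]; returns the model's prediction of y[n]."""
    return 6.03e-6 + sum(c * y_past[-lag] for lag, c in coeffs.items())
```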
In the prediction stage, the output and MSE of the chosen model are computed over the remaining (novel) raw INS data. Figure 6 shows the prediction MSE of accelerometer-Y samples for the Yule-Walker, covariance, Burg, and FOS methods. For FOS, an AR model of order 3 or 4 suffices, and the MSE decreases when the degree of the cross-product terms is raised to 2 (nonlinear model). An AR model of order 7 or 8 is required for the Burg or covariance method, and order 9 or 10 for Yule-Walker. A large AR model order complicates the estimation method (e.g., a Kalman filter) used for the INS/GPS integration.

Figure 6
Accelerometer-Y prediction MSE using Yule-Walker, covariance, Burg, and FOS methods. For CP degree = 1, only linear candidate terms up to a maximum output lag L = 10 were allowed. For CP degree = 2, both linear and y(n-l1)y(n-l2) candidates were allowed, up to a maximum output lag L = 10.

Table 1 shows a summary of the performance of the three conventional stochastic modeling methods (Yule-Walker, covariance, and Burg) and the proposed FOS-based method with cross-product order set to 1 (i.e., linear model) and 2 (i.e., nonlinear model) for model orders 1 and 10 over one hour of MEMS accelerometer-Y measurements. The FOS model is capable of denoising the accelerometer-Y measurements without appreciable degradation of the original signal. FOS achieves better performance in terms of lower MSE and less computational time than the traditional methods, with no need for any denoising techniques. Increasing the cross-product degree to 2 for FOS improves model accuracy and lessens position error by an order of magnitude (for L = 10) but increases computation time.

Table 1
Performance summary of AR models obtained by Yule-Walker, Burg, and FOS over one hour of accelerometer-Y measurements. First-order FOS: only linear model terms; second-order FOS: linear and cross-product model terms.
| Modelling technique | Model MSE (m/s²)² | Corresponding position error (m) | Computational time (s) |
| --- | --- | --- | --- |
| **Model order (maximum output lag L) = 1** | | | |
| Yule-Walker | 5 × 10−9 | 458 | 0.25 |
| Covariance/Burg | 3 × 10−11 | 35 | 0.23 |
| CP degree = 1 FOS | 1 × 10−11 | 20 | 0.13 |
| CP degree = 2 FOS | 1 × 10−11 | 20 | 0.36 |
| **Model order (maximum output lag L) = 10** | | | |
| Yule-Walker | 5 × 10−9 | 458 | 0.69 |
| Covariance/Burg | 2 × 10−12 | 9 | 0.45 |
| CP degree = 1 FOS | 3 × 10−14 | 1 | 0.30 |
| CP degree = 2 FOS | 2 × 10−16 | 0.09 | 0.60 |

A similar procedure was performed for the Gyro-Y sensor measurements. Figure 7 shows the prediction MSE of Gyro-Y samples using the Yule-Walker, covariance, Burg, and FOS methods. It is clear that the FOS method achieves the minimum MSE with a lower model order than the other AR methods. Table 2 shows a summary of the performance of the three conventional stochastic modeling methods (Yule-Walker, covariance, and Burg) and the proposed FOS-based method with cross-product order set to 1 (i.e., linear model) and 2 (i.e., nonlinear model) for model orders 1 and 10 over one hour of Gyro-Y measurements.

Table 2
Performance summary of both the AR Yule-Walker and Burg models and the FOS model over one hour of Gyro-Y measurements. First-order FOS: only linear model terms; second-order FOS: linear and cross-product model terms.
| Modeling technique | Model MSE (deg/h)² | Corresponding position error (m) | Computational time (s) |
| --- | --- | --- | --- |
| **Model order (maximum output lag L) = 1** | | | |
| Yule-Walker | 7 × 10−6 | 978 | 0.40 |
| Covariance/Burg | 5 × 10−6 | 826 | 0.23 |
| CP degree = 1 FOS | 1 × 10−6 | 369 | 0.09 |
| CP degree = 2 FOS | 1 × 10−6 | 369 | 0.35 |
| **Model order (maximum output lag L) = 10** | | | |
| Yule-Walker | 2 × 10−6 | 523 | 0.44 |
| Covariance/Burg | 2 × 10−7 | 165 | 0.28 |
| CP degree = 1 FOS | 4 × 10−10 | 7 | 0.18 |
| CP degree = 2 FOS | 3 × 10−11 | 2 | 0.6 |

Figure 7
Gyro-Y prediction MSE using Yule-Walker, covariance, Burg, and FOS methods. For CP order = 1, only linear candidate terms up to a maximum output lag L = 10 were allowed. For CP order = 2, both linear and y(n-l1)y(n-l2) candidates were allowed, up to a maximum output lag L = 10.

Similar to the accelerometer case, the stochastic model obtained for Gyro-Y using FOS surpasses the models obtained by the other methods in MSE, in the corresponding position error that would result from the residual errors, and in computation time.
## 6. Conclusions
Inertial sensor errors are the most significant contributors to INS errors. Thus, techniques to model these sensor errors are of interest to researchers. The current state of the art in modeling inertial sensor signals includes low-pass filtering and wavelet denoising techniques, which have had limited success in removing long-term inertial sensor errors.

This paper suggested using FOS to model the MEMS sensor errors in the time domain. The FOS MSE and computational time were compared with those from the Yule-Walker, covariance, and Burg methods. FOS was applied directly to the one-hour raw inertial sensor 200 Hz data without any preprocessing or denoising. The other three traditional methods operated on the denoised version of the same data, after wavelet denoising of up to 4 levels of decomposition was applied.

For both the gyroscope and accelerometer cases, the FOS model surpasses those obtained by the traditional methods. The results demonstrate the advantages of the proposed FOS-based method, including the absence of a need for preprocessing or denoising, lower computation time and MSE, and better performance at a smaller model order. Increasing the cross-product degree for FOS improves model accuracy and lessens position error but increases computation time.
---
*Source: 101820-2013-07-10.xml*
# Design and Fabrication of Diffractive Light-Collecting Microoptical Device with 1D and 2D Lamellar Grating Structures
**Authors:** ChaBum Lee
**Journal:** International Journal of Manufacturing Engineering
(2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/101823
---
## Abstract
This paper presents the optimal design method of a diffractive light-collecting microoptical device and its fabrication by E-beam lithography, fast atom beam etching, and hot-embossing processes. The light-collecting device proposed in the paper is comprised of 9 (3 × 3) blocks of optical elements: 4 blocks of 1D lamellar grating structures, 4 blocks of 2D lamellar grating structures, and a single block of nonpatterned element at the center, which acts as a lens to collect the diffracted and transmitted light from the lamellar grating structures into the focus area. The overall size of the light-collecting device is 300 × 300 μm², and the size of each block was practically designed as 100 × 100 μm². The performance of the 1D and 2D lamellar grating structures was characterized in terms of diffraction efficiency and diffraction angle using a rigorous coupled-wave analysis (RCWA) method, and their geometric parameters (depth, pitch, and orientation) were optimized to achieve a high light-collecting efficiency. The master molds for the optimized structures were fabricated on a Si substrate by E-beam lithography and fast atom beam etching processes. A 100 μm thick patterned polymethyl methacrylate (PMMA) film was then replicated by a hot-embossing process. As a result, the patterned PMMA film collected 63.0% more incident light than a nonpatterned one.
---
## Body
## 1. Introduction
Microoptical devices or hybrid integrated optical devices have been very important in the field of optical application systems, such as optical communication systems, optical information processing systems, and optical sensing systems, to achieve compactness and high performance [1–4]. Laser diodes (LDs) and light emitting diodes (LEDs) have been widely used as light sources in these systems given their compactness, low driving current, and capability of high-speed modulation [4]. The light source transmits through optical fibers in many cases; thus the output beam emitted from the source has far-field radiation angles that require an external lens, such as a collimating lens or a focusing lens. But these lenses are so bulky that it is difficult to build compact collimated or focused light sources for microoptical applications [3–5].

Diffractive optical elements (DOEs) can overcome such problems. Recently, a great deal of research on DOEs has been performed [6–13]. DOEs play an important role in many optical applications, including optical telecommunications components, multiple imaging, light collecting, and spectroscopy, because of their high uniformity, light weight, and miniaturization in size [1–4]. The most popular light-collecting device is the Fresnel lens, as seen in Figure 1. A Fresnel lens is an optical component which can be used as a cost-effective, light-weight alternative to conventional continuous-surface optics. Many fabrication methods have been proposed to produce Fresnel lenses. Fujita et al. fabricated a stepwise Fresnel lens on a Si substrate by using E-beam lithography [9, 10]. Yan et al. fabricated a continuous Fresnel lens on a germanium (Ge) substrate by using ultraprecision machining [11], and Joo et al. fabricated a continuous Fresnel lens on a polymeric substrate by using ultraprecision machining [12]. However, MEMS-based fabrication methods such as E-beam lithography, photolithography, and holography are limited in machining circularly curved facets and have low material removal efficiency [6–10]. On the other hand, ultraprecision machining-based fabrication methods rely on more complicated mechanisms depending on the degree of the size effect due to the small ratio of depth of cut to the tool edge radius. In addition, both MEMS-based and ultraprecision machining-based methods are expensive [11, 14]. An evident disadvantage of using a lens with grooves is the possibility of losing light due to incidence on the draft facet. Making the facet perfectly vertical (i.e., perpendicular to the incident plane of light) works to minimize the optical loss (draft loss) [9–12]. However, there is little work regarding thin film-type Fresnel lenses for microoptics light-collecting applications.

Figure 1
Schematics of a general Fresnel lens: d depth of lens, $f_d$ focal length, and R radius of curvature.

Keeping pace with these interests in DOEs, many analysis methods have been introduced [6, 12, 13, 15–17]. Scalar diffraction theory is one of the widely used methods for the design and analysis of DOEs. The Fresnel or Fraunhofer diffraction integrals are commonly employed, and these integrals are generally calculated with the fast Fourier transform (FFT). Although this approach is relatively simple and agrees with experimental results to some extent, its applicability is limited by the diffraction limit [15–17]. When the grating size becomes so small that its scale is comparable to or less than the wavelength, the simplified approximations made in the scalar method are not valid, and the polarizing nature of the light cannot be ignored in this regime. Therefore, a more accurate, fully electromagnetic analysis is required. Rigorous coupled-wave analysis (RCWA) provides the means to solve these problems and enables characterizing the diffraction behavior of the DOEs with respect to the TE (electric field perpendicular to the plane of incidence) and TM (magnetic field perpendicular to the plane of incidence) polarization directions.

Here, a thin film-type polymethyl methacrylate (PMMA) diffractive light-collecting microoptical device is proposed. To minimize the possibility of lost light due to incidence on the draft facet, the facets were designed perfectly vertical. The proposed device was designed by RCWA and fabricated by E-beam lithography, fast atom beam (FAB) etching, and hot-embossing processes. The optical testing method and its performance are discussed in the following.
## 2. Principles
The configuration of the light-collecting device is presented in Figure 2, where L is the length of a single block, F is the filling factor, $P_1$, $P_2$, and $P_3$ are the periods of each block, d is the depth of the lens, $f_d$ is the focal length, and $n_1$ and $n_0$ are the refractive indices of PMMA and air. It is comprised of 9 (3 × 3) blocks of optical elements: 4 blocks of 1D lamellar grating structures, 4 blocks of 2D lamellar grating structures, and a single block of nonpatterned element at the center, which acts as a lens to collect the diffracted and transmitted light from the lamellar grating structures into the center of the structure. The overall size of the light-collecting device is 300 × 300 μm², and the size of each block was practically designed as 100 × 100 μm². As seen in Figure 2(b), the incident light is diffracted from the 8 blocks of 1D and 2D lamellar grating structures, and each diffracted light will be focused on the projected nonpatterned area at $f_d$.

Figure 2
Schematics of the proposed light-collecting device: (a) overall layout: 9 (3 × 3) blocks of optical elements: 4 blocks of 1D lamellar grating structures, 4 blocks of 2D lamellar grating structures, and a single block of nonpatterned element at the center; (b) cross-sectional view: d depth of lens and $f_d$ focal length.

The pitches of the 1D and 2D grating structures were determined by [5]
$$n_0 \sin(\theta_0) + n_1 \sin(\theta_1) = \frac{m\lambda}{P}, \tag{1}$$
where m is the diffraction order. The pitches of the 1D grating structures, $P_2$ and $P_3$, were practically set to 2.0 μm, which corresponds to a diffraction angle of ±12.6°. In accordance with the pitch of the 1D grating structures, the pitch of the 2D grating structures was determined so as to make the incident light diffract into the center area, giving ±17.7°. The phase-matched depth, d, was optimized using RCWA under the optical and geometrical properties summarized in Table 1. Diffraction efficiencies were calculated by RCWA with respect to the grating depth, as seen in Figure 3. The 0th-order diffraction efficiencies reached their minimum at depths of 0.63 μm and 0.66 μm for TE and TM waves, respectively, while the ±1st-order diffraction efficiencies reached their maximum at these depths. The ±1st-order diffraction efficiencies for the two polarization directions, $\eta_{1,\text{TE}}$ and $\eta_{1,\text{TM}}$, were estimated to be 38.3% and 38.7% at depths of 0.63 μm and 0.66 μm, respectively. Although a small difference existed between the diffraction efficiencies for TE and TM waves, it is small enough that the depth was set to 0.65 μm. Diffraction angles were calculated to confirm the estimated angles using the fast Fourier transform, as seen in Figure 4, which showed good agreement with the estimated ones. From the pitches of the grating structures, the focal length was calculated to be 448 μm.
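As a quick numerical check of these design values, the sketch below evaluates (1) at normal incidence ($\theta_0 = 0$, so $\sin\theta_1 = m\lambda/(n_1 P)$) and estimates the focal length from the block geometry, assuming each diffracted beam must traverse the 100 μm offset between an adjacent block and the device center; both assumptions are ours, not stated explicitly in the paper:

```python
# Check of the grating design: diffraction angles from (1) at normal incidence
# and a geometric focal-length estimate (adjacent block centers assumed to sit
# 100 um from the device center; diagonal 2D blocks at 100*sqrt(2) um).
import math

lam, n1, m = 0.65, 1.49, 1                  # wavelength [um], PMMA index, order

theta_1d = math.degrees(math.asin(m * lam / (n1 * 2.0)))    # P = 2.0 um
theta_2d = math.degrees(math.asin(m * lam / (n1 * 1.428)))  # P = 1.428 um
f_1d = 100.0 / math.tan(math.radians(theta_1d))

print(f"theta_1D = {theta_1d:.1f} deg, theta_2D = {theta_2d:.1f} deg")
print(f"f_d ~ {f_1d:.0f} um")   # ~447 um, close to the 448 um quoted above
```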
Table 1
Optical and geometrical conditions of 1D and 2D DOEs for optical analysis.

| Type | λ [μm] | $n_1/n_0$ | P [μm] | F | D [μm] |
| --- | --- | --- | --- | --- | --- |
| 1D | 0.65 | 1.49 | 2.0 | 0.5 | 0.65 |
| 2D | 0.65 | 1.49 | 1.428 | 0.5 | 0.65 |

Figure 3
Diffraction efficiency curve with respect to the grating depth.

Figure 4
The estimated diffraction angles of the 1D and 2D DOEs: (a) θ±1st = ±12.6° for 1D; (b) θ±1st = ±17.7° for 2D.
## 3. Fabrication
The diffractive light-collecting microoptical device was fabricated as a mold master on a Si substrate by E-beam lithography (ELS 3700, Elionix) and FAB (FAB 60ML, Abara) processes, and the PMMA replica was duplicated by a hot-embossing process. The E-beam lithography and FAB processes are shown in Figure 5. An n-type, 500 μm thick Si substrate (20 mm by 20 mm) was used for E-beam direct writing. For the fabrication, 1000 nm thick photoresist (PR; ZEP 520, Nippon Zeon Co., Ltd.) was coated on the silicon substrate and prebaked at 180°C for 3 min, as seen in Figure 5(b). After that, E-beam direct writing was used to fabricate the PR-patterned 1D and 2D DOEs; the acceleration voltage, beam current, and dose time were 30 kV, 10 pA, and 35 μs, respectively, which corresponds to a dose of 56 μC/cm². Next, the E-beam-exposed silicon substrate was developed in ZEP 520 developer for 5 min, washed in a rinse-filled beaker (methyl isobutyl ketone) for 30 sec in a water pool at a temperature of 23°C, and then postbaked for 3 min at 110°C. As seen in Figure 5(d), the PR-patterned 1D and 2D DOEs (300 μm by 300 μm) on the silicon substrate were fabricated. FAB etching has good directionality and a constant etching ratio (1:1) between the silicon and the PR [10, 13, 18, 19]. The FAB etching rate, 21.0 nm/min, was calibrated under the etching condition shown in Figure 6; at this rate, the 0.65 μm design depth corresponds to roughly 31 minutes of etching. The DOEs with a depth of 0.65 μm and periods of 2.0 μm and 1.43 μm were successfully fabricated (Figure 5(e)); both fabricated 1D and 2D DOEs were imaged by scanning electron microscopy, and the etched depth was measured by a surface profiler (Tencor). These results are shown in Figure 7. The etched depth of the Si substrate was approximately 0.65 μm, the same as the design value. Last, the patterned Si substrate was hydrophobically coated to make mold release easier. The 100 μm thick patterned PMMA film was replicated from the patterned Si substrate as a master mold by a hot-embossing operation. The optimal hot-embossing condition was investigated across its process sequence: preheating, pressing, cooling, and demolding [10, 12, 13]. The hot-embossing conditions were set as follows: molding temperature 150°C, applied pressure 0.6 MPa, pressing time 5 min, and demolding temperature 25°C.

Figure 5
Fabrication process of DOEs: (a) silicon substrate cleaning, (b) PR coating, (c) E-beam lithography, (d) developing, (e) FAB etching, and (f) hot-embossing molding.
Figure 6
Calibration of the FAB etching rate: 21.0 nm/min.

Figure 7
Scanning electron micrographs of the fabricated 1D and 2D DOEs and the depth measured by the surface profiler: (a) horizontal 1D DOEs (yellow structures in Figure 2), (b) vertical 1D DOEs (gray structures in Figure 2), (c) 2D DOEs, and (d) the depth of each DOE.
## 4. Results
The measurement setup is shown in Figure 8. An LD (λ = 0.65 μm, Neoark) with a Gaussian intensity distribution was used as the light source, and an optical fiber was used to deliver the light from the LD. The fabricated device was placed after the optical fiber, and the light-collecting result was measured by the CCD while moving the objective lens upward and downward manually. A digital dial gauge was used to measure the moving height. Measurements were recorded every 50 μm from 0 to 700 μm, and the images were captured by the CCD. The intensity of each image was estimated on a gray scale, and the normalized intensity ratio was calculated with respect to focal distance, as shown in Figure 9. As a result, the maximum intensity was measured at a focal distance of 450 μm and was increased by 63.0% compared with the intensity of the nonpatterned area only, in good agreement with the estimated focal length of 448 μm.

Figure 8
Experimental setup for focal length measurement.

Figure 9
Measurement result of the intensity collected at each focal distance.
## 5. Conclusion
A new microoptical device for light collecting was proposed and fabricated. The grating structures were optimized by RCWA and fabricated by E-beam lithography and fast atom beam etching processes. The mold master fabricated on the Si substrate was replicated into a thin film-type (100 μm) PMMA replica by hot embossing. From the measurement results, the device was successfully fabricated, and the shape of the grating structures, pitch, filling factor, and depth were the same as designed. The maximum intensity was measured at a focal distance of 450 μm and was increased by 63.0% compared with the intensity of the nonpatterned area only, in good agreement with the estimated focal length of 448 μm. As a result, the proposed light-collecting lens system showed high performance, including high uniformity, light weight, and miniaturization in size, for LD light-focusing applications. This design approach is expected to be applied to various microoptical applications such as color mixing/filtering, light collimation, and light focusing.
---
*Source: 101823-2014-06-05.xml*
# Diagnosis and Treatment of Lower Motor Neuron Disease in Australian Dogs and Cats
**Authors:** A. M. Herndon; A. T. Thompson; C. Mack
**Journal:** Journal of Veterinary Medicine
(2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1018230
---
## Abstract
Diseases presenting with lower motor neuron (LMN) signs are frequently seen in small animal veterinary practice in Australia. In addition to the most common causes of LMN disease seen worldwide, such as idiopathic polyradiculoneuritis and myasthenia gravis, there are several conditions presenting with LMN signs that are peculiar to the continent of Australia. These include snake envenomation by tiger (Notechis spp.), brown (Pseudonaja spp.), and black snakes (Pseudechis spp.), tick paralysis associated with Ixodes holocyclus and Ixodes cornuatus, and tetrodotoxins from marine animals such as puffer fish (Tetraodontidae spp.) and the blue-ringed octopus (Hapalochlaena spp.). The wide range of differential diagnoses, along with the number of etiology-specific treatments (e.g., antivenin, acetylcholinesterase inhibitors) and highly variable prognoses, underscores the importance of a complete physical exam and comprehensive history to aid in rapid and accurate diagnosis of LMN disease in Australian dogs and cats. The purpose of this review is to discuss the diagnosis and treatment of LMN diseases seen in dogs and cats in Australia.
---
## Body
## 1. Introduction, History, and Physical Examination
Lower motor neuron disease (LMND) broadly refers to conditions that preferentially affect the motor nerve bodies originating in the ventral horn of the spinal cord grey matter, their axons, the neuromuscular junction, and the muscle fibre. Disruption of motor unit function results in diminished motor function (e.g., paresis or paralysis) of the affected region, flaccid muscle tone, and diminished or absent reflex arcs. Depending on the region and type of disease, this change in function may be regional, such as the megaoesophagus seen in focal myasthenia gravis, or generalised, such as the tetraparesis with pharyngeal and respiratory muscle paralysis that can be seen in tick paralysis cases. Lower motor neuron disease is the result of a wide variety of underlying disease pathologies. Immune-mediated targeting of nerves or the motor endplate is seen in myasthenia gravis and idiopathic polyradiculoneuritis [1–3].

The importance of a good history cannot be overemphasized. As certain causes of LMND in Australia show a strong regional distribution (e.g., tick paralysis), understanding the lifestyle and recent travel history of the pet may assist in shortening the list of differential diagnoses [4–9]. In addition to travel history, other important questions include the following: recent history of antigenic stimulation (e.g., vaccination); use of acaricides; onset, duration, and progression of clinical signs; sighting of ticks or snakes in the pet's environment; history of similar previous events; and history of raw or undercooked animal products in the diet.

Neurolocalisation is a critical first step in identifying a disease with a LMND component. A summary of clinical abnormalities on neurologic exam and their relationship to LMND is included in Table 1. The clinical findings of a good neurologic exam are frequently enough to localise the neurologic lesion to the brain, brain stem, spinal cord segments, or lower motor neuron. The common theme among all LMND abnormalities is the failure of the motor unit to function normally, despite normal sensory input.

Table 1
Summary of abnormal findings on a clinical neurological exam and whether these findings are consistent with lower motor neuron disease.
| Neurological exam abnormality | Typical of LMND |
| --- | --- |
| Seizures; altered mentation; pacing; head pressing; head tilt; head turn | NO |
| Gait abnormalities: short gait, stilted gait, sits frequently | YES |
| Ataxia (normal muscle tone, abnormal movement) | NO |
| Hypermetria | NO |
| Lameness | NO |
| Tires easily / weakness after exercise | YES |
| Proprioception/postural reactions abnormal or absent∗∗ | NO |
| Decreased muscle tone and/or muscle atrophy | YES |
| Spinal reflexes (patellar, triceps, perineal, sciatic) diminished or exhaustible with repetition | YES, although perineal reflexes and motor function to the tail may be preserved |
| Reflexes clonic or exaggerated | NO |
| Nociception diminished or absent | NO |
| Dysphonia; dysphagia | YES |
| Spinal pain | NO (rare with acute PRN) |
| Cranial nerves: bilateral abnormalities in PLR, facial nerve weakness, diminished swallow or gag | Not typical of ALL LMND, but common with tick paralysis |
| Cranial nerves: unilateral abnormalities | NO |
| Megaoesophagus | Not typical of all LMND, but frequently seen with MG and tick paralysis |

∗∗Only evaluate when the patient is properly supported when reactions are tested.

The clinical hallmark of lower motor neuron disease is skeletal muscle weakness, although there are examples of smooth and cardiac muscle involvement associated with diseases affecting the lower motor neuron [10]. LMND weakness is infrequently global, that is, involving all muscle groups equally. In most instances there will be a progression of signs, with the pelvic limbs being affected first, followed by the thoracic limbs, oesophagus, and then the cranial motor nerves to the face, pharynx, and larynx [3, 11–15]. Occasionally, disease is limited to a specific region, such as the oesophagus in myasthenia gravis [3, 11, 13, 16]. Even less commonly, LMND signs may first be seen in the thoracic limbs [11]. Gait abnormalities such as goose-stepping, crossing limbs, spasmodic movements, or head tilts and turns are not consistent with LMND, and disorders of the CNS should be considered.

It is important to keep in mind that in LMND the sensory component of the nervous system is intact. Therefore, patients with LMND have intact nociception, proprioception, and spatial awareness. The skeletal muscle weakness may have the appearance of ataxia, but with support these patients will attempt normal responses to postural reactions and positioning. Postural reaction and nociceptive testing are important tools for differentiating LMND from spinal or brain disease. Polyneuropathies affecting both motor and sensory pathways may present with ataxia, and this may be seen along with classical LMN signs. Another important discriminator is that alterations in mentation and/or seizures are always associated with forebrain disease and are not consistent with LMND as a sole aetiology.

Assessment of ventilation parameters can be critical in advanced or rapidly progressive LMND. Patients may be tachypneic, but due to weakened diaphragm and intercostal muscles the patient may be seriously underventilated. Dogs with tick paralysis often demonstrate a classic expiratory “grunt.” In the absence of severe pulmonary disease, identifying hypercapnia on a venous blood gas sample is highly suggestive of hypoventilation and is an indication for ventilatory support. Lower motor neuron disease patients unable to maintain adequate ventilation require rapid intervention, possibly intubation with intermittent positive pressure ventilation or maintenance on a mechanical ventilator.

Dysphonia, dysphagia, and megaoesophagus are characteristics of some LMN diseases, especially tick paralysis and myasthenia gravis [15, 17–21]. Ptyalism, a slow or absent gag, a slow or absent swallow on laryngeal palpation, and regurgitation are all consistent with pharyngeal, laryngeal, and oesophageal disease. As normal laryngeal function is required for the reflex movement of the epiglottis that protects the airway, patients with abnormal laryngeal function or passive regurgitation are at extremely high risk of aspiration. It is important that any patient with evidence of dysphagia or megaoesophagus be maintained in sternal recumbency with the head elevated at all times. As many LMND patients are unable to support their heads, stacks of towels or pillows may be required to keep the patient in an upright position.

Nutritional requirements of dogs and cats with LMND will vary depending on the severity of disease and its effect on the animal's ability to prehend food. In cases of focal or pelvic limb weakness, appetite, drinking, and eliminations are frequently normal.
However, patients with laryngeal paralysis or megaoesophagus may even benefit from a gastric or nasogastric feeding tube in order to provide nutrition in the face of an inability to prehend food. Any patient that is unable to eat within three days or where the prognosis is such that they are unlikely to be able to safely prehend food within the next several days or weeks is a candidate for a supportive enteral feeding catheter, for example, a percutaneous gastric feeding tube.The degree and duration of supportive care required for animals affected by LMN diseases completely depend on the severity of disease and expected duration of clinical signs. Specific diseases are discussed in the next section. It follows logic that the more support a patient initially requires in hospital, the more prolonged their recovery may be. Cases of polyradiculoneuritis, for instance, may require days to weeks in hospital followed by weeks or even months of home care to fully resolve [11, 22, 23]. Some cases of tick paralysis respond rapidly to tick removal and/or antiserum and may improve from nearly lateral recumbency to normal ambulation within 24 to 48 hours [19]. The prognosis associated with the various LMND is discussed in the next section.
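To make the ventilation assessment above concrete, the following minimal sketch encodes it as a triage check. The 50 mmHg PvCO2 cut-off is an assumed illustrative threshold, not a value given in this review, and the function name is hypothetical; any real case requires interpretation in full clinical context.

```python
# Illustrative sketch only: flags probable hypoventilation in a suspected
# LMND patient from a venous blood gas, following the reasoning above.
# The 50 mmHg cut-off is an assumed example threshold, not from this review.

HYPERCAPNIA_CUTOFF_MMHG = 50.0  # assumed illustrative threshold

def needs_ventilatory_support(pvco2_mmhg: float,
                              severe_pulmonary_disease: bool) -> bool:
    """Return True when venous hypercapnia suggests hypoventilation."""
    if severe_pulmonary_disease:
        # Hypercapnia may reflect the lung disease itself rather than weak
        # respiratory muscles, so the venous gas alone is not conclusive.
        return False
    return pvco2_mmhg > HYPERCAPNIA_CUTOFF_MMHG

# Example: a tachypneic tick-paralysis dog with a PvCO2 of 58 mmHg and no
# primary lung disease would be flagged for ventilatory support.
print(needs_ventilatory_support(58.0, severe_pulmonary_disease=False))  # True
```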
## 2. Idiopathic Polyradiculoneuritis
Idiopathic polyradiculoneuritis, or acute canine polyradiculoneuritis (ACP), is an ascending lower motor neuron paralysis first identified in dogs in the southern United States after exposure to raccoon saliva by way of a bite wound [24–26]. Identical signs were described in dogs not exposed to raccoon bites, and in these dogs the condition eventually became known as idiopathic polyradiculoneuritis [11]. However, the original moniker of “Coonhound paralysis” still persists, even though it only describes a specific subset of cases.

Polyradiculoneuritis is an immune-mediated disease that can be triggered by a wide variety of stimuli, including vaccines, raccoon saliva, and some infectious agents (including Toxoplasma gondii) [25, 27–30]. The precise pathophysiology is not known for all cases of ACP, but immune targeting of gangliosides in nerve bodies and axons has been identified. Anti-ganglioside autoantibodies are also seen in the human disease Guillain-Barré syndrome, and canine polyradiculoneuritis is regarded as a homologue of Guillain-Barré syndrome in people [31]. There is no age, breed, or sex predilection associated with the development of ACP.

Diagnosis of ACP is made based on presenting clinical signs and ruling out other causes of LMND. A complete history and thorough tick-check are a necessary first step in ruling out other common causes of LMND such as snake bite, tick paralysis, or myasthenia gravis. A Snake Venom Detection Kit may be necessary to help rule out snake envenomation if there is a high enough index of suspicion. Acetylcholine receptor antibody titres are valuable in attempting to rule out acquired myasthenia gravis. Toxoplasma gondii serology should be considered in all cases of ACP diagnosed in Australia [29].

Clinical signs of ACP typically begin in the pelvic limbs and slowly (over days) progress to involve the thoracic limbs and cervical muscles. In severe cases, motor function to the larynx and facial nerves is affected, and dysphonia and an absent swallow are possible [22, 23, 28, 32–34]. Paralysis of respiratory muscles is uncommon but possible. Motor function to the perineum and tail is typically spared, and ACP dogs are able to wag their tails and perform normal, intentional urination and defecation [11]. ACP patients are typically bright and alert. Megaoesophagus is not a characteristic of ACP in dogs; hence, the absence of megaoesophagus may support a diagnosis of ACP.

Although ACP is immune-mediated, immunosuppression is not advised. There have been no controlled trials investigating this in veterinary patients, and this advice is based on strong evidence in the human literature against employing corticosteroids in the treatment of Guillain-Barré syndrome [35]. The use of human intravenous immunoglobulin (IVIG) has been investigated in one study, but the benefits of immunoglobulin therapy were not clear [22].

The mainstay of treatment of ACP is supportive care, including nutritional support and physical therapy. ACP patients have normal sensory input, and attention should be paid to providing adequately soft bedding and regular repositioning to avoid sores and discomfort. Every effort should be made to maintain sternal recumbency to facilitate easier eating and interaction with the environment. It is also important to manage owner expectations in cases of ACP. Complete recovery is common (barring significant complications such as aspiration pneumonia) but can be prolonged.
Most patients will gradually recover within a few weeks, but complete recovery may require several months of supportive nursing care [11, 23–25, 27, 28, 33].
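Because ACP is a diagnosis of exclusion, the rule-out sequence described above can be summarised as a short decision sketch. The function name, dictionary keys, and return strings below are illustrative assumptions, not a published protocol, and a real workup weighs all findings together rather than applying them in strict sequence.

```python
# Minimal sketch of the ACP rule-out workflow described above; labels are
# illustrative only. ACP remains a clinical diagnosis of exclusion.

def acp_workup(findings: dict) -> str:
    """findings maps test names to True (positive) / False (negative)."""
    if findings.get("tick_found"):                    # history and thorough tick-check
        return "treat as tick paralysis"
    if findings.get("svdk_positive"):                 # Snake Venom Detection Kit
        return "treat as snake envenomation"
    if findings.get("achr_titre_positive"):           # acetylcholine receptor antibodies
        return "treat as myasthenia gravis"
    if findings.get("toxoplasma_serology_positive"):  # recommended in Australia [29]
        return "investigate Toxoplasma gondii"
    return "presumptive ACP: supportive care, physiotherapy, manage expectations"

print(acp_workup({"tick_found": False, "svdk_positive": False,
                  "achr_titre_positive": False,
                  "toxoplasma_serology_positive": False}))
```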
## 3. Myasthenia Gravis
Myasthenia gravis (MG) is the result of immune-mediated targeting of the sarcolemmal acetylcholine receptors in the neuromuscular junction. The resulting blockade and associated remodelling cause insensitivity to acetylcholine, a failure of excitatory signalling to propagate to the myocyte, and a flaccid paresis/paralysis [3].

Myasthenia gravis can be seen at any age, but the age at onset of clinical signs is distributed largely along two peaks, one in young adulthood between two and four years of age and a second later in life between the ages of nine and thirteen [18, 21]. In dogs, MG is associated with a mediastinal mass in approximately 3.4% of cases, whereas in cats the proportion is markedly higher, with around half (52%) of all MG cats in one case series having a concurrent diagnosis of a mediastinal mass [36, 37].

There are known breed predispositions for MG in both dogs and cats. The Somali and Abyssinian breeds appear to be overrepresented among cats, and the Akita, German Shepherd dog, German Shorthaired Pointer, Newfoundland, and most terrier breeds are overrepresented among dogs [36–39].

Presenting clinical signs can vary from mild to severe and focal to generalised, and include weakness or paresis, collapse, megaoesophagus, and dysphonia. Signs are commonly more noticeable in the pelvic limbs. A frequent owner complaint is exercise intolerance that deteriorates into a short, stilted gait and resolves after rest. In one study, approximately 43% of myasthenic patients did not have clinically detectable limb weakness at the time of diagnosis [37]. Some dogs and cats will present with only megaoesophagus or dysphonia and no evidence of limb weakness [16, 20, 21, 39].

Myasthenia gravis has been reported as a paraneoplastic syndrome associated with numerous tumours. Most commonly associated with MG are mediastinal masses, especially thymoma, as mentioned above. However, various other sarcomas and haematopoietic tumours have also been associated with the onset of MG [40–43].

A definitive diagnosis of MG requires demonstration of a positive acetylcholine receptor antibody titre, greater than 0.6 nmol/L in dogs and 0.3 nmol/L in cats [20]. Occasionally, patients presenting early in the course of disease may have titres within the reference interval [44]. These patients may develop a positive titre if rechecked in 2-3 weeks. A very small percentage of tetraparetic or fulminant MG patients may have a negative titre, but this is reported as less than 2% [20, 45]. At the time of this publication, the only laboratory running ACh receptor antibody titres is in North America, and the turn-around time for testing a sample from an Australian patient is close to three weeks. Therefore, a presumptive diagnosis is usually required while awaiting the definitive diagnosis. Because of the variety of clinical presentations of acute MG, it can be difficult to differentiate acute, fulminant MG from other causes of acute LMND based on presenting signs alone. For instance, as many as 10% of acute MG patients may not present with megaoesophagus. In such a case, fulminant MG may be difficult or impossible to differentiate from ACP.

In cases where snake envenomation and tick paralysis are less likely (or have been ruled out) and the index of suspicion for MG is high, the clinical response to a test dose of edrophonium or pyridostigmine may be a good diagnostic option. Edrophonium has been unavailable on the Australian market for some time.
However, if available, it provides an immediate, but very short-lived, reversal of clinical signs in most MG cases of mild to moderate severity. Edrophonium is dosed at 0.11-0.22 mg/kg IV once for dogs and 0.25-0.5 mg per cat. The effect is typically seen within seconds and lasts less than two minutes. Alternatively, neostigmine bromide can be administered at a dose of 0.02 mg/kg given slowly IV. The effects of a single dose of neostigmine are not as dramatic as those seen with edrophonium, but a good clinical response (increased strength, ability to walk) within 15 to 30 minutes would support a diagnosis of MG. Both drugs must be used with caution in cats, as they are more sensitive to the cholinergic side-effects than dogs. Pretreatment with atropine is recommended in all cats, and atropine should be kept on hand in the event of an adverse reaction in dogs shortly after administration. It is important to remember that a positive response to an acetylcholinesterase inhibitor is not pathognomonic for MG, as patients with other diseases of the neuromuscular junction may also show a transient response.

Treatment of MG in dogs and cats focuses on increasing the amount of acetylcholine available at the neuromuscular junction by inhibition of the acetylcholinesterase enzyme. Pyridostigmine bromide (Mestinon®) is dosed at 1-3 mg/kg by mouth every eight to twelve hours to start, and the dose may be slowly increased over a period of weeks if necessary to see clinical improvement. Side-effects (nausea, diarrhoea, salivation, and lacrimation) are usually mild and resolve with time, but they are occasionally significant and require dose adjustments or concurrent administration of atropine. Nutritional support may be a challenge in MG patients with concurrent megaoesophagus. Elevated feeding, slurry feeding, and even the use of gastric feeding tubes may be necessary to aid in the passive movement of food into the stomach and to limit the possibility of an aspiration event.

Immunosuppression is not routinely necessary in mild or focal cases of MG, and the myopathy-associated weakness seen with high-dose steroid administration may only complicate the disease [46]. However, in severe or refractory cases of MG, immunomodulatory therapy may need to be considered. A corticosteroid such as prednisolone dosed at 0.5 mg/kg/day is usually an appropriate starting point in dogs or cats [20]. The dose is conservative at first to avoid side-effects such as muscle weakness but can be gradually increased over a period of days and weeks, if needed, to immunosuppressive dosing of around 1.5 mg/kg/day. If enteral administration of medication is complicated by megaoesophagus, parenteral dexamethasone at 0.15 mg/kg/day may also be considered. Cyclosporine or mycophenolate mofetil may be an alternative in patients where steroid use is contraindicated (diabetic or septic/pneumonia cases) [46, 47]. Azathioprine can be an excellent adjunct therapy or even a good long-term monotherapy, but its slower onset of action may limit its utility in acute or fulminant cases of MG [20, 48]. Additionally, azathioprine is not appropriate for use in cats with MG, as it is associated with neuromuscular blockade in this species. In human cases of MG, plasmapheresis and intravenous immunoglobulin treatments are associated with clinical improvement, but these therapies are not available, practical, or appropriate for most canine or feline patients [49–52].

Spontaneous remission of disease is possible in cases of MG in dogs.
In fact, some reports suggest nearly 89% of dogs may experience spontaneous remission [53]. Remission does not appear to be a characteristic of the disease in cats [36]. Clinical remission of MG in dogs may occur within as little as one month but on average occurs around six months after diagnosis [53].
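The diagnostic cut-offs and test doses quoted above lend themselves to a worked example. The sketch below applies the published titre thresholds (0.6 nmol/L in dogs, 0.3 nmol/L in cats) and the edrophonium and neostigmine dose ranges to a hypothetical 20 kg dog; the function names are illustrative, and this is an illustration of the arithmetic rather than dosing advice.

```python
# Worked example of the MG figures quoted above; illustrative only.

ACHR_TITRE_CUTOFF = {"dog": 0.6, "cat": 0.3}  # nmol/L, per [20]

def titre_positive(species: str, titre_nmol_per_l: float) -> bool:
    """A titre above the species cut-off supports a diagnosis of MG."""
    return titre_nmol_per_l > ACHR_TITRE_CUTOFF[species]

def edrophonium_dose_mg(weight_kg: float) -> tuple[float, float]:
    """Canine test-dose range: 0.11-0.22 mg/kg IV once."""
    return (0.11 * weight_kg, 0.22 * weight_kg)

def neostigmine_dose_mg(weight_kg: float) -> float:
    """0.02 mg/kg given slowly IV."""
    return 0.02 * weight_kg

# Hypothetical 20 kg dog with a titre of 0.9 nmol/L:
print(titre_positive("dog", 0.9))   # True: 0.9 > 0.6 nmol/L
print(edrophonium_dose_mg(20.0))    # (2.2, 4.4) mg IV once
print(neostigmine_dose_mg(20.0))    # 0.4 mg slow IV
```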
## 4. Tick Paralysis
Tick paralysis is not exclusive to Australia. The disease is seen very infrequently in the Pacific Northwest and Atlantic Southeast of the United States and rarely in Europe [15, 54, 55]. By comparison, tick paralysis is fairly common in Australia, with one study describing over 3,400 cases in eastern Australia over just two tick seasons, from September 2010 to January 2012 [56]. The disease is associated with toxins produced by the salivary glands of hard-bodied ticks in the genus Ixodes, specifically Ixodes holocyclus, I. coronatus, and I. neumanni [4, 7, 15, 57–59]. This is in contrast to tick paralysis in the United States, which is associated with the genus Dermacentor [54, 55]. The exact mechanism of holocyclotoxin-induced LMND is incompletely understood. The effect of the toxin appears to be focused on the presynaptic surface of the motor endplate and appears to block calcium influx, thereby preventing depolarisation of nerve endings and propagation of the signal across the neuromuscular junction [12, 15].

Cases of tick paralysis show a very strong geographical and seasonal distribution. The Ixodes ticks are distributed along the eastern and south-eastern coasts of Queensland, New South Wales, and Victoria, roughly following the native habitat of their preferred bandicoot and possum hosts [7, 15, 17, 60–63]. Only the female tick has mouthparts long enough to pierce and hold fast; therefore, only females feed for long enough to produce tick paralysis. This helps explain much of the seasonality of tick paralysis, as the numbers of female ticks seeking large meals in preparation for egg laying peak in the spring and early summer months. Tick paralysis cases can be seen year-round, but nearly three-quarters of all cases occur between September and December, and a further 14% of cases occur over the summer months [17, 56].

Tick feeding behaviour is not a simple “attach and start feeding” process. Over the first few days the tick will draw blood in and out of her mouthparts as she prepares both herself and the host’s local environment for a larger blood meal. Over these several days, the amount of holocyclotoxin the salivary glands produce increases [15]. For this reason, ticks typically must remain attached for several days before disease is seen, with paralysis starting on the third day of feeding and progressing on subsequent days. This also highlights the importance of daily tick-checks and rapid-kill acaricides as effective preventative measures, as they prevent ticks from remaining attached long enough for clinical disease to manifest. There are currently several acaricides on the market in Australia with documented rapid kill of Ixodes species [64–67]. Although no definitive studies have been published to date, early evidence supports a possible effect of isoxazoline parasiticides in decreasing the incidence of tick paralysis in Australia [68].

Definitive diagnosis of tick paralysis is almost always made based on finding an engorged tick or a recent “crater,” the site where a tick was recently attached. When tick paralysis is a likely differential, it is advised to clip all hair on the body and perform a thorough search for embedded ticks. Particular attention must be paid to the head and neck, the thorax, areas of skin folds (axilla, vulva, and groin), and the interdigital spaces [63]. Successful treatment and a positive outcome are impossible unless all ticks attached to the patient are identified and removed.
It is not necessary to find an engorged tick, as the offending tick may be early in her feeding or may have finished her feed and detached prior to the onset of significant clinical signs. Finding any tick, or a “crater” from attachment, is diagnostic. It is common practice to apply a topical acaricide, such as permethrin or fipronil, to kill any ticks that may have been missed during the search. These treatments do not substitute for a comprehensive search and tick removal, as killing an already attached and feeding tick is not as effective at resolving clinical signs as mechanically removing it.

The clinical signs of tick paralysis commonly start as an ascending limb paralysis, first noted in the pelvic limbs and eventually involving the thoracic limbs. Involvement of the larynx and oesophagus, as well as paralysis of the facial nerve, is possible as the disease progresses [11, 15, 56, 63]. Advanced cases, particularly in cats, may show dilated and unresponsive pupils as the oculomotor nerve becomes involved [62]. Asymmetry in presenting signs is occasionally encountered, with the thoracic limbs more affected than the pelvic limbs or the right and left cranial nerves asymmetrically affected [14]. A clinical scoring system is commonly used to standardise the assessment of tick paralysis patients [17] (Table 2). This scoring system grades motor function (mild paresis through to lateral recumbency, graded 1-4) as well as respiratory signs (no respiratory problems through to severe distress and cyanosis, graded A-D). Use of this scoring terminology allows efficient communication of disease status between clinicians. Additionally, accurate staging can assist in formulating a prognosis for recovery, as discussed later in this section [17, 56, 62, 63, 69].
Table 2: Clinical scoring system used to standardise the clinical severity of tick paralysis in dogs and cats. Adapted from Atwell et al., 2001.

| Neuromuscular score | Description | Respiratory score | Description |
| --- | --- | --- | --- |
| 1 | Normal or mild weakness and incoordination | A | Normal |
| 2 | Ambulatory but with obvious weakness | B | Increased respiratory rate, but normal effort |
| 3 | Unable to stand, but can right self | C | Restrictive breathing pattern, gagging, retching |
| 4 | Unable to right self, moribund | D | Expiratory grunt, dyspnoea, cyanosis |

Treatment of tick paralysis requires removal of the offending tick (as described above), supportive care of the patient, and, in some patients, administration of tick antiserum (TAS). There are no clear-cut guidelines for when TAS is indicated in cases of tick paralysis. In one nationwide retrospective study, TAS was used in less than 2% of all cases [56]. This study did not report the clinical tick scores of these cases, and it is unknown whether the low rate of TAS administration was due to a predominance of mild cases or to other reasons such as financial constraints. This is in stark contrast to a recent retrospective cohort study of 2077 cases of tick paralysis over an eight-year period, in which TAS was administered in 95% of the 1742 feline cases where 5-day mortality was known [63]. This study also reports a 4-fold reduction in the risk of death when TAS was administered as part of comprehensive therapy. The authors offer the loose guideline that TAS administration should be discussed with the owner and considered in all tick paralysis patients regardless of stage. However, the risks of TAS may outweigh the benefits in certain patients (e.g., a cat with mild ataxia or a previous history of TAS exposure).

The risks of TAS administration include anaphylactic reactions and the Bezold-Jarisch (BJ) reflex [56, 70]. The BJ reflex is a vagally mediated response secondary to direct chemical stimulation of cardiac receptors. Because of this response, premedication with atropine prior to administration has been advocated; however, this recommendation has changed over time, and it is now common for practitioners to use no premedication prior to administration of TAS [56, 63, 70]. Adverse reactions to TAS have been reported in as many as 9% of cats and 3% of dogs [63, 70]. Anaphylaxis is a serious complication of TAS administration, and for this reason it is recommended that TAS be administered slowly over the first 15 minutes and the patient monitored closely.

Survival rates for most cases of tick paralysis are quite high. Although occasional cases progress to severe disease (stages 3 and 4, C and D), the majority of cases remain clinically stable or respond quickly to TAS, and complete resolution of disease is seen over a period of days [17, 56, 62]. Overall survival rates upwards of 95% are reported in the literature [56]. However, for patients requiring mechanical ventilation or those experiencing severe complications such as aspiration pneumonia, the prognosis for survival is much more guarded [63, 69].

Due to the varying severity of tick paralysis cases, there is no blanket dose rate of TAS. The manufacturer (AVSL) recommends administering TAS at a rate of 1 ml/kg in stage 1A-2B cases of paralysis; however, doses of up to 4.0 ml/kg may be required. TAS should be allowed to warm to room temperature, diluted 1:1 with normal saline, and administered slowly (over 15-20 minutes) via the intravenous route in dogs. In cats, the diluted TAS should be administered more slowly, with a small amount given intravenously over the first 15 minutes while the patient is closely observed for evidence of an adverse reaction. The remainder of the dose can then be administered over 30-60 minutes.
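To make the administration arithmetic explicit, the sketch below works through the manufacturer's figures quoted above (1 ml/kg for stage 1A-2B, diluted 1:1 with normal saline, given over 15-20 minutes in dogs) for a hypothetical 20 kg dog. The helper name and pump-rate framing are illustrative assumptions, not manufacturer guidance.

```python
# Illustrative arithmetic for the TAS figures quoted above; not dosing advice.

def tas_infusion_plan(weight_kg: float, dose_ml_per_kg: float = 1.0,
                      minutes: float = 20.0) -> dict:
    """1 ml/kg for stage 1A-2B cases (up to 4 ml/kg may be required),
    diluted 1:1 with normal saline and given IV over 15-20 min in dogs."""
    tas_ml = dose_ml_per_kg * weight_kg
    diluted_ml = tas_ml * 2  # 1:1 dilution with normal saline
    return {"tas_ml": tas_ml,
            "diluted_volume_ml": diluted_ml,
            "pump_rate_ml_per_hr": diluted_ml / minutes * 60.0}

# Hypothetical 20 kg, stage 2B dog: 20 ml TAS + 20 ml saline = 40 ml,
# delivered over 20 minutes, i.e., 120 ml/hr on an infusion pump.
print(tas_infusion_plan(20.0))
```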
## 5. Snake Envenomation
There are several species of venomous snakes in Australia with venom capable of producing neurologic signs consistent with lower motor neuron blockade. Nationwide, the species most commonly responsible for snake bites in small animals are the eastern brown snake (Pseudonaja textilis), the western brown snake (Pseudonaja nuchalis), tiger snakes (Notechis scutatus), and red-bellied black snakes (Pseudechis porphyriacus) [71–76]. Of these, the venoms of the tiger and brown snakes are the most likely to produce neurologic signs [8, 72–75, 77–79]. Anecdotally, envenomation by the whipsnake (Demansia spp.) is also associated with neurologic signs in dogs, although there are no case series reported in the literature. The most commonly encountered species will depend largely on the geographical location of the patient at the time of the bite. In one 2005 survey of snake bites in animals treated by veterinarians in New South Wales, over 40% of snake bites were attributed to brown snakes [72]. An older nationwide survey reported that over 76% of treated snake bites were the result of brown snake envenomation [78].

The venom of each of these snakes is a heterogeneous mix of toxic compounds, and envenomations are therefore infrequently associated with only a single clinical sign [8]. Depending on the dose of venom delivered to the patient, the species of snake, and the age of the snake, dogs and cats may present with only weakness and ataxia consistent with lower motor neuron blockade. Neurologic signs are due to both pre- and postsynaptic neurotoxins, which inhibit acetylcholine receptor activity and/or the release of acetylcholine vesicles from the nerve terminal [75, 80].

Patients presenting with bites from brown snakes frequently have both neurologic and coagulopathic signs [8, 75]. The venom of juvenile brown snakes is almost exclusively neurotoxic, whereas in adult snakes the venom is both neurotoxic and coagulopathic [81]. Tiger snake bites are frequently associated with neurologic (LMN) signs [71–74, 77–79, 82]. Although black snake venom can produce neurologic signs, it is much more likely to produce myolytic and coagulopathic signs in addition to any neurologic signs in a dog [8, 76, 80, 83]. The presence of a neuropathy with a concurrent coagulopathy or myopathy is most consistent with snake envenomation. Confirmation of snake envenomation can be made with a Snake Venom Detection Kit (SVDK) (CSL Limited, Victoria Park, Australia) using a swab of the bite site, serum/plasma, heparinised whole blood, or urine as the sample. A study in cats showed that plasma is the most reliable sample for detecting envenomation with an SVDK if the bite occurred within the last 8 hours, and urine if the bite occurred more than 8 hours earlier [84]. This reflects a delay of up to eight hours for the toxin to be filtered into the urine by the kidneys. In the same study, venom could be detected in a plasma sample within the first seventeen hours, whereas venom in urine was detectable up to 48 hours after the bite.

There are no veterinary studies investigating the variation of clinical signs associated with various species of snakes within a genus or in various geographic regions. Even among members of the same genus, there can be considerable intra- and interindividual variation in venom composition [5, 85].
In one case series of human brown snake envenomations across Australia, there was no difference in clinical signs, severity of disease, or deaths when cohorts of cases from various regions were compared [86]. Anecdotal reports by veterinary emergency and critical care specialists familiar with both eastern and western brown snake envenomation suggest that cases associated with the eastern brown snake may have a slightly delayed presentation of neurologic signs (up to 36 hours), whereas western brown snake bites appear to present much more rapidly. A delayed presentation complicates diagnosis, as the SVDK may be negative when assessed more than 48 hours after the bite.

As with other causes of LMND, treatment will largely depend on the severity of LMN signs and may include ventilatory support, nutritional support, and nursing care while the clinical signs improve. Unless the offending snake was seen and positively identified, it is always preferable to obtain a definitive identification via the SVDK so that antivenin therapy can be as specific as possible. Using a monovalent antivenin is preferred, as it is more efficacious and cost-effective and carries less risk of adverse reactions due to the smaller volume required for treatment compared with a polyvalent product. The commercially available antivenin for veterinary use is formulated from hyperimmune equine serum, with each vial containing enough antibody to neutralise the average amount of venom milked from a snake of that species (package insert, Brown Snake Antivenom, AVSL, Lismore, Australia). The amount required, however, will depend on the amount of venom received by the patient [72]. The SVDK provides genus-specific identification of the toxin, thereby allowing the practitioner to select the most appropriate monovalent antivenin.

The method of administration of antivenin may depend slightly on the manufacturer’s instructions. Generally, room-temperature antivenin is diluted 1:10 with a balanced electrolyte solution (e.g., Hartmann’s) and then given slowly via the intravenous route over 15-30 minutes [72]. The major risk following antivenin administration is anaphylaxis, and patients should always be carefully monitored during administration. Patients who have previously received antivenin are at greater risk of an adverse reaction. A delayed, type-III hypersensitivity has been recorded in humans but has not been recorded in dogs [72]. Premedication with atropine, epinephrine, or a corticosteroid is not recommended.

Prognosis for survival of snake envenomation in dogs and cats is good but depends on the severity of clinical signs, accurate identification of the snake involved, and treatment with the appropriate antivenin. In one retrospective study, seventy-five percent of dogs and 91% of cats treated with antivenin survived, compared with only 31% of dogs and 66% of cats that did not receive antivenin [78]. Another survey suggested that overall survival of snake bite in dogs and cats treated in New South Wales veterinary hospitals was approximately 63, 70, and 84% for tiger, brown, and red-bellied black snake bites, respectively [72].
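The SVDK sample-selection logic described above (from the feline study cited as [84]) reduces to a small decision rule, sketched below. The function name, time windows, and return strings are illustrative summaries of that study's findings, not kit instructions.

```python
# Minimal sketch of the SVDK sample-selection logic described above,
# based on the feline study cited as [84]; labels are illustrative.

def best_svdk_sample(hours_since_bite: float) -> str:
    """Plasma is most reliable within ~8 h of the bite (venom detectable
    up to ~17 h); after ~8 h urine is preferred (detectable up to ~48 h)."""
    if hours_since_bite <= 8:
        return "plasma"
    if hours_since_bite <= 48:
        return "urine"
    return "either sample may be negative; weigh clinical signs instead"

print(best_svdk_sample(5))    # plasma
print(best_svdk_sample(24))   # urine
print(best_svdk_sample(60))   # likely negative at this point
```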
## 6. Tetrodotoxin
An infrequently encountered but important cause of LMND in Australian dogs is the ingestion of marine animals containing tetrodotoxin, particularly puffer fish or the blue-ringed octopus. Tetrodotoxin is not exclusive to these species, as the toxin has been identified in some terrestrial frogs and newts as well as several marine sea stars and sea slugs [87–89]. Interestingly, tetrodotoxin is not a direct product of the organism, but rather a product of commensal marine bacteria within and/or ingested by the animal (e.g., puffer fish) [90]. The toxin is not evenly distributed throughout the animal, with high concentrations found in the skin and viscera of puffer fish [87]. Dogs may encounter whole fish or parts of fish left on beaches by fishermen, as puffer fish are a regular by-catch for sport fishermen in Australia.

Tetrodotoxin is a potent neurotoxin that blocks fast sodium channels both within the nerve axon and on the myocyte. Onset of clinical signs is fairly rapid. In people, clinical signs manifest as numbness in the limbs followed by an ascending LMN paresis and paralysis. Large doses will manifest as seizures, respiratory paralysis, coma, and death [87]. Clinical signs in dogs are believed to include vomiting, ataxia, lethargy, cardiac arrhythmias, respiratory paralysis, and death [88].

There is no antitoxin for tetrodotoxin, and treatment is supportive. A diagnosis of tetrodotoxin poisoning should be made based on the clinical presentation combined with a recent history of the pet visiting a beach or marina where discarded fish may have been found.
## 7. Miscellaneous Causes
The complete list of diseases, drugs, and toxins associated with neuromuscular blockade or neuromuscular weakness is outside the scope of this review. A brief list of less common causes of significant LMN disease in dogs and cats includes botulism, aminoglycoside antibiotics, hypothyroidism, hyperadrenocorticism, paraneoplastic polyneuritis, and vincristine/vinblastine neuropathy [11, 91–93].

LMN signs associated with botulinum toxin have been reported in dogs but appear to be uncommon to rare [94]. Dogs may develop intoxication after ingestion of a carcass or similar spoiled product [95]. Signs of botulism can be delayed up to 4 days after ingestion of the toxin but typically begin as an ascending flaccid paralysis starting in the pelvic limbs [95–97]. Similar to polyradiculoneuritis, affected dogs frequently retain the ability to wag their tails [94]. Treatment is supportive, and recovery is usually complete within one to two weeks [94, 95].

Neuromuscular blockade can be seen with several commonly used medications, including aminoglycoside and tetracycline antibiotics [93]. Lasalocid toxicity from accidental ingestion or contaminated food has been associated with weakness and LMN signs in dogs [98–100]. Myasthenia gravis has been documented secondary to methimazole therapy in cats within the first several weeks of initiating therapy [101].

Muscle weakness can also be seen in primary myopathies. Therefore, other differentials for patients presenting with presumed LMN or motor unit dysfunction may include inflammatory causes of muscle weakness such as protozoal disease, immune-mediated myopathies, or paraneoplastic syndromes. A complete discussion of diseases producing a myopathy is beyond the scope of this review.
## 8. Conclusion
Lower motor neuron (motor unit) disease is a frequently encountered complaint in Australian dogs and cats. The frequency of patients presenting with LMN signs is largely due to the unique native fauna of Australia. Although tick paralysis, snake envenomation, and marine animal intoxications are not exclusive to Australia, they are an important and not uncommon group of intoxications seen by small animal practitioners there. The ability both to rapidly recognise LMN signs and to identify the diseases or intoxications most likely to be responsible is key to successful treatment and positive outcomes in Australian dogs and cats.
---
*Source: 1018230-2018-08-06.xml* | 1018230-2018-08-06_1018230-2018-08-06.md | 40,340 | Diagnosis and Treatment of Lower Motor Neuron Disease in Australian Dogs and Cats | A. M. Herndon; A. T. Thompson; C. Mack | Journal of Veterinary Medicine
(2018) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2018/1018230 | 1018230-2018-08-06.xml | ---
## Abstract
Diseases presenting with lower motor neuron (LMN) signs are frequently seen in small animal veterinary practice in Australia. In addition to the most common causes of LMN disease seen world-wide, such as idiopathic polyradiculoneuritis and myasthenia gravis, there are several conditions presenting with LMN signs that are peculiar to the continent of Australia. These include snake envenomation by tiger (Notechisspp.), brown (Pseudonajaspp.), and black snakes (Pseudechisspp.), tick paralysis associated withIxodes holocyclus andIxodes coronatus, and tetrodotoxins from marine animals such as puffer fish (Tetraodontidae spp.) and blue-ring octopus (Hapalochlaenaspp.). The wide range of differential diagnoses along with the number of etiological-specific treatments (e.g., antivenin, acetylcholinesterase inhibitors) and highly variable prognoses underscores the importance of a complete physical exam and comprehensive history to aid in rapid and accurate diagnosis of LMN disease in Australian dogs and cats. The purpose of this review is to discuss diagnosis and treatment of LMN diseases seen in dogs and cats in Australia.
---
## Body
## 1. Introduction, History, and Physical Examination
Lower motor neuron disease (LMND) broadly refers to conditions that preferentially affect the motor nerve bodies originating in the ventral horn of the spinal cord grey matter, their axons, the neuromuscular junction, and the muscle fibre. Disruption of motor unit function results in diminished motor function (e.g., paresis or paralysis) of the affected region, flaccid muscle tone, and diminished or absent reflex arcs. Depending on the region and type of disease, this change in function may be regional, such as megaoesophagus seen in focal myasthenia gravis, or generalised, such as tetraparesis with pharyngeal and respiratory muscle paralysis that can be seen with tick paralysis cases. Lower motor neuron disease is the result of a wide variety of underlying disease pathologies. Immune-mediated targeting of nerves or motor endplate is seen in myasthenia gravis and idiopathic polyradiculoneuritis [1–3].The importance of a good history cannot be overemphasized. As certain causes of LMND in Australia show a strong regional distribution (e.g., tick paralysis), understanding the lifestyle and recent travel history of the pet may assist in shortening the list of differential diagnoses [4–9]. In addition to travel history, other important questions include the following: recent history of antigenic stimulation (e.g., vaccine); use of acaricides; onset, duration, and progression of clinical signs; sighting of ticks or snakes in the pet’s environment; history of similar, previous events; and history of raw or undercooked animal products in the diet.Neurolocalisation is a critical first step in identifying a disease with a LMND component. A summary of clinical abnormalities on neurologic exam and their relationship to LMND is included in Table1. The clinical findings of a good neurologic exam are frequently enough to localise the neurologic lesion to brain, brain stem, spinal cord segments, or lower motor neuron. The common theme among all neurologic abnormalities is the failure of the motor unit to function normally, despite normal sensory input.Table 1
Summary of abnormal findings on a clinical neurological exam and whether these findings are consistent with lower motor neuron disease.
Neurological Exam Abnormality
Typical of LMND
Seizures
NO
Altered mentation
Pacing
Head pressing
Head tilt
Head turn
Gait abnormalities
Short gait, stilted gait, sits frequently
YES
Ataxia
Normal muscle tone, abnormal movement
NO
Hypermetria
NO
Lameness
NO
Tires easily/weakness after exercise
YES
Proprioception/Postural reactions are ABNORMAL or ABSENT
NO
∗
∗
Only evaluate when patient is PROPERLY SUPPORTED when reactions are tested.
∗
∗
Decreased muscle tone and/or muscleatrophy
YES
Spinal Reflexes
patellar, triceps, perineal, sciatic DIMINISHED or EXHAUSTABLE with repetition
YES, although perineal reflexes and motor function to tail may be preserved
Reflexes clonic or exaggerated
NO
Nociception diminished or absent
NO
Dysphonia
YES
Dysphagia
Spinal pain
NO (rare with acute PRN)
Cranial nerves
Bilateral abnormalities in PLR, facial nerve weakness, diminished swallow or gag
Not typical of ALL LMND, but common with tick paralysis
UNILATERAL abnormalities?
NO
Megaoesophagus
Not typical of all LMND, but frequently seen with MG and tick paralysisThe clinical hallmark of lower motor neuron disease is skeletal muscle weakness, although there are examples of smooth and cardiac muscle involvement associated with diseases affecting the lower motor neuron [10]. LMND weakness is infrequently global, or involving all muscle groups equally. In most instances there will be a progression of signs with the pelvic limbs being affected first followed by thoracic limbs, oesophagus, and then cranial motor nerves to the face, pharynx, and larynx [3, 11–15]. Occasionally, disease is limited to a specific region, such as oesophagus in myasthenia gravis [3, 11, 13, 16]. Even less commonly, LMND signs may first be seen in the thoracic limbs [11]. Gait abnormalities such as goose-stepping, crossing limbs, spasmodic movements, or head tilts and turns are not consistent with LMND and disorders of the CNS should be considered.It is important to keep in mind that in LMND the sensory component of the nervous system is intact. Therefore, patients with LMND have intact nociception, proprioception, and spatial awareness. The skeletal muscle weakness may have the appearance of ataxia, but with support these patients will attempt normal responses to postural reactions and positioning. Postural reaction and nociceptive testing are important tools for differentiating LMND from spinal or brain disease. Polyneuropathies affecting both motor and sensory pathways may present with ataxia, and this may be seen along with classical LMN signs. Another important discriminator is that alterations in mentation and/or seizures are always associated with forebrain disease and are not consistent with LMND as a sole aetiology.Assessment of ventilation parameters can be critical in advanced or rapidly progressive LMND. Patients may be tachypneic, but due to weakened diaphragm and intercostal muscles the patient may be seriously underventilated. Dogs with tick paralysis often demonstrate a classic expiratory “grunt.” In the absence of severe pulmonary disease, identifying hypercapnia on venous blood gas sample is highly suggestive of hypoventilation and is an indication for ventilatory support. Lower motor neuron disease patients unable to maintain adequate ventilation require rapid intervention, possibly intubation with intermittent positive pressure ventilation or maintenance via mechanical ventilator.Dysphonia, dysphagia, and megaoesophagus are characteristics of some LMN diseases, especially with tick paralysis and myasthenia gravis [15, 17–21]. Ptyalism, slow or absent gag, slow or absent swallow on laryngeal palpation, and regurgitation are all consistent with pharyngeal, laryngeal, and oesophageal disease. As normal laryngeal function is required for the reflex movement of the epiglottis to protect the airway, patients with abnormal laryngeal function or those with passive regurgitation are at extremely high risk of aspiration. It is important that any patient with evidence of dysphagia or megaoesophagus be maintained in sternal recumbency with the head elevated at all time. As many LMND patients are unable to support their heads, stacks of towels or pillows may be required to keep the patient in an upright position.Nutritional requirements of dogs and cats with LMND will vary depending on the severity of disease and the effect on the animal’s ability to prehend food. In cases of focal or pelvic limb weakness, appetite, drinking, and eliminations are frequently normal. 
However, patients with laryngeal paralysis or megaoesophagus may even benefit from a gastric or nasogastric feeding tube in order to provide nutrition in the face of an inability to prehend food. Any patient that is unable to eat within three days or where the prognosis is such that they are unlikely to be able to safely prehend food within the next several days or weeks is a candidate for a supportive enteral feeding catheter, for example, a percutaneous gastric feeding tube.The degree and duration of supportive care required for animals affected by LMN diseases completely depend on the severity of disease and expected duration of clinical signs. Specific diseases are discussed in the next section. It follows logic that the more support a patient initially requires in hospital, the more prolonged their recovery may be. Cases of polyradiculoneuritis, for instance, may require days to weeks in hospital followed by weeks or even months of home care to fully resolve [11, 22, 23]. Some cases of tick paralysis respond rapidly to tick removal and/or antiserum and may improve from nearly lateral recumbency to normal ambulation within 24 to 48 hours [19]. The prognosis associated with the various LMND is discussed in the next section.
## 2. Idiopathic Polyradiculoneuritis
Idiopathic polyradiculoneuritis, or acute canine polyradiculoneuritis (ACP), is an ascending lower motor neuron paralysis first identified in dogs in the southern United States after exposures to raccoon saliva by way of a bite wound [24–26]. Identical signs were described in dogs not exposed to raccoon bites and in these dogs the condition eventually became known as idiopathic polyradiculoneuritis [11]. However, the original moniker of “Coonhound paralysis” still persists even though that only describes a specific subset of cases.Polyradiculoneuritis is an immune-mediated disease that can be triggered by a wide variety of stimuli, including vaccines, raccoon saliva, and some infectious agents (includingToxoplasma gondii) [25, 27–30]. The precise pathophysiology is not known for all cases of ACP, but immune targeting of gangliosides in nerve bodies and axons has been identified. Anti-ganglioside autoantibodies are also seen in the human disease Guillian-Barré, and canine polyradiculoneuritis is seen as a homolog to Guillian-Barré in people [31]. There is no age, breed, or sex predilection associated with development of ACP.Diagnosis of ACP is made based on presenting clinical signs and ruling out other cases of LMND. A complete history and thorough tick-check are a necessary first step in ruling out other common causes of LMND such as snake bite, tick paralysis, or myasthenia. A Snake Venom Detection Kit may be necessary to help rule out snake envenomation if there is a high enough index of suspicion. Acetylcholine receptor antibody titres are valuable to attempt to rule out acquired myasthenia gravis.Toxoplasma gondii serology should be considered in all cases of ACP diagnosed in Australia [29].Clinical signs of ACP typically begin in the pelvic limbs and slowly (over days) progress to involve the thoracic limbs and cervical muscles. In severe cases, motor function to the larynx and facial nerves is affected and dysphonia and absent swallow are possible [22, 23, 28, 32–34]. Paralysis of respiratory muscles is not common but possible. Motor function to the perineum and tail are typically spared and ACP dogs are able to wag their tail and perform normal, intentional urination and defecation [11]. ACP patients are typically bright and alert. Megaoesophagus is not a characteristic of ACP in dogs. Hence, a lack of a megaoesophagus may support a diagnosis of ACP.Although ACP is immune-mediated, immunosuppression is not advised. There have been no controlled trials investigating this in veterinary patients, and this advice is based on strong evidence in the human literature against employing corticosteroids in the treatment of Guillian-Barré [35]. The use of human intravenous immunoglobulin (IVIG) has been investigated in one study, but the benefits of immunoglobulin therapy were not clear [22].The mainstay of treatment of ACP is supportive care including nutritional support and physical therapy. ACP patients have normal sensory input and attention should be paid to provide adequately soft bedding and regular repositioning to avoid sores and discomfort. Every effort should be made to maintain sternal recumbency to facilitate easier eating and interaction with their environment. It is important to manage owner expectations as well in cases of ACP. Complete recovery is common (barring significant complication such as aspiration pneumonia) but can be prolonged. 
Most patients will gradually recover within a few weeks, but complete recovery may require several months of supportive nursing care [11, 23–25, 27, 28, 33].
## 3. Myasthenia Gravis
Myasthenia gravis (MG) is the result of immune-mediated targeting of the sarcolemmal acetylcholine receptors in the neuromuscular junction. The resulting blockade and associated remodelling cause insensitivity to acetylcholine and a failure of excitatory signalling to propagate to the myocyte and a flaccid paresis/paralysis [3].Myasthenia gravis can be seen at any age, but age at onset of clinical signs is distributed largely along two peaks, with one seen in young adulthood between two and four years of age, and a second one later in life between the ages of nine and thirteen [18, 21]. In dogs, MG is associated with a mediastinal mass approximately 3.4% of cases, whereas in cats the proportion is markedly higher with around half (52%) of all MG cats in one case series having a concurrent diagnosis of a mediastinal mass [36, 37].There are known breed predispositions for MG in both dogs and cats. Somali and Abyssinian breeds appear to be overrepresented among cats and the Akita, German Shepherd dog, German Shorthaired Pointer, Newfoundland, and most terrier breeds are overrepresented among dogs [36–39].Presenting clinical signs can vary from mild to severe, focal or generalised, and include weakness or paresis, collapse, megaoesophagus, and dysphonia. Signs are commonly more noticeable in the pelvic limbs. Owners typically report exercise intolerance deteriorating into a short, stilted gait which resolves after rest being a frequent complaint. In one study, approximately 43% of myasthenic patients did not have clinically detectable limb weakness at the time of diagnosis [37]. Some dogs and cats will present with only megaoesophagus or dysphonia and no evidence of limb weakness [16, 20, 21, 39].Myasthenia gravis has been reported as a paraneoplastic syndrome associated with numerous tumours. Most commonly associated with MG are mediastinal masses, especially thymoma, as mentioned above. However, various other sarcomas and haematopoetic tumours have also been associated with the onset of MG [40–43].A definitive diagnosis of MG requires demonstration of a positive antibody titre of greater than 0.6 nmol/L in dogs and 0.3 nmol/L in cats to acetylcholine receptors [20]. Occasionally, patients presenting early in the course of disease may have titres within the reference interval [44]. These patients may develop a positive titre if rechecked in 2-3 weeks. A very small percentage of tetraparetic or fulminant MG patients may have a negative titre, but this is reported as less than 2% [20, 45]. At the time of this publication, the only location running ACh receptor antibody titre is in North America and the turn-around time for testing a sample from an Australian patient is close to three weeks. Therefore, a presumptive diagnosis is usually required while awaiting the definitive diagnosis. Because of the variety of clinical presentations for acute MG it can be difficult to differentiate acute, fulminant MG from other causes of acute LMND based on presenting signs alone. For instance, as many as 10% of acute MG patients may not present with megaoesophagus. In such a case, fulminant MG may be difficult or impossible to differentiate from ACP.In cases where snake envenomation and tick are less likely (or have been ruled out) and the index of suspicion for MG is high, the clinical response to a test dose of edrophonium or pyridostigmine may be a good diagnostic option. Edrophonium has been unavailable on the Australian market for some time. 
However, if available, it provides an immediate, but very short lived, reversal of clinical signs in most MG cases of mild to moderate severity. Edrophonium is dosed at 0.11-0.22 mg/kg IV once for dogs and 0.25-0.5 mg per cat. The effect is typically seen within seconds and lasts less than two minutes. Alternatively, neostigmine bromide can be administered at a dose of 0.02 mg/kg given slowly IV. The effects of a single dose of neostigmine are not as dramatic as those seen with edrophonium, but a good clinical response (increased strength, ability to walk) within 15 to 30 minutes would support a diagnosis of MG. Both drugs must be used with caution in cats as they are more sensitive to the cholinergic side-effects than dogs. Pretreatment with atropine is recommended in all cats and should be kept on-hand in the event of an adverse reaction in dogs shortly after administration. It is important to remember that a positive response to acetylcholine esterase inhibitor is not pathognomonic for MG as other diseases of the neuromuscular junction may also experience a transient response.Treatment of MG in dogs and cats focuses on increasing the amount of acetylcholine available at the neuromuscular junction by inhibition of acetylcholinesterase enzyme. Pyridostigmine bromide (Mestinon®) is dosed at 1-3 mg/kg by mouth every eight to twelve hours to start and the dose may be slowly increased over a period of weeks if necessary to see clinical improvement. Side-effects (nausea, diarrhoea, salivation, and lacrimation) are usually mild and resolve with time but are occasionally significant in some patients and require dose adjustments or concurrent administration of atropine to minimise side-effects. Nutritional support may be a challenge in MG patients with concurrent megaoesophagus. Elevated feeding, slurry feeding, and even use of gastric feeding tubes may be necessary to aid in passive movement of food into the stomach and to limit the possibility of an aspiration event.Immunosuppression is not routinely necessary in mild or focal cases of MG and the myopathy associated weakness seen with high-dose steroid administration may only complicate the disease [46]. However, in severe or refractory cases of MG, immunomodulatory therapy may need to be considered. Corticosteroids such as prednisolone dosed at 0.5 mg/kg/day in dogs or cats is usually appropriate to start [20]. The dose is conservative at first to avoid side-effects such as muscle weakness but can be gradually increased over a period of days and weeks if needed to immunosuppressive dosing of around 1.5 mg/kg/day. If enteral administration of medication is complicated by megaoesophagus, parenteral dexamethasone at 0.15 mg/kg/day may also be considered. Cyclosporine or mycophenolate mofetil may be an alternative in patients where steroid use is contraindicated (diabetic, septic/pneumonia cases) [46, 47]. Azathioprine can be an excellent adjunct therapy or even a good long-term monotherapy, but slower onset of action may limit its utility in acute or fulminant cases of MG [20, 48]. Additionally, azathioprine is not appropriate for use in cats with MG as it is associated with neuromuscular blockade in this species. In human cases of MG, plasmapheresis and intravenous immunoglobulin treatments are associated with clinical improvement, but these therapies are not available, practical, or appropriate for most canine or feline patients [49–52].Spontaneous remission of disease is possible in cases of MG in dogs. 
In fact, some reports suggest nearly 89% of dogs may experience spontaneous remission [53]. Remission does not appear to be a characteristic of the disease in cats [36]. Clinical remission of MG in dogs may occur within as little as one month but on average occurs around six months after diagnosis [53].
## 4. Tick Paralysis
Tick paralysis is not exclusive to Australia. The disease is seen very infrequently in the Pacific Northwest and Atlantic South East of the United states and rarely in Europe [15, 54, 55]. By comparison, tick paralysis is fairly common in Australia with one study describing over 3,400 cases in eastern Australia over just two tick seasons, from September 2010 to January 2012 [56]. The disease is associated with toxins produced by the salivary glands of hard-bodied ticks in the genusIxodes, specificallyIxodes holocyclus, coronatus, andneumann [4, 7, 15, 57–59]. This is in contrast to tick paralysis in the United States that is associated with the genusDermacentor [54, 55]. The exact mechanism of holocylotoxin-induced LMND is incompletely understood. The effect of the toxin appears to be focused on the presynaptic surface of the motor endplate and appears to block calcium influx, thereby preventing depolarization of nerve endings and propagation of signal across the neuromuscular junction [12, 15].Cases of tick paralysis show a very strong geographical and seasonal distribution. TheIxodes ticks are distributed along the east and Southeastern coasts of Queensland, New South Wales, and Victoria, and roughly follow the native habitat of their preferred bandicoot and possum hosts [7, 15, 17, 60–63]. Only the female tick has long enough mouthparts to pierce and hold fast. Therefore, only females feed for a long enough time to produce tick paralysis. This helps explain much of the seasonality of tick paralysis with the numbers of female ticks seeking large meals in preparation for egg laying peak in the spring and early summer months. Tick paralysis cases can be seen year-round, but nearly three-quarters of all cases occur between September and December and a further 14% of cases over the summer months [17, 56].Tick feeding behaviour is not a simple “attach and start feeding” process. Over the first few days the tick will draw blood in and out of her mouthparts as she prepares both herself and the host’s local environment for a larger blood meal. Over these several days, the amount of holocylotoxin the salivary glands produce increases [15]. For this reason, ticks typically must remain attached for several days to see disease, with paralysis starting on the third day of feeding and progressing on subsequent days. This also highlights the importance of daily tick-checks and rapid-kill acaricides as effective preventative measures as they prevent tick attachment long enough to allow clinical disease to manifest. There are currently several acaricides on the market in Australia with documented rapid kill ofIxodes species [64–67]. Although no definitive studies have been published to date, early evidence supports a possible effect of isoxazoline parasiticides in decreasing the incidence of tick paralysis in Australia [68].Definitive diagnosis of tick paralysis is almost always made based on finding an engorged tick or a recent “crater” or site where the tick was recently attached. When tick paralysis is a likely differential, it is advised to clip all hair on the body and perform a thorough search for embedded ticks. Particular attention must be paid to the head and neck, thorax, areas of skin folds (axilla, vulva, and groin), and the interdigital spaces [63]. Successful treatment and a positive outcome are impossible unless all ticks attached to the patient are identified and removed. 
It is not necessary to find an engorged tick, as the offending tick may be early in her feeding or may have finished her feed and detached prior to the onset of significant clinical signs. Finding any tick, or a “crater” from attachment, is diagnostic. It is common practice to apply a topical acaricide, such as permethrin or fipronil, to kill any ticks that may have been missed during the search. These treatments do not substitute for a comprehensive search and tick removal, as killing an already attached and feeding tick is not as effective at resolving clinical signs as mechanically removing it.

The clinical signs of tick paralysis commonly start as an ascending limb paralysis first noted in the pelvic limbs and eventually involving the thoracic limbs. Involvement of the larynx and oesophagus, as well as paralysis of the facial nerve, is possible as the disease progresses [11, 15, 56, 63]. Advanced cases, particularly in cats, may show dilated and unresponsive pupils as the oculomotor nerve becomes involved [62]. Asymmetry in presenting signs is occasionally encountered, with the thoracic limbs more affected than the pelvic limbs or the right and left cranial nerves asymmetrically affected [14]. A clinical scoring system for tick paralysis is commonly used to standardise the assessment of patients [17] (Table 2). This scoring system grades motor function (mild paresis through to lateral recumbency, graded 1-4) as well as respiratory signs (no respiratory problems through to severe distress and cyanosis, graded A-D). Use of this scoring terminology allows efficient communication of disease status between clinicians. Additionally, accurate staging can assist in formulating a prognosis for recovery, as discussed later in this section [17, 56, 62, 63, 69].

Table 2
Clinical scoring system used to standardise the clinical severity of tick paralysis in dogs and cats. Adapted from Atwell et al., 2001.
| Neuromuscular score | Description | Respiratory score | Description |
| --- | --- | --- | --- |
| 1 | Normal or mild weakness and incoordination | A | Normal |
| 2 | Ambulatory but with obvious weakness | B | Increased respiratory rate, but normal effort |
| 3 | Unable to stand, but can right self | C | Restrictive breathing pattern, gagging, retching |
| 4 | Unable to right self, moribund | D | Expiratory grunt, dyspnoea, cyanosis |

Treatment of tick paralysis requires removal of the offending tick (as described above), supportive care of the patient, and, in some patients, administration of tick antiserum (TAS). There are no clear-cut guidelines for when TAS is indicated in cases of tick paralysis. In one nationwide retrospective study, TAS was used in less than 2% of all cases [56]. This study did not report the clinical tick scores of these cases, and it is unknown whether the low rate of TAS administration was due to a predominance of mild cases or to other reasons such as financial constraints. This is in stark contrast to a recent retrospective cohort study of 2,077 cases of tick paralysis over an eight-year period, in which TAS was administered in 95% of the 1,742 feline cases where 5-day mortality was known [63]. That study also reported a 4-fold reduction in the risk of death when TAS was administered as part of comprehensive therapy. The authors use the very loose guideline that TAS administration should be discussed with the owner and considered in all tick paralysis patients regardless of stage. However, the risk of TAS remains high enough in some patients that it may outweigh the benefits (e.g., in a cat with mild ataxia or a previous history of TAS exposure).

The risks of TAS administration include anaphylactic reactions and the Bezold-Jarisch (BJ) reflex [56, 70]. The BJ reflex is a vagally mediated response secondary to direct chemical stimulation of cardiac receptors. Because of this response, premedication with atropine prior to administration has been advocated; however, this recommendation has changed over time, and it is now common for practitioners to use no premedication prior to administration of TAS [56, 63, 70]. Adverse reactions to TAS have been reported in as many as 9% of cats and 3% of dogs [63, 70]. Anaphylaxis is a serious complication of TAS administration, and for this reason it is recommended that TAS be administered slowly over the first 15 minutes with the patient monitored closely.

Survival rates for most cases of tick paralysis are quite high. Although occasional cases progress to severe disease (Stages 3 and 4, C and D), the majority of cases remain clinically stable or respond quickly to TAS, and complete resolution of disease is seen over a period of days [17, 56, 62]. Overall survival rates upwards of 95% are reported in the literature [56]. However, for patients requiring mechanical ventilation or those experiencing severe complications such as aspiration pneumonia, the prognosis for survival is much more guarded [63, 69].

Because the severity of tick paralysis varies widely, there is no blanket dose rate of TAS. The manufacturer (AVSL) recommends administering TAS at a rate of 1 ml/kg in stage 1A-2B cases of paralysis; however, doses of up to 4.0 ml/kg may be required. TAS should be allowed to warm to room temperature, then diluted 1:1 with normal saline and administered slowly (over 15-20 minutes) via the intravenous route in dogs. In cats, the diluted TAS should be administered more slowly, with a small amount given intravenously over the first 15 minutes while the patient is closely observed for evidence of an adverse reaction. The remainder of the dose can then be administered over 30-60 minutes.
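To make the dosing arithmetic above concrete, here is a minimal illustrative sketch (not clinical software); the function name, default values, and return structure are our own, built only on the manufacturer figures quoted in the text (1-4 ml/kg, 1:1 saline dilution):

```python
def tas_dilution_plan(body_weight_kg: float, dose_rate_ml_per_kg: float = 1.0) -> dict:
    """Illustrative TAS dose/dilution arithmetic (a sketch, not veterinary advice).

    Follows the manufacturer guidance quoted in the text: TAS at 1 ml/kg for
    stage 1A-2B cases (up to 4.0 ml/kg may be required), warmed to room
    temperature and diluted 1:1 with normal saline before slow intravenous use.
    """
    if not 1.0 <= dose_rate_ml_per_kg <= 4.0:
        raise ValueError("dose rate outside the 1.0-4.0 ml/kg range quoted in the text")
    tas_ml = body_weight_kg * dose_rate_ml_per_kg
    saline_ml = tas_ml  # 1:1 dilution with normal saline
    return {"tas_ml": tas_ml, "saline_ml": saline_ml, "total_infusion_ml": tas_ml + saline_ml}

# Example: a 20 kg dog at the 1 ml/kg starting rate ->
# 20 ml TAS + 20 ml saline = 40 ml total, given over 15-20 minutes in dogs.
print(tas_dilution_plan(20.0))
```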
## 5. Snake Envenomation
There are several species of venomous snakes in Australia whose venom can produce neurologic signs consistent with a lower motor neuron blockade. Nationwide, the species most commonly responsible for snake bites in small animals are the eastern brown snake (Pseudonaja textilis), western brown snake (Pseudonaja nuchalis), tiger snake (Notechis scutatus), and red-bellied black snake (Pseudechis porphyriacus) [71–76]. Of these, the venoms of the tiger and brown snakes are most likely to produce neurologic signs [8, 72–75, 77–79]. Anecdotally, envenomation from the whipsnake (Demansia spp.) is also associated with neurologic signs in dogs, although there are no case series reported in the literature. The species of snake most commonly encountered will depend largely on the geographical location of the patient at the time of the bite. In a 2005 survey of snake bites in animals treated by veterinarians in New South Wales, over 40% of snake bites were due to brown snakes [72]. An older nationwide survey reported that over 76% of snake bites treated were the result of brown snake envenomation [78].

The venom of each of these snakes is a heterogeneous mix of toxic compounds, and envenomations are therefore infrequently associated with only a single clinical sign [8]. Depending on the dose of venom delivered to the patient, the species of snake, and the age of the snake, dogs and cats may present with only weakness and ataxia consistent with lower motor neuron blockade. Neurologic signs are due to both pre- and postsynaptic neurotoxins, which inhibit acetylcholine receptor activity and/or the release of acetylcholine vesicles from the nerve terminus [75, 80].

Patients presenting with bites from brown snakes frequently have both neurologic and coagulopathic signs [8, 75]. The venom of juvenile brown snakes is almost exclusively neurotoxic, whereas in adult snakes the venom is both neurotoxic and coagulopathic [81]. Tiger snake bites are frequently associated with neurologic (LMN) signs [71–74, 77–79, 82]. Although black snake venom can produce neurologic signs, it is much more likely to produce myolytic and coagulopathic signs in addition to any neurologic signs in a dog [8, 76, 80, 83]. The presence of a neuropathy with a concurrent coagulopathy or myopathy is most consistent with snake envenomation. Confirmation of snake envenomation can be made with a Snake Venom Detection Kit (SVDK) (CSL Limited, Victoria Park, Australia) using a swab of the bite site, serum/plasma, heparinised whole blood, or urine as the sample. A study in cats showed that plasma is the most reliable sample for detecting envenomation with an SVDK if the bite occurred within the last 8 hours, and urine if the bite occurred more than 8 hours earlier [84]; the toxin takes up to eight hours to be filtered into the urine by the kidneys. In the same study, venom could be detected in a plasma sample within the first seventeen hours, whereas venom in urine was detected up to 48 hours after the bite.

There are no veterinary studies investigating the variation in clinical signs associated with various species of snakes within a genus or in various geographic regions. Even among members of the same genus, there can be considerable intra- and interindividual variation in venom composition [5, 85].
In one case series of human brown snake envenomations across Australia, there was no difference in clinical signs, severity of disease, or deaths when cohorts of cases from various regions were compared [86]. Anecdotal reports by veterinary emergency and critical care specialists familiar with both eastern and western brown snake envenomation suggest that cases associated with the eastern brown snake are more likely to have a slightly delayed presentation of neurologic signs (up to 36 hours), whereas western brown snake bites appear to present with neurologic signs much more rapidly. A delayed presentation complicates diagnosis, as the SVDK may be negative when assessed more than 48 hours after the bite.

As with other causes of LMND, treatment will largely depend on the severity of LMN signs and may include ventilator support, nutritional support, and nursing care while the clinical signs improve. Unless the offending snake was seen and positively identified, it is always preferable to obtain a definitive identification via the SVDK so that antivenin therapy can be as specific as possible. Monovalent antivenin is preferred because it is more efficacious and cost-effective and carries less risk of adverse reactions owing to the smaller volume required for treatment compared with polyvalent antivenin. The commercially available antivenin for veterinary use is formulated from hyperimmune equine serum, with each vial containing enough antibody to neutralise the average amount of venom milked from a snake of that species (package insert, Brown Snake Antivenom, AVSL, Lismore, Australia). The amount required, however, will depend on the amount of venom received by the patient [72]. The SVDK provides genus-specific identification of the toxin, thereby allowing the practitioner to select the most appropriate monovalent antivenin.

The method of administration of antivenin may depend slightly on the manufacturer's instructions. Generally, room-temperature antivenin is diluted 1:10 with a balanced electrolyte solution (e.g., Hartmann's) and then given slowly via the intravenous route over 15-30 minutes [72]. The major risk following antivenin administration is anaphylaxis, and patients should always be carefully monitored during administration. Patients who have previously received antivenin are at greater risk of an adverse reaction. A delayed, type-III hypersensitivity has been recorded in humans but has not been recorded in dogs [72]. Premedication with atropine, epinephrine, or a corticosteroid is not recommended.

Prognosis for survival of snake bite envenomation in dogs and cats is good but depends on the severity of clinical signs, accurate identification of the snake involved, and treatment with the appropriate antivenin. In one retrospective study, 75% of dogs and 91% of cats treated with antivenin survived, compared with only 31% of dogs and 66% of cats that did not receive antivenin [78]. Another survey suggested that overall survival of snake bites in dogs and cats treated in New South Wales veterinary hospitals was approximately 63%, 70%, and 84% for tiger, brown, and red-bellied black snake bites, respectively [72].
## 6. Tetrodotoxin
An infrequently encountered but important cause of LMND in Australian dogs is ingestion of marine animals containing tetrodotoxin, particularly puffer fish or the blue-ringed octopus. Tetrodotoxin is not exclusive to these animals: the toxin has been identified in some terrestrial frogs and newts as well as several marine sea stars and sea slugs [87–89]. Interestingly, tetrodotoxin is not a direct product of the organism but rather a product of commensal marine bacteria within and/or ingested by the animal (e.g., puffer fish) [90]. The toxin is not evenly distributed throughout the animal, with high concentrations found in the skin and viscera of puffer fish [87]. Dogs may encounter whole fish or parts of fish left on beaches by fishermen, as puffer fish are a regular by-catch for sports fishermen in Australia.

Tetrodotoxin is a potent neurotoxin that blocks fast sodium channels both within the nerve axon and on the myocyte. Onset of clinical signs is fairly rapid. In people, clinical signs manifest as numbness in the limbs followed by an ascending LMN paresis and paralysis. Large doses will manifest as seizures, respiratory paralysis, coma, and death [87]. Clinical signs in dogs are believed to include vomiting, ataxia, lethargy, cardiac arrhythmias, respiratory paralysis, and death [88].

There is no antitoxin for tetrodotoxin, and treatment is supportive. Diagnosis of tetrodotoxin poisoning should be made based on the clinical presentation combined with a recent history of the pet visiting a beach or marina where discarded fish may have been found.
## 7. Miscellaneous Causes
The complete list of diseases, drugs, and toxins associated with neuromuscular blockade or neuromuscular weakness is outside the scope of this review. A brief list of less common causes of significant LMN disease in dogs and cats includes botulism, aminoglycoside antibiotics, hypothyroidism, hyperadrenocorticism, paraneoplastic polyneuritis, and vincristine/vinblastine neuropathy [11, 91–93].

LMN signs associated with botulinum toxin have been reported in dogs but appear to be uncommon to rare [94]. Dogs may develop intoxication after ingestion of a carcass or similarly spoiled product [95]. Signs of botulism can be delayed up to 4 days after ingestion of the toxin but typically begin as an ascending flaccid paralysis starting in the pelvic limbs [95–97]. Similar to polyradiculoneuritis, affected dogs frequently retain the ability to wag their tail [94]. Treatment is supportive, and recovery is usually complete within one to two weeks [94, 95].

Neuromuscular blockade can be seen with several commonly used medications, including aminoglycoside and tetracycline antibiotics [93]. Lasalocid toxicity from accidental ingestion or contaminated food has been associated with weakness and LMN signs in dogs [98–100]. Myasthenia gravis has been documented secondary to methimazole therapy in cats within the first several weeks of initiating therapy [101].

Muscle weakness can also be seen in primary myopathies. Therefore, other differentials for patients presenting with presumed LMN or motor unit dysfunction may include inflammatory causes of muscle weakness such as protozoal disease, immune-mediated myopathies, or paraneoplastic syndromes. A complete discussion of diseases producing a myopathy is beyond the scope of this review.
## 8. Conclusion
Lower motor neuron (motor unit) disease is a frequently encountered complaint in Australian dogs and cats. The frequency of patients presenting with LMN signs is largely due to the unique native fauna of Australia. Although tick paralysis, snake envenomation, and marine animal intoxications are not exclusive to Australia, they are an important and not-uncommon group of intoxications seen by small animal practitioners there. The ability both to rapidly recognise LMN signs and to identify the most likely diseases or intoxications associated with LMN disease in Australian dogs and cats is key to successful treatment and positive outcomes.
---
*Source: 1018230-2018-08-06.xml* | 2018 |
# Comparison of Inhibitory Effects of Safflower Decoction and Safflower Injection on Protein and mRNA Expressions of iNOS and IL-1β in LPS-Activated RAW264.7 Cells
**Authors:** Hui Liao; Yuanping Li; Xiaoru Zhai; Bin Zheng; Linda Banbury; Xiaoyun Zhao; Rongshan Li
**Journal:** Journal of Immunology Research
(2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1018274
---
## Abstract
Objective. Safflower has antioxidant and anti-inflammatory activities. The two safflower preparations widely used in China are the injection and the decoction. The first step of the process for preparing an injection involves extracting safflower with water, which actually yields a decoction. This study was intended to investigate how the preparation process influences the anti-inflammatory activity of safflower in vitro. Methods. Five samples, including a decoction (sample 1) and an injection (sample 5) of safflower, were prepared according to the national standard WS3-B-3825-98-2012 and were compared by the oxygen radical absorbance capacity (ORAC) method and the 1,1-diphenyl-2-trinitrophenylhydrazine (DPPH) method. Sample 1 and sample 5 were further tested by the Griess assay and ELISA for their effects on nitric oxide (NO) production and interleukin- (IL-) 1β content in lipopolysaccharide- (LPS-) activated RAW264.7 cells. The protein and mRNA levels of inducible nitric oxide synthase (iNOS) and IL-1β were measured by Western blotting and real-time quantitative PCR. Results. Sample 5 showed a significantly higher ORAC value and a lower half inhibitory concentration (IC50) for DPPH scavenging activity than the other four samples (p<0.05). LPS significantly upregulated the mRNA and protein expression of iNOS and IL-1β as compared to the solvent control (p<0.01). As compared to sample 1, sample 5 significantly decreased NO production, iNOS protein expression, and the contents of IL-1β mRNA and IL-1β protein at both 100 μg/ml and 200 μg/ml (all: p<0.05) and significantly downregulated iNOS mRNA expression at 100 μg/ml (p<0.05). Conclusions. The safflower injection prepared according to the national standard significantly suppresses the protein and mRNA expression of iNOS and IL-1β as compared to the traditional decoction.
---
## Body
## 1. Introduction
Safflower is the tubular flower of Carthamus tinctorius. According to the theories of traditional Chinese medicine, safflower promotes blood circulation and removes blood stasis [1]. Modern pharmacological research and clinical evidence suggest that safflower is a promising agent for ameliorating myocardial ischemia, trauma, and joint pain [2]. In China, safflower decoction is a traditional preparation, while safflower injection is regarded as a “product of herbal modernization” [3]. A recent article reviewed 956 papers regarding the use of safflower injection in the treatment of a variety of diseases such as cerebral infarction, transient ischemic attack, and chronic glomerulonephritis [4].

The effects of safflower injection have been pharmacologically and clinically shown to be related to its antioxidant and anti-inflammatory activities [5–7]. The protective effect of safflower injection against isoprenaline-induced acute myocardial ischemia in rats is likely related to a decreased inflammatory response mediated by tumor necrosis factor alpha (TNF-α) and interleukin- (IL-) 6 in the heart tissue [5]. Clinical research has shown that safflower injection can be used to treat acute lung injury by decreasing TNF-α and IL-8 levels measured in patients' serum [6]. Another clinical study found that serum levels of IL-6 and IL-10 were significantly elevated in patients with acute cerebral infarction (ACI) and that safflower injection exerted certain neuroprotective effects in ACI patients by suppressing IL-6 and IL-10 expression [7].

Safflower injection has been widely used in China, and the process for preparing it starts from the traditional decoction [7]. We were interested in how the preparation process could influence its antioxidant and anti-inflammatory activities. The process for preparing a safflower injection includes a water decoction step followed by alcohol precipitation, according to the current national standard for injections, “WS3-B-3825-98-2012” (hereinafter referred to as WS3-2012) [8]. Our preliminary work showed that the safflower extract obtained according to WS3-2012 had an antioxidant effect that was associated with the activity of inhibiting nitric oxide (NO) production in lipopolysaccharide- (LPS-) activated RAW264.7 cells [9]. In this paper, five samples obtained during the process were compared in terms of antioxidant activity by the oxygen radical absorbance capacity (ORAC) method and the 1,1-diphenyl-2-trinitrophenylhydrazine (DPPH) radical scavenging method. NO production, IL-1β content, and inducible nitric oxide synthase (iNOS) and IL-1β protein and mRNA expression in LPS-activated RAW264.7 macrophages were further measured after treatment with the first water decoction sample and the final safflower injection sample.
## 2. Methods
### 2.1. Preparation of Samples
Safflower (Carthamus tinctorius) was produced in Xinjiang province and met the standard in the Chinese Pharmacopoeia, 2015 [10]. The safflower injection was manufactured by Shanxi Huawei Pharmaceutical Co. Ltd. according to WS3-2012 [8], as shown in Figure 1.

Figure 1
Flowchart of the process for producing safflower injection and the five samples obtained in this research [8]. ∗Safflower: the 20 kg dried herb. #Supernatant 1: the water decoction; 20 ml of it was obtained as sample 1. $20 ml of each of the extracted supernatants 2, 3, and 4 was obtained as sample 2, sample 3, and sample 4, respectively. &Supernatant 5: the 40000 ml safflower injection; 20 ml was sampled as sample 5. ^The filtrate was concentrated to a relative density of 1.10–1.14 for supernatant 1, 1.16–1.20 for supernatant 2, and 1.02–1.04 for supernatant 3.

20 ml of each of the five extracted supernatants shown in Figure 1 was labeled as sample 1 (traditional water decoction), sample 2, sample 3, sample 4, and sample 5 (safflower injection product). 10 ml of each sample was accurately pipetted into a container and dried in vacuo to a constant weight. All liquid and dried samples were stored at 0-4°C until use. The five liquid samples were subjected to high-performance liquid chromatography (HPLC) profiling, and the dried samples were used to determine the antioxidant activity by the ORAC and DPPH methods and in the in vitro cell assays.
### 2.2. HPLC Profiling of the Five Samples and Content Analysis of Hydroxysafflor Yellow A (HSYA) in Sample 1 and Sample 5 [8, 10]
In HPLC profiling, the octadecylsilane-bonded silica column used was a Gemini C18 (250×4.6 mm, Phenomenex, Torrance, CA, USA) at a column temperature of 25°C. Gradient elution was carried out with acetonitrile as mobile phase A and aqueous trifluoroacetic acid (0.05%) as mobile phase B. The detection wavelength was 223 nm. 10 μl of the HSYA (96.5%, China National Institutes for Food and Drug Control, Beijing) control solution and of each sample solution was injected into the column and run for 70 min [8]. The contents of HSYA in sample 1 and sample 5 were measured with reference to the Chinese Pharmacopoeia, 2015 [10].
### 2.3. Determination of the Antioxidant Activities of the Five Samples by the ORAC Method [11]
#### 2.3.1. Preparation of the Standard Curve
6-Hydroxy-2,5,7,8-tetramethylchroman-2-carboxylic acid (Trolox, 97.0%, Aldrich Corporation, USA), a water-soluble analog of vitamin E, was used as the standard. Firstly, 10 μl of 75 nM 3′,6′-dihydroxy-spiro[isobenzofuran-1[3H],9′[9H]-xanthen]-3-one, also known as fluorescein disodium (FL) (95%, Aldrich Corporation, USA), was added to each well. Then, 20 μl of Trolox at concentrations of 6.25, 12.5, 25, and 50 μM was added in triplicate. Finally, 170 μl of 17 mM 2,2′-azobis(2-amidinopropane) dihydrochloride (AAPH) (≥98.0%, Wako Pure Chemical Corporation, USA) was added to each well, and the fluorescence change was recorded dynamically on a Wallac Victor 3 fully automated quantitative mapping microplate reader (PerkinElmer, USA) every 1 min for 35 min at 37°C. Trolox was diluted with deionized water, and FL and AAPH were diluted with 75 mM phosphate buffer solution (PBS) (in-house). 20 μl of deionized water was included as a solvent control. The fluorescence-time curve was plotted using the Workout program, and the area under the curve was calculated. The following standard curve equation for Trolox was obtained with the area under the curve as the ordinate and the Trolox concentration as the abscissa: y=1.0259x+0.0960, r=0.9959.
#### 2.3.2. Determination of the ORAC Value
(1) Positive Control Group. 20 μl of curcumin (>95%, China National Institutes for Food and Drug Control, Beijing, China) was incubated with 10 μl of 75 nM FL and 170 μl of 17 mM AAPH in a total volume of 200 μl. The tested concentrations of curcumin were 1, 2, 4, and 8 μM in triplicate.

(2) Five Safflower Samples. Briefly, 20 μl of safflower samples at 25, 50, 100, and 200 μg/ml and 20 μl of HSYA samples at 12.5, 25, 50, and 100 μM were tested. FL and AAPH were added following the same steps as for curcumin.

(3) Solvent Control. A DMSO solvent control for curcumin and a deionized water control for the five samples and HSYA were included.

The ORAC values (in μmol·TE/g) of the positive control curcumin, the safflower samples, and HSYA were calculated from the linear equation of the Trolox standard.
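To make that calculation concrete, here is a minimal sketch under our own assumptions (the paper does not publish code, and we assume the standard curve was built from net area-under-curve values in the same way):

```python
import numpy as np

def _trapezoid(y, x):
    """Trapezoidal area under y(x) (kept manual for NumPy-version safety)."""
    y = np.asarray(y, float)
    x = np.asarray(x, float)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

def trolox_equivalents_uM(times_min, f_sample, f_solvent):
    """Sketch of the ORAC read-out in Section 2.3 (illustrative assumptions).

    f_sample / f_solvent: fluorescence readings of a sample well and of the
    solvent-control well over 35 min. The net AUC (sample minus solvent
    control) is converted with the paper's Trolox standard curve
    y = 1.0259x + 0.0960 (y: AUC, x: Trolox concentration in uM).
    Converting to umol TE/g additionally needs the dried-sample mass,
    which we omit here.
    """
    net_auc = _trapezoid(f_sample, times_min) - _trapezoid(f_solvent, times_min)
    return (net_auc - 0.0960) / 1.0259  # invert y = 1.0259x + 0.0960

# Toy usage with fabricated decay curves (one reading per minute, 35 min):
t = np.arange(35)
solvent = np.exp(-0.25 * t)   # fast fluorescein decay without antioxidant
sample = np.exp(-0.08 * t)    # slower decay with antioxidant present
print(f"Trolox equivalents: {trolox_equivalents_uM(t, sample, solvent):.2f} uM")
```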
### 2.4. Determination of the Antioxidant Activities of the Five Samples by the DPPH Method [12]
#### 2.4.1. Preparation of the DPPH Standard Curve
0.5, 1.0, 2.0, 3.0, 4.0, and 5.0 ml of a 50 μg/ml solution of DPPH (>97.0%, Tokyo Chemical Industry Corporation, Tokyo, Japan) in 95% ethanol were accurately pipetted into 5 ml volumetric flasks, to which ethanol was added to the final volume. The mixtures were shaken well, and the A values were measured at 517 nm. The following standard curve equation for DPPH was obtained with the A value as the ordinate and the concentration as the abscissa: y=29.1170x+0.0354, r=0.9999.
#### 2.4.2. Determination of Parameters of the Samples
(1) DPPH-Negative Control. 1.0 ml of 95% ethanol was added to 2.0 ml of a 50 μg/ml DPPH solution and mixed well. After the mixture was set aside for 30 min in a 28°C water bath, the A value at 517 nm was measured as AD.

(2) Positive Control. 0.5 ml of the tested curcumin solutions at 1.56, 3.13, 6.25, 12.5, 25, and 50 μg/ml was thoroughly mixed with 0.5 ml of 95% ethanol, and then 2.0 ml of the 50 μg/ml DPPH solution was added to the mixture.

(3) Five Safflower Samples. 0.5 ml of the tested safflower samples at 15.6, 31.3, 62.5, 125, 250, and 500 μg/ml was mixed with 0.5 ml of 95% ethanol; the other steps were the same as those for the positive control.

(4) HSYA Sample. 0.5 ml of the tested HSYA at 3.2, 6.3, 12.5, 25, 50, and 100 μg/ml was mixed with 0.5 ml of 95% ethanol; the other steps were the same as those for the positive control. All the A values of curcumin, the five safflower samples, and HSYA were recorded as AT.

(5) Blank. The A value of 3.0 ml of 95% ethanol was measured as AB.

(6) Solvent Control. 0.5 ml of DMSO (the solvent for curcumin) or deionized water (the solvent for the safflower samples and HSYA) was mixed with 2.5 ml of 95% ethanol, and the A values were measured as AS.

The DPPH scavenging rate of the samples at different concentrations was calculated according to the following equation:

(1) scavenging rate (%) = [1 − (AT − AS)/(AD − AB)] × 100.

The half inhibitory concentration (IC50) for DPPH scavenging of a sample, i.e., the concentration of the sample solution at which the DPPH radical scavenging rate is 50%, was then calculated.
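As an illustrative sketch only (the helper names are our own, and the paper does not state its IC50 fitting method), equation (1) and the IC50 read-out can be implemented directly; here the IC50 is obtained by linear interpolation between the two tested concentrations bracketing 50% scavenging:

```python
import numpy as np

def dpph_scavenging_pct(a_t, a_s, a_d, a_b):
    """Equation (1): scavenging (%) = [1 - (AT - AS)/(AD - AB)] * 100."""
    return (1.0 - (a_t - a_s) / (a_d - a_b)) * 100.0

def ic50_by_interpolation(concs_ug_ml, scavenging_pct):
    """Estimate IC50 as the concentration where scavenging crosses 50%.

    Linear interpolation between bracketing points is one common convention;
    the paper does not state which fitting method was used.
    """
    c = np.asarray(concs_ug_ml, float)
    s = np.asarray(scavenging_pct, float)
    order = np.argsort(c)  # np.interp needs the x-coordinates increasing
    return float(np.interp(50.0, s[order], c[order]))

# Toy example with fabricated absorbances at the tested concentrations:
concs = [15.6, 31.3, 62.5, 125, 250, 500]    # ug/ml, as in Section 2.4.2
a_t = [0.62, 0.55, 0.44, 0.30, 0.18, 0.10]   # hypothetical sample A values
pct = [dpph_scavenging_pct(a, 0.02, 0.70, 0.01) for a in a_t]
print(f"IC50 ~ {ic50_by_interpolation(concs, pct):.1f} ug/ml")
```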
### 2.5. Cell Culture
RAW264.7 cells, a mouse macrophage cell line, were purchased from the Shanghai Cell Institute (Shanghai, China) and cultured in colorless Dulbecco’s modified Eagle’s medium (DMEM) supplemented with heat-inactivated fetal bovine serum (10%), D-glucose (3.5 mg/ml), sodium pyruvate (100 mM), L-glutamine (2 mM), penicillin (100 U/ml), streptomycin (100 μg/ml), and amphotericin B (250 μg/ml) at 37°C in a 5% CO2 incubator.
### 2.6. Determination of NO and IL-1β Levels in LPS-Activated RAW264.7 Cells in the Presence of Sample 1 and Sample 5
NO and IL-1β levels were determined in RAW264.7 cells (98 μl, plated at 1×10^6 cells/ml). The samples (1 μl each) were added to the cells, which were then stimulated with LPS (1 μl, 0.5 μg/ml, Wako Chemicals USA Inc., Richmond, VA, USA) after 2 h. Nitrite, a stable end product of NO metabolism, was measured using the Griess reaction [13] after another 22 hours, and IL-1β was measured using an ELISA kit commercially available from Wuhan Boster Biological Technology (Wuhan, China). All samples and controls were assayed in sextuplicate.
### 2.7. Real-Time Quantitative PCR of iNOS and IL-1β in the Presence of Sample 1 and Sample 5 [14, 15]
Total RNAs were extracted from solvent-treated RAW264.7 cells, LPS-activated cells, and sample-treated LPS-activated cells with TRIzol Reagent (Ambion, USA). Equal amounts (1 μg) of RNA were reverse transcribed using a high-capacity RNA-to-cDNA PCR kit (Takara, Beijing, China). Mouse PCR primer sets for iNOS and IL-1β were obtained from SABiosciences (Germantown, MD). The Power SYBR Green PCR Master Mix (Applied Biosystems) was used with the StepOnePlus real-time PCR system (Applied Biosystems). The protocol included denaturing for 15 min at 95°C and 40 cycles of three-step PCR (denaturing for 15 sec at 95°C, annealing for 30 sec at 58°C, and extension for 30 sec at 72°C, with an additional 15-second detection step at 81°C), followed by a melting profile from 55°C to 95°C at a rate of 0.5°C per 10 sec. Samples of 25 ng cDNA were analyzed in quadruplicate in parallel with RPLP1/3 controls. Standard curves (threshold cycle vs. log2 pg cDNA) were generated from a series of log dilutions of standard cDNA (reverse transcribed from mRNA of RAW264.7 cells in growth media) from 0.1 pg to 100 ng. Initial quantities of experimental mRNA were then calculated from the standard curves and averaged using the SABiosciences software. The ratio of the experimental marker gene (iNOS or IL-1β) to RPLP1/3 mRNA was calculated and normalized to the solvent control.
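The relative-quantification arithmetic in this section (standard-curve quantities, ratio to the RPLP1/3 control, normalization to the solvent control) can be made concrete with a short sketch. The function names and all numbers below are ours; the authors used the SABiosciences software rather than custom code:

```python
import numpy as np

def quantity_from_standard_curve(ct, slope, intercept):
    """Convert threshold cycles to starting quantities (pg) via a standard curve.

    Assumes the curve form Ct = slope * log2(quantity) + intercept, matching
    the "threshold cycle vs. log2 pg cDNA" curve described in the text; the
    exact parameterization used by the authors is not published.
    """
    ct = np.asarray(ct, float)
    return 2.0 ** ((ct - intercept) / slope)

def normalized_expression(ct_marker, ct_rplp, slope, intercept, solvent_ratio=1.0):
    """Ratio of marker (iNOS or IL-1beta) to RPLP1/3, scaled to the solvent control."""
    marker_q = quantity_from_standard_curve(ct_marker, slope, intercept)
    rplp_q = quantity_from_standard_curve(ct_rplp, slope, intercept)
    ratio = marker_q.mean() / rplp_q.mean()  # average the replicate wells first
    return ratio / solvent_ratio             # fold change vs. the solvent control

# Toy usage with fabricated Ct values (quadruplicate wells, as in the text);
# slope = -1 corresponds to a perfectly efficient reaction on a log2 scale.
slope, intercept = -1.0, 35.0
solvent = normalized_expression([28.1, 28.0, 28.2, 28.1],
                                [22.0, 22.1, 21.9, 22.0], slope, intercept)
lps = normalized_expression([26.8, 26.9, 26.7, 26.8],
                            [22.0, 22.1, 22.0, 21.9], slope, intercept, solvent)
print(f"LPS fold change vs. solvent control: {lps:.2f}")
```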
### 2.8. Western Blotting of iNOS and IL-1β in the Presence of Sample 1 and Sample 5 [16]
The treated cells were removed from the culture media and extracted with RIPA lysis buffer from Beyotime Biotech (Jiangsu, China) for 30 min. Supernatants were collected after the tubes were centrifuged at 10000 g for 40 min at 4°C. The protein concentrations were determined using a BCA Protein Assay Kit from Wuhan Boster Biological Technology (Wuhan, China). Samples containing 50 μg of protein were resolved by 12% SDS-PAGE and transferred to nitrocellulose membranes (Whatman International Ltd., Maidstone, UK). Nonspecific binding was blocked by immersing the membranes in 5% nonfat dried milk and 0.1% (v/v) Tween 20 in PBS for 3 h at room temperature. After rinsing with a washing buffer (0.1% Tween 20 in PBS) several times, the membranes were incubated with a primary antibody against iNOS at 1:1000 dilution (catalog no. ab49999, Abcam) or an antibody against IL-1β at 1:1000 dilution (catalog no. ab150777, Abcam) overnight at 4°C. The membranes were washed several times, then incubated with a corresponding anti-mouse secondary IgG antibody conjugated to HRP (Cell Signaling Technology, Danvers, MA) at room temperature for 3 h, and analyzed by the Quantity One analysis system (Bio-Rad, Hercules, CA, USA). GAPDH at a dilution of 1:2000 (catalog no. ab9483, Abcam) was used as an internal loading control.
### 2.9. Statistical Analysis
The SPSS 19.0 software (IBM, Armonk, NY, US) was used for statistical analysis. All the data were expressed as mean ± standard error of the mean. For continuous variables, comparisons among groups were conducted by one-way analysis of variance followed by Dunnett’s multiple comparisons test. All the p values reported were two-tailed, and p<0.05 was set as the level of significance.
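The same pipeline can be reproduced outside SPSS. Below is a minimal sketch using SciPy (scipy.stats.dunnett requires SciPy ≥ 1.11); the group data are fabricated placeholders, not values from the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Fabricated placeholder data: a control group and two treatment groups.
control = rng.normal(26.8, 0.5, size=6)    # e.g., LPS-control NO values
treat_100 = rng.normal(18.0, 0.5, size=6)  # e.g., a sample at 100 ug/ml
treat_200 = rng.normal(12.0, 0.5, size=6)  # e.g., a sample at 200 ug/ml

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(control, treat_100, treat_200)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Dunnett's test: each treatment vs. the control (two-sided, as in the paper).
res = stats.dunnett(treat_100, treat_200, control=control)
print("Dunnett p-values vs. control:", np.round(res.pvalue, 4))
```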
## 3. Results
### 3.1. The HSYA Contents in Sample 1 and Sample 5 and HPLC Profiling Results of the Five Samples
According to WS3-2012, the content of HSYA should be no less than 0.10 mg/ml [8]. Sample 5 obtained in the present study contained 0.20±0.01 mg/ml of HSYA (n=3), which met the requirements of WS3-2012 and is equivalent to 11.2±0.2 mg of HSYA per 1 g of extract. The content of HSYA was also measured in sample 1; the result was 43.3±0.8 mg of HSYA per 1 g of extract.

In addition, WS3-2012 specifies 11 characteristic peaks in the HPLC profile of the injection, of which peak 9 represents HSYA (Figure 2, sample 5). The theoretical number of column plates should be no less than 6000 as calculated from the HSYA peak, and the similarity of the 11 peaks between the profile of sample 5 and the reference fingerprint should be no less than 0.85 (Figure 2) [8]. The above HPLC indices of sample 5 all met the WS3-2012 requirements. Figure 2 shows that the 11 characteristic peaks were also present in sample 1.

Figure 2
High-performance liquid chromatography profiles of the reference and the five samples. Sample 1: the safflower decoction; sample 5: the safflower injection. The column temperature was 25°C. Gradient elution was carried out with acetonitrile as mobile phase A and aqueous trifluoroacetic acid as mobile phase B. The detection wavelength was 223 nm. 10 μl of Hydroxysafflor yellow A (HSYA) was the control solution, and the run time was 70 min. According to the reference fingerprint, sample 5 showed 11 characteristic absorption peaks, and peak 9 was confirmed as HSYA [8].
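For orientation, the two ways of reporting the HSYA content of sample 5 are mutually consistent if one back-calculates the dried-extract solids per millilitre of injection. This is our inference from the reported figures, not a value stated in the paper:

$$
\frac{0.20\ \text{mg HSYA/ml}}{11.2\ \text{mg HSYA/g extract}} \approx 17.9\ \text{mg of extract solids per ml of injection}.
$$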
### 3.2. The ORAC Values of the Five Samples and HSYA
Sample 1 was prepared by extraction with water. Figure 3(a) showed that, following several steps of alcohol precipitation and water precipitation, the ORAC value of sample 5 was significantly higher than that of sample 1 (1160±146 μmol·TE/g vs. 650±61 μmol·TE/g; p=0.001) and also higher than those of the other three samples (p<0.05). As an important compound in safflower, HSYA was found to have a significantly higher ORAC value (1702±109 μmol·TE/g) than sample 5 (p=0.001). As the positive control in this study, curcumin exhibited the highest ORAC value (2307±66 μmol·TE/g), which was significantly different from that of sample 5 (p<0.001).

Figure 3
(a) The ORAC values of the samples and HSYA. Sample 1: the safflower decoction; sample 5: the safflower injection. Values are expressed as mean ± standard error (n=6). #p<0.05 versus sample 5. Curcumin was the positive control. ORAC: oxygen radical absorbance capacity; HSYA: Hydroxysafflor yellow A. (b) The IC50 values for DPPH scavenging of the samples and HSYA. Sample 1: the safflower decoction; sample 5: the safflower injection. Values are expressed as mean ± standard error of the mean (n=6). #p<0.05 versus sample 5. Curcumin was the positive control. DPPH: 1,1-diphenyl-2-trinitrophenylhydrazine; IC50: half inhibitory concentration; HSYA: Hydroxysafflor yellow A.
### 3.3. The IC50 Value for DPPH Scavenging of the Five Samples and HSYA
Figure 3(b) showed that curcumin, a reported antioxidant [17], exhibited the lowest IC50 value (5.7±1.1 μg/ml), which was most significantly different from that of sample 5 (p<0.001). Among the five samples, sample 5 had the lowest IC50 value and sample 1 the highest (56.7±7.2 μg/ml vs. 197.6±18.1 μg/ml, p<0.001). The IC50 value of HSYA was 23.2±3.4 μg/ml, further confirming its DPPH scavenging activity [12].
### 3.4. Effects of Sample 1 and Sample 5 on NO and IL-1β Contents in LPS-Activated RAW264.7 Cells
NO production increased significantly after LPS stimulation as compared to the solvent control (26.8±0.3 μM vs. 6.6±0.1 μM, p<0.001). Likewise, the IL-1β level in the LPS control increased significantly as compared to the solvent control (69.4±5.6 pg/ml vs. 14.7±0.3 pg/ml, p=0.003).

RAW264.7 cells treated with sample 1 and sample 5 at 50, 100, and 200 μg/ml exhibited significantly lower LPS-stimulated NO production than the LPS control (p<0.05). Sample 5 showed a significantly stronger inhibitory effect than sample 1 at 100 μg/ml (p=0.020) and at 200 μg/ml (p<0.001).

Sample 1 at 50 μg/ml did not exhibit a statistically significant inhibitory effect on LPS-stimulated IL-1β production (p=0.081 vs. the LPS control). As compared to sample 1, sample 5 showed a significant inhibitory effect on IL-1β production at 100 μg/ml (p=0.006) and at 200 μg/ml (p=0.007) (Figures 4(a) and 5(a)).

Figure 4
(a) Effects of sample 1 and sample 5 on nitric oxide production in 0.5 μg/ml LPS-activated RAW264.7 cells. (b) Effects of sample 1 and sample 5 on the mRNA of iNOS in 0.5 μg/ml LPS-activated RAW264.7 cells by real-time quantitative PCR. (c) Effects of sample 1 and sample 5 on the expression of iNOS in 0.5 μg/ml LPS-activated RAW264.7 cells by Western blotting analysis. Sample 1: the safflower decoction; sample 5: the safflower injection. Values are expressed as mean ± standard error of the mean (n=3). #p<0.05 versus LPS control. ∗p<0.05 sample 1 versus sample 5 at 100 μg/ml. ∗∗p<0.05 sample 1 versus sample 5 at 200 μg/ml. iNOS: inducible nitric oxide synthase; LPS: lipopolysaccharide.
Figure 5
(a) Effects of sample 1 and sample 5 on IL-1β in 0.5 μg/ml LPS-activated RAW264.7 cells. (b) Effects of sample 1 and sample 5 on the mRNA of IL-1β in 0.5 μg/ml LPS-activated RAW264.7 cells by real-time quantitative PCR. (c) Effects of sample 1 and sample 5 on the expression of IL-1β in 0.5 μg/ml LPS-activated RAW264.7 cells by Western blotting analysis. Sample 1: the safflower decoction; sample 5: the safflower injection. Values are expressed as mean ± standard error of the mean (n=3). #p<0.05 versus LPS control. ∗p<0.05 sample 1 versus sample 5 at 100 μg/ml. ∗∗p<0.05 sample 1 versus sample 5 at 200 μg/ml. LPS: lipopolysaccharide.
### 3.5. Effects of Sample 1 and Sample 5 on iNOS and IL-1β mRNAs in LPS-Activated RAW264.7 Cells
When compared to the solvent control, iNOS mRNA expression increased by approximately 2.42±0.19-fold and IL-1β mRNA expression by approximately 1.86±0.08-fold in LPS-activated cells.

When compared to the LPS control, iNOS mRNA expression was significantly downregulated by sample 5 at 100 μg/ml (p=0.040) and 200 μg/ml (p=0.019) and by sample 1 at 100 and 200 μg/ml (both p<0.01). A significant inhibitory effect on IL-1β mRNA was also observed with sample 5 at 100 μg/ml and 200 μg/ml and with sample 1 at 200 μg/ml (p<0.05).

Compared to sample 1 at 100 μg/ml, sample 5 at the same concentration significantly decreased iNOS mRNA and IL-1β mRNA levels (p=0.013 and p=0.009). Sample 5 at 200 μg/ml also exhibited a significant inhibitory effect on IL-1β mRNA expression as compared to sample 1 (p=0.011) (Figures 4(b) and 5(b)).
### 3.6. Effects of Sample 1 and Sample 5 on iNOS and IL-1β Protein Expressions in LPS-Activated RAW264.7 Cells
Western blotting was used to determine the effects of the decoction sample and the injection sample on iNOS and IL-1β protein expression. Figures 4(c) and 5(c) showed that iNOS and IL-1β protein expression increased in LPS-activated RAW264.7 cells.

Compared to the LPS control, both sample 1 and sample 5 significantly suppressed iNOS expression at all three tested concentrations (p<0.05). Sample 5 suppressed IL-1β protein expression at 50, 100, and 200 μg/ml, while sample 1 exhibited a significant suppressive effect on IL-1β protein expression at 100 and 200 μg/ml (all: p<0.05).

Sample 1 and sample 5 differed significantly in their ability to decrease iNOS protein expression at both 100 μg/ml and 200 μg/ml (p<0.05). A significant difference in the downregulation of IL-1β protein expression was also observed between the two treatment groups at 100 μg/ml (p=0.010) and at 200 μg/ml (p=0.002).
## 4. Discussion
Safflower is well known for its antioxidant effects and has been widely used to treat conditions including musculoskeletal injuries and cardiocerebrovascular diseases [2, 4, 18]. One paper revealed that more than 100 herbal items have been used as topical agents in the treatment of musculoskeletal injuries; to verify the efficacies of these herbs, a comprehensive study was proposed in which five herbs, including safflower, were selected as suitable candidates for further study. The clinical data from the pilot studies confirmed that the effects of safflower were related to its proven antioxidant activities [18].

Traditionally, safflower is used clinically as a water decoction. As a modern preparation [3], the safflower injection prepared by Shanxi Provincial People’s Hospital was initially used to treat coronary diseases and cerebral thrombosis in 1973-1974 [19, 20]. Since then, safflower injection has been widely used in the treatment of cardiocerebrovascular diseases [4]. The injection has been studied more extensively than the decoction with respect to adverse reactions and the correlation between antioxidant activity and active contents [3, 21]. A study entitled “New technology for quality control of traditional Chinese medicine based on active ingredients and its application in safflower injection” was awarded a national prize in 2015. Such efforts have advanced the strategies for quality control and promoted the establishment of a standard system for safflower injection, along with the development of relevant industries [22].

Our previous study demonstrated the antioxidant activities of safflower extracts [23]. Another study conducted by our laboratory also showed that safflower injection could decrease NO production in LPS-stimulated RAW264.7 cells [9]. According to WS3-2012, the process for preparing a safflower injection begins with a safflower decoction. In this study, 20 kg of safflower was decocted in water three times in the traditional manner (1 h the first time, 50 min the second time, and 30 min the third time) and then subjected to alcohol precipitation twice with ethanol recovery [8]. We were interested in how these process steps could influence the antioxidant activities of the safflower decoction and the safflower injection, so five samples obtained from the preparation process (Figure 1) were tested for their antioxidant activities. In recent years, a variety of analytical methods have been used to evaluate the in vitro antioxidant capacity of safflower, among which the DPPH method and the ORAC assay are widely used [11, 12].

As the positive control, curcumin displayed the highest ORAC value and the lowest IC50 value for DPPH scavenging activity in this study. The anti-inflammatory effect of curcumin is most likely exerted through its ability to inhibit cyclooxygenase-2, lipoxygenase, and iNOS [24]. Our previous research also showed that curcumin, as a positive control, decreased the level of nitrite in LPS-activated macrophages [9]. A review elucidated that most chronic diseases are closely related to chronic inflammation and oxidative stress and that the antioxidant properties of curcumin can play a key role in the prevention and treatment of chronic inflammatory diseases [17].

Our results showed that sample 5 exhibited antioxidant activity significantly different from that of the other tested samples, in particular sample 1, as measured by the ORAC and DPPH methods.
The IC50 values for DPPH scavenging activity of the five samples were also measured, and similar results were observed, as shown in Figure 3(b).

HSYA showed a higher ORAC value and stronger DPPH scavenging activity than sample 5 (Figures 3(a) and 3(b)). As a main compound in safflower [8, 12], HSYA has shown antioxidant activities both in vivo and in vitro. Studies identifying HSYA in the brain tissues of rats suggested that HSYA, which increased the activities of superoxide dismutase and catalase, can potentially be used as a neuroprotective agent for traumatic brain injury [25]. Carthamus yellow, which is composed of safflomin A and safflomin B, provided an anti-inflammatory response by inhibiting the production of NO through downregulation of iNOS gene expression in LPS-induced macrophages [26]. HSYA also exerted a protective effect against LPS-induced neurotoxicity in dopaminergic neurons through a mechanism that may be associated with the inhibition of IL-1β, TNF-α, and NO [27].

It is interesting to observe the inconsistency between the HSYA contents and the antioxidant activities of the samples. The content of HSYA in sample 1 was higher than that in sample 5, yet the antioxidant activity of sample 1 was significantly lower than that of sample 5, as measured by the DPPH and ORAC methods. This raises the first question: what results would be obtained if other methods were used?

The effects of safflower extracts on the LPS-induced expression of proinflammatory mediators, such as iNOS, IL-1β, the nuclear receptor NF-κB, and cyclooxygenase-2 (COX-2), have been evaluated recently [28, 29]. One study showed that methanol extracts of safflower (MES) reduced inflammation by suppressing iNOS and COX-2 expressions in LPS-activated cells; the binding to NF-κB and NF-κB luciferase activity were also significantly diminished by MES [28]. The hepatoprotective effects and mechanisms of an extract of Salvia miltiorrhiza and safflower (DHI) were investigated in C57BL/6J mice, and Western blotting revealed that DHI inhibited LPS-induced phosphorylation of IκBα and NF-κB p65 [29].

To provide further insight into the anti-inflammatory effect of safflower, LPS-activated RAW264.7 macrophages were used to investigate the effects of sample 1 and sample 5 on the mRNA and protein expressions of iNOS and IL-1β. The results showed that iNOS and IL-1β expressions in the LPS-stimulated group were significantly higher than those in the solvent control group. Compared to the LPS control, both sample 1 and sample 5 significantly suppressed iNOS and IL-1β expressions at different concentrations. Further comparison of the samples showed that sample 5 exhibited a significantly stronger inhibitory effect than sample 1 on both the protein and mRNA expressions of iNOS and IL-1β.

The above results were in accordance with those obtained by the ORAC and DPPH methods, confirming that the current standard process for preparing a safflower injection ensures a higher antioxidant activity in the final product than in the first water decoction. However, this raises the second question: given that there were more HPLC peaks and a higher HSYA content in sample 1 than in sample 5 (Figure 2), why did sample 5 possess higher antioxidant activity than sample 1?

Content determination is an important means of evaluating product quality and explaining pharmacological results. Only the active ingredients in safflower, such as safflower yellow, HSYA, kaempferol, and quercetin, can exert positive roles [30, 31].
We hypothesize that some interfering substances were removed during the preparation process while the antioxidant compounds were retained. A study involving the depletion of some active ingredients supports our hypothesis by showing that several main components, such as HSYA, dehydrated safflower yellow B, and 6-hydroxykaempferol-3,6-di-O-glucoside-7-O-glucuronide, not only play a direct antioxidant role but also act synergistically [32]. It will be of particular interest to further study synergistic combinations of the compounds present in safflower injection.

It is also of interest to identify promising candidates in safflower injection that could be used in future immunotherapeutic strategies. Research on the compounds in safflower injection has already yielded positive results. Recently, three active constituents in safflower injection, i.e., HSYA, syringoside, and (8Z)-decaene-4,6-diyne-1-O-β-D-glucopyranoside, were identified by HPLC [33]. The contents of uridine, guanosine, and adenosine in the injection were also determined by HPLC; nucleosides such as uridine have an effect against platelet aggregation [34]. Sixteen compounds were isolated from safflower injection, including (1) scutellarin, (2) kaempferol-3-O-β-rutinoside, (3) HSYA, (4) rutin, (5) coumalic acid, (6) adenosine, (7) syringoside, (8) (3E)-4-(4′-hydroxyphenyl)-3-buten-2-one, (9) (8Z)-decaene-4,6-diyne-1-O-β-D-glucopyranoside, (10) 4-hydroxybenzaldehyde, (11) (2E,8E)-tetradecadiene-4,6-diyne-1,12,14-triol-1-O-β-D-glucopyranoside, (12) kaempferol-3-O-β-sophorose, (13) uridine, (14) roseoside, (15) cinnamic acid, and (16) kaempferol. Compounds 1, 2, 7, 9, 11, and 12 were isolated from safflower injection for the first time. The results indicated that all the tested compounds except compound 5 exhibited potent antioxidant and anti-inflammatory activities, while compounds 2, 3, 9, and 12 showed strong activities against platelet aggregation [35].

All the efforts above help us understand the interactions of multiple components. The change in the proportions of active ingredients caused by the extraction process and the possibility of synergistic antioxidant activity need to be studied further. It is also necessary to identify all the peaks in Figure 2 and observe how they change during the extraction process.

In summary, the present study provides, for the first time, in vitro evidence that the “modern” safflower injection significantly suppresses both the mRNA and protein expressions of iNOS and IL-1β in LPS-activated RAW264.7 cells as compared to the traditional water decoction. The compounds in safflower injection need to be identified before further in vivo studies on the molecular mechanism are conducted.
---
*Source: 1018274-2019-05-06.xml*

# Comparison of Inhibitory Effects of Safflower Decoction and Safflower Injection on Protein and mRNA Expressions of iNOS and IL-1β in LPS-Activated RAW264.7 Cells

**Authors:** Hui Liao; Yuanping Li; Xiaoru Zhai; Bin Zheng; Linda Banbury; Xiaoyun Zhao; Rongshan Li

**Journal:** Journal of Immunology Research
(2019)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2019/1018274

---
## Abstract
Objective. Safflower has antioxidant and anti-inflammatory activities. The two forms of preparations for safflower which are widely used in China are injection and decoction. The first step of the process for preparing an injection involves extracting safflower with water, which actually yields a decoction. This study is intended to investigate how the preparation process influences the anti-inflammatory activity of safflower in vitro. Methods. Five samples, including a decoction (sample 1) and an injection (sample 5) of safflower, were prepared according to the national standard WS3-B-3825-98-2012 and were analyzed by the oxygen radical absorbance capacity (ORAC) method and the 1,1-diphenyl-2-trinitrophenylhydrazine (DPPH) method for comparison. Sample 1 and sample 5 were further tested by the Griess assay and ELISA for their effects on nitric oxide (NO) production and interleukin- (IL-) 1β content in lipopolysaccharide- (LPS-) activated RAW264.7 cells. The protein and mRNA levels of inducible nitric oxide synthase (iNOS) and IL-1β were measured by Western blotting and real-time quantitative PCR. Results. Sample 5 showed a significantly higher ORAC value and a lower half inhibitory concentration (IC50) for DPPH scavenging activity as compared to the other four samples (p<0.05). LPS significantly upregulated the mRNA and protein expressions of iNOS and IL-1β as compared to the solvent control (p<0.01). As compared to sample 1, sample 5 significantly decreased NO production, iNOS protein expression, and the contents of IL-1β mRNA and IL-1β protein at both 100 μg/ml and 200 μg/ml (all: p<0.05) and significantly downregulated iNOS mRNA expression at 100 μg/ml (p<0.05). Conclusions. Results of this study demonstrate that the safflower injection prepared according to the national standard significantly suppresses the protein and mRNA expressions of iNOS and IL-1β as compared to the traditional decoction.
---
## Body
## 1. Introduction
Safflower is the tubular flower of Carthamus tinctorius. According to theories of Chinese traditional medicine, safflower has the effects of promoting blood circulation and removing blood stasis [1]. Modern pharmacological research and clinical examinations suggest that safflower is a promising agent for ameliorating myocardial ischemia, joint trauma and pain, and related conditions [2]. In China, safflower decoction is a traditional preparation, while safflower injection is regarded as a “product of herb’s modernization” [3]. A recent article reviewed 956 papers regarding the use of safflower injection in the treatment of a variety of diseases such as cerebral infarction, transient ischemic attack, and chronic glomerulonephritis [4].

The effects of safflower injection have been pharmacologically and clinically proved to be related to its antioxidant and anti-inflammatory activities [5–7]. The protective effect of safflower injection against isoprenaline-induced acute myocardial ischemia in rats is likely to be related to a decreased inflammatory response mediated by tumor necrosis factor alpha (TNF-α) and interleukin- (IL-) 6 in the heart tissue [5]. Some clinical studies showed that safflower injection could be used to treat acute lung injury by decreasing TNF-α and IL-8 levels as measured in patients’ serum [6]. Another clinical study found that the serum levels of IL-6 and IL-10 were significantly elevated in patients with acute cerebral infarction (ACI) and that safflower injection exerted certain neuroprotective effects in ACI patients by suppressing IL-6 and IL-10 expressions [7].

Safflower injection has been widely used in China, and the process for preparing a safflower injection starts from the traditional decoction [7]. We were interested in how the process for preparing a safflower injection could influence its antioxidant and anti-inflammatory activities. The process for preparing a safflower injection includes the step of water decoction followed by alcohol precipitation according to the current national standard for injections, “WS3-B-3825-98-2012” (hereinafter referred to as WS3-2012) [8]. Our preliminary work showed that the safflower extract obtained according to WS3-2012 had an antioxidant effect which was associated with the activity of inhibiting nitric oxide (NO) production in lipopolysaccharide- (LPS-) activated RAW264.7 cells [9]. In this paper, five samples obtained during the process were compared in terms of antioxidant activity by the oxygen radical absorbance capacity (ORAC) method and the 1,1-diphenyl-2-trinitrophenylhydrazine (DPPH) radical scavenging method. NO production, IL-1β content, and the protein and mRNA expressions of inducible nitric oxide synthase (iNOS) and IL-1β in LPS-activated RAW264.7 macrophages were further measured after treatment with the first water decoction sample and the final safflower injection sample.
## 2. Methods
### 2.1. Preparation of Samples
Safflower (Carthamus tinctorius) was produced in Xinjiang province and met the standard in the Chinese Pharmacopoeia, 2015 [10]. The safflower injection was manufactured by Shanxi Huawei Pharmaceutical Co. Ltd. according to WS3-2012 [8], as shown in Figure 1.
Figure 1
Flowchart of the process for producing safflower injection and the five samples obtained in the research [8]. ∗Safflower: the 20 kg dried herb. #Supernatant 1: the water decoction, of which 20 ml was obtained as sample 1. $20 ml of each of the extracted supernatants 2, 3, and 4 was obtained as sample 2, sample 3, and sample 4, respectively. &Supernatant 5: the 40000 ml safflower injection, of which 20 ml was sampled as sample 5. ^The filtrate was concentrated to a relative density of 1.10–1.14 for supernatant 1, 1.16–1.20 for supernatant 2, and 1.02–1.04 for supernatant 3.

20 ml of each of the five extracted supernatants shown in Figure 1 was labeled as sample 1 (traditional water decoction), sample 2, sample 3, sample 4, and sample 5 (safflower injection product). 10 ml of each sample was accurately pipetted into a container and dried in vacuo to a constant weight. All liquid and dried samples were stored at 0-4°C for future use. The five liquid samples were subjected to high-performance liquid chromatography (HPLC) profiling, and the dried samples were used to determine the antioxidant activity by the ORAC and DPPH methods and by in vitro cell assays.
### 2.2. HPLC Profiling of the Five Samples and Content Analysis of Hydroxysafflor Yellow A (HSYA) in Sample 1 and Sample 5 [8, 10]
In HPLC profiling, the octadecylsilane-bonded silica column used was a Gemini C18 (250 × 4.6 mm; Phenomenex, Torrance, CA, USA) at a column temperature of 25°C. Gradient elution was carried out with acetonitrile as mobile phase A and aqueous trifluoroacetic acid (0.05%) as mobile phase B. The detection wavelength was 223 nm. 10 μl of HSYA (96.5%, China National Institutes for Food and Drug Control, Beijing) control solution and 10 μl of each sample solution were injected into the liquid chromatograph and run for 70 min each [8]. The contents of HSYA in sample 1 and sample 5 were measured with reference to the Chinese Pharmacopoeia, 2015 [10].
### 2.3. Determination of the Antioxidant Activities of the Five Samples by the ORAC Method [11]
#### 2.3.1. Preparation of the Standard Curve
6-Hydroxy-2,5,7,8-tetramethylchroman-2-carboxylic acid (Trolox, 97.0%, Aldrich Corporation, USA), a water-soluble analog of vitamin E, was used as the standard. Firstly, 10 μl of 75 nM 3′,6′-dihydroxy-spiro[isobenzofuran-1[3H],9′[9H]-xanthen]-3-one, also known as fluorescein disodium (FL) (95%, Aldrich Corporation, USA), was added to each well. Then, 20 μl of Trolox at concentrations of 6.25, 12.5, 25, and 50 μM was added in triplicate. Finally, 170 μl of 17 mM 2,2′-azobis(2-amidinopropane) dihydrochloride (AAPH) (≥98.0%, Wako Pure Chemical Corporation, USA) was added to each well, and the fluorescence change was recorded dynamically on the Wallac Victor 3 fully automated quantitative mapping microplate reader (PerkinElmer, USA) every 1 min for 35 min at 37°C. Trolox was diluted with deionized water, and FL and AAPH were diluted with 75 mM phosphate buffer solution (PBS, prepared in-house). 20 μl of deionized water was included as a solvent control. The fluorescence-time curve was plotted using the WorkOut program, and the area under the curve (AUC) was calculated. The following standard curve equation for Trolox was obtained with the AUC as the ordinate and the Trolox concentration as the abscissa: y = 1.0259x + 0.0960, r = 0.9959.
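For readers reproducing this calibration, the following minimal Python sketch mirrors the two steps described above: computing a net area under the fluorescence-decay curve and fitting the linear standard curve. The AUC helper and the data points are illustrative assumptions (the synthetic net AUCs are placed exactly on the reported line); only the fitted form y = 1.0259x + 0.0960 comes from the text.

```python
import numpy as np

def net_auc(sample_reads, blank_reads):
    """Area under a fluorescence-decay curve (reads 1 min apart, normalized
    to the first read), minus the no-antioxidant (FL + AAPH only) blank."""
    trapz = lambda y: float(np.sum((y[1:] + y[:-1]) / 2.0))  # unit spacing
    s = np.asarray(sample_reads, float)
    b = np.asarray(blank_reads, float)
    return trapz(s / s[0]) - trapz(b / b[0])

# Synthetic net-AUC values for the four Trolox standards, placed on the
# reported regression line purely to illustrate the fit.
trolox_um = np.array([6.25, 12.5, 25.0, 50.0])
net_aucs = 1.0259 * trolox_um + 0.0960

slope, intercept = np.polyfit(trolox_um, net_aucs, 1)
print(f"net AUC = {slope:.4f} * [Trolox, uM] + {intercept:.4f}")
```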
#### 2.3.2. Determination of the ORAC Value
(1) Positive Control Group. 20 μl of curcumin (>95%, China National Institutes for Food and Drug Control, Beijing, China) was incubated with 10 μl of 75 nM FL and 170 μl of 17 mM AAPH in a total volume of 200 μl. The tested concentrations of curcumin were 1, 2, 4, and 8 μM in triplicate.

(2) Five Safflower Samples. Briefly, 20 μl of the safflower samples at 25, 50, 100, and 200 μg/ml and 20 μl of the HSYA samples at 12.5, 25, 50, and 100 μM were tested. FL and AAPH were added following the same steps as for curcumin.

(3) Solvent Control. A DMSO solvent control for curcumin and a deionized water control for the five samples and HSYA were included.

The ORAC values (in μmol·TE/g) of the positive control curcumin, the safflower samples, and HSYA were calculated from the linear equation of the Trolox standard.
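The paper does not spell out the conversion from a sample's net AUC to μmol·TE/g, so the sketch below is one plausible reading: invert the Trolox curve to get Trolox-equivalent micromolarity, then divide by the extract concentration in g/L. It assumes the curve's abscissa refers to pre-dilution concentrations, so the common 20 μl-into-200 μl dilution cancels between standards and samples.

```python
def orac_umol_te_per_g(net_auc_value, sample_ug_per_ml,
                       slope=1.0259, intercept=0.0960):
    """ORAC value in umol Trolox equivalents (TE) per gram of dried extract."""
    te_um = (net_auc_value - intercept) / slope  # Trolox equivalents, umol/L
    grams_per_liter = sample_ug_per_ml / 1000.0  # ug/ml is equivalent to mg/L
    return te_um / grams_per_liter

# Illustration: a 100 ug/ml well with a hypothetical net AUC of 65 scores
# about 633 umol TE/g, the same order as the decoction value in Section 3.2.
print(f"{orac_umol_te_per_g(65.0, 100.0):.0f} umol TE/g")
```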
### 2.4. Determination of the Antioxidant Activities of the Five Samples by the DPPH Method [12]
#### 2.4.1. Preparation of the DPPH Standard Curve
0.5, 1.0, 2.0, 3.0, 4.0, and 5.0 ml of a 50 μg/ml solution of DPPH (>97.0%, Tokyo Chemical Industry Corporation, Tokyo, Japan) in 95% ethanol were accurately pipetted into 5 ml volumetric flasks, to which ethanol was added to the final volume. The mixtures were shaken well, and the absorbance (A) values were measured at 517 nm. The following standard curve equation for DPPH was obtained with the A value as the ordinate and the concentration as the abscissa: y = 29.1170x + 0.0354, r = 0.9999.
#### 2.4.2. Determination of Parameters of the Samples
(1) DPPH-Negative Control. 1.0 ml of 95% ethanol was added to 2.0 ml of a 50 μg/ml DPPH solution and mixed well. After the mixture was set aside for 30 min in a 28°C water bath, the A value at 517 nm was measured as AD.

(2) Positive Control. 0.5 ml of the tested curcumin solutions at 1.56, 3.13, 6.25, 12.5, 25, and 50 μg/ml was thoroughly mixed with 0.5 ml of 95% ethanol, and then 2.0 ml of the 50 μg/ml DPPH solution was added to the mixture.

(3) Five Safflower Samples. 0.5 ml of the tested safflower samples at 15.6, 31.3, 62.5, 125, 250, and 500 μg/ml was mixed with 0.5 ml of 95% ethanol; the other steps were the same as those for the positive control.

(4) HSYA Sample. 0.5 ml of the tested HSYA at 3.2, 6.3, 12.5, 25, 50, and 100 μg/ml was mixed with 0.5 ml of 95% ethanol; the other steps were the same as those for the positive control. All the A values of curcumin, the five safflower samples, and HSYA were recorded as AT.

(5) Blank. The A value of 3.0 ml of 95% ethanol was measured as AB.

(6) Solvent Control. 0.5 ml of DMSO (the solvent for curcumin) or deionized water (the solvent for the safflower samples and HSYA) was mixed with 2.5 ml of 95% ethanol, and the A values were measured as AS.

The DPPH scavenging rate of the samples at different concentrations was calculated according to the following equation:

$$\left(1 - \frac{A_T - A_S}{A_D - A_B}\right) \times 100\%. \quad (1)$$

The half inhibitory concentration (IC50) for DPPH scavenging of each sample, i.e., the concentration of the sample solution at which the DPPH radical scavenging rate is 50%, was calculated.
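A short sketch of equation (1) and the IC50 read-out follows. The absorbance values are hypothetical, and linear interpolation between the two concentrations bracketing 50% is an assumed choice; the paper does not state how its IC50 values were interpolated.

```python
import numpy as np

def dpph_scavenging_pct(a_t, a_s, a_d, a_b):
    """Scavenging rate from equation (1): (1 - (AT - AS)/(AD - AB)) * 100%."""
    return (1.0 - (a_t - a_s) / (a_d - a_b)) * 100.0

def ic50(concs_ug_ml, scavenging_pct):
    """IC50 by linear interpolation; assumes scavenging rises monotonically
    with concentration so np.interp sees an increasing y sequence."""
    return float(np.interp(50.0, scavenging_pct, concs_ug_ml))

# Hypothetical dose-response of one sample at the tested concentrations,
# with made-up AT values and AS = 0.02, AD = 1.20, AB = 0.05:
concs = [15.6, 31.3, 62.5, 125.0, 250.0, 500.0]
rates = [dpph_scavenging_pct(a_t, 0.02, 1.20, 0.05)
         for a_t in (1.06, 0.95, 0.76, 0.56, 0.38, 0.23)]
print(f"IC50 ~= {ic50(concs, rates):.0f} ug/ml")
```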
### 2.5. Cell Culture
RAW264.7 cells, a mouse macrophage cell line, were purchased from the Shanghai Cell Institute (Shanghai, China) and cultured in colorless Dulbecco’s modified Eagle’s medium (DMEM) supplemented with heat-inactivated fetal bovine serum (10%), D-glucose (3.5 mg/ml), sodium pyruvate (100 mM), L-glutamine (2 mM), penicillin (100 U/ml), streptomycin (100 μg/ml), and amphotericin B (250 μg/ml) at 37°C in a 5% CO2 incubator.
### 2.6. Determination of NO and IL-1β Levels in LPS-Activated RAW264.7 Cells in the Presence of Sample 1 and Sample 5
NO and IL-1β levels were determined in RAW264.7 cells (98 μl, plated at 1 × 10⁶ cells/ml). The samples (1 μl each) were added to the cells, which were then stimulated with LPS (1 μl, 0.5 μg/ml, Wako Chemicals USA Inc., Richmond, VA, USA) after 2 h. Nitrite, a stable end product of NO metabolism, was measured using the Griess reaction [13] after another 22 hours, and IL-1β was measured using an ELISA kit commercially available from Wuhan Boster Biological Technology (Wuhan, China). All samples and controls were assayed in sextuplicate.
### 2.7. Real-Time Quantitative PCR of iNOS and IL-1β in the Presence of Sample 1 and Sample 5 [14, 15]
Total RNAs were extracted from solvent-treated RAW264.7 cells, LPS-activated cells, and sample-treated LPS-activated cells with TRIzol Reagent (Ambion, USA). Equal amounts (1 μg) of RNA were reverse transcribed using a high-capacity RNA-to-cDNA PCR kit (Takara, Beijing, China). Mouse gene PCR primer sets for iNOS and IL-1β were obtained from SABiosciences (Germantown, MD). The Power SYBR Green PCR Master Mix (Applied Biosystems) was used with the StepOnePlus real-time PCR system (Applied Biosystems). The protocol included denaturing for 15 min at 95°C and 40 cycles of three-step PCR (denaturing for 15 sec at 95°C, annealing for 30 sec at 58°C, and extension for 30 sec at 72°C, with an additional 15-second detection step at 81°C), followed by a melting profile from 55°C to 95°C at a rate of 0.5°C per 10 sec. Samples of 25 ng cDNA were analyzed in quadruplicate in parallel with RPLP1/3 controls. Standard curves (threshold cycle vs. log2 pg cDNA) were generated from a series of log dilutions of standard cDNA (reverse transcribed from mRNA of RAW264.7 cells in growth media) from 0.1 pg to 100 ng. Initial quantities of experimental mRNA were then calculated from the standard curves and averaged using the SABiosciences software. The ratio of the experimental marker gene (iNOS or IL-1β) to RPLP1/3 mRNA was calculated and normalized to the solvent control.
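The standard-curve quantification above can be sketched as follows. The Ct values and the ideal slope of −1 per log2 dilution (one cycle per template doubling) are illustrative assumptions; only the curve form (threshold cycle vs. log2 pg cDNA) and the marker/RPLP1/3 ratio normalized to the solvent control follow the text.

```python
import numpy as np

# Hypothetical standard curve: Ct versus log2(template quantity in pg),
# spanning the 0.1 pg - 100 ng dilution series described in the text.
dilutions_pg = np.array([0.1, 1.0, 10.0, 100.0, 1e3, 1e4, 1e5])
cts = -1.0 * np.log2(dilutions_pg) + 30.0          # ideal 100% efficiency
slope, intercept = np.polyfit(np.log2(dilutions_pg), cts, 1)

def quantity_pg(ct):
    """Invert the fitted curve: initial template quantity (pg) from a Ct."""
    return 2.0 ** ((ct - intercept) / slope)

# Relative expression: marker quantity over RPLP1/3 quantity, then
# normalized to the solvent-control ratio (all Ct values are made up).
treated = quantity_pg(24.5) / quantity_pg(22.0)    # e.g., iNOS / RPLP1/3
control = quantity_pg(26.0) / quantity_pg(22.1)
print(f"fold change vs. solvent control ~= {treated / control:.2f}")
```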
### 2.8. Western Blotting of iNOS and IL-1β in the Presence of Sample 1 and Sample 5 [16]
The treated cells were removed from the culture media and extracted with RIPA lysis buffer from Beyotime Biotech (Jiangsu, China) for 30 min. Supernatants were collected after the tubes were centrifuged at 10,000 × g for 40 min at 4°C. The protein concentrations were determined using a BCA Protein Assay Kit from Wuhan Boster Biological Technology (Wuhan, China). Samples containing 50 μg of protein were resolved by 12% SDS-PAGE and transferred to nitrocellulose membranes (Whatman International Ltd., Maidstone, UK). Nonspecific binding was blocked by immersing the membranes in 5% nonfat dried milk and 0.1% (v/v) Tween 20 in PBS for 3 h at room temperature. After rinsing with a washing buffer (0.1% Tween 20 in PBS) several times, the membranes were incubated with a primary antibody against iNOS at 1:1000 dilution (catalog no. ab49999, Abcam) or an antibody against IL-1β at 1:1000 dilution (catalog no. ab150777, Abcam) overnight at 4°C. The membranes were washed several times, then incubated with a corresponding HRP-conjugated anti-mouse secondary IgG antibody (Cell Signaling Technology, Danvers, MA) at room temperature for 3 h, and analyzed by the Quantity One analysis system (Bio-Rad, Hercules, CA, USA). GAPDH at a dilution of 1:2000 (catalog no. ab9483, Abcam) was used as an internal loading control.
### 2.9. Statistical Analysis
The SPSS 19.0 software (IBM, Armonk, NY, US) was used for statistical analysis. All data were expressed as mean ± standard error of the mean. For continuous variables, comparisons among groups were conducted by one-way analysis of variance followed by Dunnett’s multiple comparisons test. All reported p values were two-tailed, and p<0.05 was set as the level of significance.
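For orientation, the same pipeline (one-way ANOVA followed by Dunnett's multiple comparisons against a control) can be reproduced in Python; the readings below are simulated stand-ins for n = 6 measurements per group, and scipy.stats.dunnett requires SciPy ≥ 1.11.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated NO readings (uM), n = 6 per group, loosely echoing Section 3.4.
lps_control = rng.normal(26.8, 0.7, 6)
treat_100 = rng.normal(20.0, 0.9, 6)   # a treatment at 100 ug/ml
treat_200 = rng.normal(16.5, 0.8, 6)   # a treatment at 200 ug/ml

# One-way ANOVA across all groups, then Dunnett's test versus the control;
# p values are two-tailed, with alpha = 0.05 as in the text.
f_stat, p_anova = stats.f_oneway(lps_control, treat_100, treat_200)
dunnett = stats.dunnett(treat_100, treat_200, control=lps_control)
print(f"ANOVA p = {p_anova:.3g}; Dunnett p values = {dunnett.pvalue}")
```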
## 3. Results
### 3.1. The HSYA Contents in Sample 1 and Sample 5 and HPLC Profiling Results of the Five Samples
According to WS3-2012, the content of HSYA should be no less than 0.10 mg/ml [8]. The results showed that sample 5 obtained in the present study contained 0.20 ± 0.01 mg/ml of HSYA (n=3), which met the requirements of WS3-2012 and equaled 11.2 ± 0.2 mg of HSYA per 1 g of extract. The content of HSYA was also measured in sample 1, and the result was 43.3 ± 0.8 mg of HSYA per 1 g of extract.

In addition, WS3-2012 specifies 11 characteristic peaks in the HPLC profile of the injection, in which peak 9 represents HSYA (Figure 2, sample 5). The theoretical number of column plates should be no less than 6000 as calculated from the HSYA peak, and the similarity of the 11 peaks between the profile of sample 5 and the reference fingerprint should be no less than 0.85 (Figure 2) [8]. The above HPLC indices of sample 5 all met the requirements of WS3-2012. Figure 2 shows that the 11 characteristic peaks were also present in sample 1.
Figure 2
High-performance liquid chromatography profiles of the reference and the five samples. Sample 1: the safflower decoction; sample 5: the safflower injection. Column temperature was 25°C. Gradient elution was performed with acetonitrile as mobile phase A and aqueous trifluoroacetic acid as mobile phase B. The detection wavelength was 223 nm. 10 μl of hydroxysafflor yellow A (HSYA) control solution was injected. Run time was 70 min. According to the reference fingerprint, sample 5 showed 11 characteristic absorption peaks, and peak 9 was confirmed as HSYA [8].
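As a quick arithmetic consistency check not performed in the paper, the two reported HSYA figures for sample 5 jointly imply the dried-solids content of the injection:

```python
# Sample 5: 0.20 mg/ml HSYA in the injection and 11.2 mg HSYA per gram of
# dried extract together imply roughly 17.9 mg of dried extract per ml.
hsya_mg_per_ml = 0.20
hsya_mg_per_g_extract = 11.2
extract_mg_per_ml = hsya_mg_per_ml / hsya_mg_per_g_extract * 1000.0
print(f"~{extract_mg_per_ml:.1f} mg dried extract per ml of injection")
```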
### 3.2. The ORAC Values of the Five Samples and HSYA
Sample 1 was prepared by extraction with water. Figure 3(a) shows that, following several steps of alcohol precipitation and water precipitation, the ORAC value of sample 5 was significantly higher than that of sample 1 (1160 ± 146 μmol·TE/g vs. 650 ± 61 μmol·TE/g; p=0.001) and also higher than those of the other three samples (p<0.05). As an important compound in safflower, HSYA was found to have a significantly higher ORAC value (1702 ± 109 μmol·TE/g) than sample 5 (p=0.001). As the positive control in this study, curcumin exhibited the highest ORAC value (2307 ± 66 μmol·TE/g), which was significantly different from that of sample 5 (p<0.001).
Figure 3
(a) The ORAC values of the samples and HSYA. Sample 1: the safflower decoction; sample 5: the safflower injection. Values are expressed as mean ± standard error (n=6). #p<0.05 versus sample 5. Curcumin was the positive control. ORAC: oxygen radical absorbance capacity; HSYA: hydroxysafflor yellow A. (b) The IC50 values for DPPH scavenging of the samples and HSYA. Sample 1: the safflower decoction; sample 5: the safflower injection. Values are expressed as mean ± standard error of the mean (n=6). #p<0.05 versus sample 5. Curcumin was the positive control. DPPH: 1,1-diphenyl-2-trinitrophenylhydrazine; IC50: half inhibitory concentration; HSYA: hydroxysafflor yellow A.
### 3.3. The IC50 Value for DPPH Scavenging of the Five Samples and HSYA
Figure 3(b) shows that curcumin, a reported antioxidant [17], exhibited the lowest IC50 value (5.7 ± 1.1 μg/ml), which was the most significantly different from that of sample 5 (p<0.001). Among the five samples, sample 5 had the lowest IC50 value and sample 1 had the highest (56.7 ± 7.2 μg/ml vs. 197.6 ± 18.1 μg/ml, p<0.001). The IC50 value of HSYA was 23.2 ± 3.4 μg/ml, further confirming its DPPH scavenging activity [12].
### 3.4. Effects of Sample 1 and Sample 5 on NO and IL-1β Contents in LPS-Activated RAW264.7 Cells
NO production increased significantly after LPS stimulation as compared to the solvent control (26.8 ± 0.3 μM vs. 6.6 ± 0.1 μM, p<0.001). Likewise, the IL-1β level in the LPS control (69.4 ± 5.6 pg/ml) increased significantly as compared to the solvent control (14.7 ± 0.3 pg/ml, p=0.003).

RAW264.7 cells treated with sample 1 and sample 5 at 50, 100, and 200 μg/ml exhibited significantly lower LPS-stimulated NO production than the LPS control (p<0.05). Sample 5 showed a significantly stronger inhibitory effect than sample 1 at 100 μg/ml (p=0.020) and at 200 μg/ml (p<0.001).

Sample 1 at 50 μg/ml did not exhibit a statistically significant inhibitory effect on LPS-stimulated IL-1β production (p=0.081 vs. the LPS control). As compared to sample 1, sample 5 showed a significantly stronger inhibitory effect on IL-1β production at 100 μg/ml (p=0.006) and at 200 μg/ml (p=0.007) (Figures 4(a) and 5(a)).
Figure 4
(a) Effects of sample 1 and sample 5 on nitric oxide production in 0.5 μg/ml LPS-activated RAW264.7 cells. (b) Effects of sample 1 and sample 5 on iNOS mRNA in 0.5 μg/ml LPS-activated RAW264.7 cells by real-time quantitative PCR. (c) Effects of sample 1 and sample 5 on iNOS protein expression in 0.5 μg/ml LPS-activated RAW264.7 cells by Western blotting analysis. Sample 1: the safflower decoction; sample 5: the safflower injection. Values are expressed as mean ± standard error of the mean (n=3). #p<0.05 versus LPS control. ∗p<0.05 sample 1 versus sample 5 at 100 μg/ml. ∗∗p<0.05 sample 1 versus sample 5 at 200 μg/ml. iNOS: inducible nitric oxide synthase; LPS: lipopolysaccharide.
Figure 5
(a) Effects of sample 1 and sample 5 on IL-1β in 0.5 μg/ml LPS-activated RAW264.7 cells. (b) Effects of sample 1 and sample 5 on IL-1β mRNA in 0.5 μg/ml LPS-activated RAW264.7 cells by real-time quantitative PCR. (c) Effects of sample 1 and sample 5 on IL-1β protein expression in 0.5 μg/ml LPS-activated RAW264.7 cells by Western blotting analysis. Sample 1: the safflower decoction; sample 5: the safflower injection. Values are expressed as mean ± standard error of the mean (n=3). #p<0.05 versus LPS control. ∗p<0.05 sample 1 versus sample 5 at 100 μg/ml. ∗∗p<0.05 sample 1 versus sample 5 at 200 μg/ml. LPS: lipopolysaccharide.
### 3.5. Effects of Sample 1 and Sample 5 on iNOS and IL-1β mRNAs in LPS-Activated RAW264.7 Cells
When compared to the solvent control, iNOS mRNA expression increased by approximately 2.42 ± 0.19-fold and IL-1β mRNA expression increased by approximately 1.86 ± 0.08-fold in LPS-activated cells.

When compared to the LPS control, iNOS mRNA expression was significantly downregulated by sample 5 at 100 μg/ml (p=0.040) and 200 μg/ml (p=0.019) and by sample 1 at 100 and 200 μg/ml (both p<0.01). A significant inhibitory effect on IL-1β mRNA was also observed with sample 5 at 100 μg/ml and 200 μg/ml and with sample 1 at 200 μg/ml (p<0.05).

Compared to sample 1 at 100 μg/ml, sample 5 at the same concentration significantly decreased iNOS mRNA and IL-1β mRNA levels (p=0.013 and p=0.009, respectively). Sample 5 at 200 μg/ml also exhibited a significant inhibitory effect on IL-1β mRNA expression as compared to sample 1 (p=0.011) (Figures 4(b) and 5(b)).
### 3.6. Effects of Sample 1 and Sample 5 on iNOS and IL-1β Protein Expressions in LPS-Activated RAW264.7 Cells
Western blotting was used to determine the effects of the decoction sample and the injection sample on iNOS and IL-1β protein expressions. Figures 4(c) and 5(c) show that iNOS and IL-1β protein expressions increased in LPS-activated RAW264.7 cells. Compared to the LPS control, both sample 1 and sample 5 significantly suppressed iNOS expression at all three tested concentrations (p<0.05). Sample 5 suppressed IL-1β protein expression at 50, 100, and 200 μg/ml, while sample 1 exhibited a significant suppressive effect on IL-1β protein expression at 100 and 200 μg/ml (all: p<0.05). Sample 1 and sample 5 differed significantly in their ability to decrease iNOS protein expression at both 100 μg/ml and 200 μg/ml (p<0.05). A significant difference in the downregulation of IL-1β protein expression was also observed between the two safflower-treated groups at 100 μg/ml (p=0.010) and at 200 μg/ml (p=0.002).
## 3.1. The HSYA Contents in Sample 1 and Sample 5 and HPLC Profiling Results of the Five Samples
According to WS3-2012, the content of HSYA should be no less than 0.10 mg/ml [8]. The results showed that sample 5 obtained in the present study contained 0.20±0.01mg/ml of HSYA (n=3), which met the requirements of WS3-2012, and equaled to 11.2±0.2mg of HSYA per 1 g extract. The content of HSYA was also measured in sample 1, and the result was 43.3±0.8mg of HSYA per 1 g extract.In addition, WS3-2012 specifies 11 characteristic peaks in the HPLC profile of the injection, in which peak 9 represents HSYA (Figure2, sample 5). The theoretical number of column plates should be no less than 6000 as calculated from the HSYA peak, and the similarity determination of the 11 peaks between the profile of sample 5 and the reference fingerprint should be no less than 0.85 (Figure 2) [8]. The above HPLC indices of sample 5 all met the WS3-2012’s requirements. Figure 2 showed that the 11 characteristic peaks were also present in sample 1.Figure 2
High-performance liquid chromatography profiles of the reference and five samples. Sample 1: the safflower decoction; sample 5: the safflower injection. Performance temperature was 25°C. Mobile phase A was gradient elution with acetonitrile, and mobile phase B was aqueous trifluoroacetic acid. The detection wavelength was 223 nm. 10μl Hydroxysafflor yellow A (HSYA) was the control solution. Run time was 70 min. According to reference fingerprint, sample 5 showed 11 characteristic absorption peaks and peak 9 was confirmed as HSYA [8].
## 3.2. The ORAC Values of the Five Samples and HSYA
Sample 1 was prepared by extraction with water. Figure3(a) showed that, following several steps of alcohol precipitation and water precipitation, the ORAC value of sample 5 was significantly higher than that of sample 1 (1160±146μmol·TE/g vs. 650±61μmol·TE/g; p=0.001) and also higher than those of the other three samples (p<0.05). As an important compound in safflower, HSYA was found to have a significantly higher ORAC value (1702±109μmol·TE/g) than sample 5 (p=0.001). As a positive control in this study, curcumin exhibited the highest ORAC value (2307±66μmol·TE/g), which was significantly different from that of sample 5 (p<0.001).Figure 3
(a) The ORAC values of the samples and HSYA. Sample 1: the safflower decoction; sample 5: the safflower injection. Values are expressed asmean±standard error (n=6). #p<0.05 versus sample 5. Curcumin was the positive control. ORAC: oxygen radical absorbance capacity; HSYA: Hydroxysafflor yellow A. (b) The IC50 values for DPPH scavenging of the samples and HSYA. Sample 1: the safflower decoction; sample 5: the safflower injection. Values are expressed as mean±standard error of the mean (n=6). #p<0.05 versus sample 5. Curcumin was the positive control. DPPH: 1,1-diphenyl-2-trinitrophenylhydrazine; IC50: half inhibitory concentration; HSYA: Hydroxysafflor yellow A.
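The ORAC values above are expressed in Trolox equivalents. The assay details are not restated in this section, so the sketch below shows the standard net area-under-the-curve calculation against a Trolox standard curve; the function names are illustrative, and dilution and mass normalization (to reach μmol·TE/g) are omitted.

```python
import numpy as np

def net_auc(curve, blank):
    """Blank-corrected net area under a fluorescein-decay curve, with each
    curve normalized to its initial reading (trapezoidal rule, dx = 1 cycle)."""
    def auc(c):
        c = np.asarray(c, dtype=float)
        return np.trapz(c / c[0])
    return auc(curve) - auc(blank)

def orac_te(sample, blank, trolox_curves, trolox_umol):
    """Trolox equivalents of a sample, read off a linear net-AUC standard curve."""
    standards = [net_auc(c, blank) for c in trolox_curves]
    slope, intercept = np.polyfit(trolox_umol, standards, 1)
    return (net_auc(sample, blank) - intercept) / slope
```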
## 3.3. The IC50 Value for DPPH Scavenging of the Five Samples and HSYA
Figure 3(b) showed that curcumin, as a reported antioxidant [17], exhibited the lowest IC50 value (5.7 ± 1.1 μg/ml), which was most significantly different from that of sample 5 (p<0.001). Among the five samples, sample 5 had the lowest IC50 value and sample 1 had the highest IC50 value (56.7 ± 7.2 μg/ml vs. 197.6 ± 18.1 μg/ml, p<0.001). The IC50 value of HSYA was 23.2 ± 3.4 μg/ml, further confirming its DPPH scavenging activity [12].
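The IC50 figures above summarize dose-response data. One simple way to obtain such a value, sketched below with hypothetical inhibition data (not the study's), is log-linear interpolation between the two doses bracketing 50% inhibition; fitting a four-parameter logistic curve is a common alternative.

```python
import numpy as np

def ic50(doses_ug_ml, inhibition_pct):
    """IC50 by log-linear interpolation; assumes ascending doses, monotonically
    rising inhibition, and that 50% falls inside the tested range."""
    logc = np.log10(doses_ug_ml)
    y = np.asarray(inhibition_pct, dtype=float)
    i = np.searchsorted(y, 50.0)                     # first dose above 50%
    frac = (50.0 - y[i - 1]) / (y[i] - y[i - 1])
    return 10 ** (logc[i - 1] + frac * (logc[i] - logc[i - 1]))

# Hypothetical DPPH dose-response data, for illustration only:
print(f"{ic50([12.5, 25, 50, 100, 200], [18, 31, 47, 62, 78]):.1f} ug/ml")  # ~57.4
```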
## 3.4. Effects of Sample 1 and Sample 5 on NO and IL-1β Contents in LPS-Activated RAW264.7 Cells
NO production increased significantly after LPS stimulation as compared to the solvent control (26.8 ± 0.3 μM vs. 6.6 ± 0.1 μM, p<0.001). Likewise, the IL-1β level in the LPS control increased significantly as compared to the solvent control (69.4 ± 5.6 pg/ml vs. 14.7 ± 0.3 pg/ml, p=0.003).

RAW264.7 cells treated with sample 1 and sample 5 at 50, 100, and 200 μg/ml exhibited significantly lower LPS-stimulated NO production than the LPS control (p<0.05). Sample 5 showed a significant inhibitory effect as compared to sample 1 at 100 μg/ml (p=0.020) and at 200 μg/ml (p<0.001).

Sample 1 at 50 μg/ml did not exhibit a statistically significant inhibitory effect on LPS-stimulated IL-1β production (p=0.081 vs. the LPS control). As compared to sample 1, sample 5 showed a significant inhibitory effect on IL-1β production at 100 μg/ml (p=0.006) and at 200 μg/ml (p=0.007) (Figures 4(a) and 5(a)).

Figure 4
(a) Effects of sample 1 and sample 5 on nitric oxide production in 0.5 μg/ml LPS-activated RAW264.7 cells. (b) Effects of sample 1 and sample 5 on the mRNA of iNOS in 0.5 μg/ml LPS-activated RAW264.7 cells by real-time quantitative PCR. (c) Effects of sample 1 and sample 5 on the expression of iNOS in 0.5 μg/ml LPS-activated RAW264.7 cells by Western blotting analysis. Sample 1: the safflower decoction; sample 5: the safflower injection. Values are expressed as mean ± standard error of the mean (n=3). #p<0.05 versus LPS control. ∗p<0.05 sample 1 versus sample 5 at 100 μg/ml. ∗∗p<0.05 sample 1 versus sample 5 at 200 μg/ml. iNOS: inducible nitric oxide synthase; LPS: lipopolysaccharide.
Figure 5
(a) Effects of sample 1 and sample 5 on IL-1β in 0.5 μg/ml LPS-activated RAW264.7 cells. (b) Effects of sample 1 and sample 5 on the mRNA of IL-1β in 0.5 μg/ml LPS-activated RAW264.7 cells by real-time quantitative PCR. (c) Effects of sample 1 and sample 5 on the expression of IL-1β in 0.5 μg/ml LPS-activated RAW264.7 cells by Western blotting analysis. Sample 1: the safflower decoction; sample 5: the safflower injection. Values are expressed as mean ± standard error of the mean (n=3). #p<0.05 versus LPS control. ∗p<0.05 sample 1 versus sample 5 at 100 μg/ml. ∗∗p<0.05 sample 1 versus sample 5 at 200 μg/ml. LPS: lipopolysaccharide.
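For readers reproducing comparisons like those above, a two-sample t-test on triplicate readings is one plausible analysis; the nitrite values below are hypothetical, and the paper's exact statistical test is not restated in this section.

```python
from scipy import stats

# Hypothetical triplicate nitrite readings (uM), mimicking n=3 per group.
lps_control = [26.5, 26.9, 27.0]
sample5_200ug = [12.1, 12.6, 13.0]

t, p = stats.ttest_ind(lps_control, sample5_200ug)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 -> significant NO suppression
```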
## 3.5. Effects of Sample 1 and Sample 5 on iNOS and IL-1β mRNAs in LPS-Activated RAW264.7 Cells
When compared to the solvent control, iNOS mRNA expression increased by approximately 2.42 ± 0.19 fold and IL-1β mRNA expression increased by approximately 1.86 ± 0.08 fold in LPS-activated cells.

When compared to the LPS control, iNOS mRNA expression was significantly downregulated by sample 5 at 100 μg/ml (p=0.040) and 200 μg/ml (p=0.019) and by sample 1 at 100 and 200 μg/ml (both p<0.01). A significant inhibitory effect on IL-1β mRNA was also observed with sample 5 at 100 μg/ml and 200 μg/ml and with sample 1 at 200 μg/ml (p<0.05).

Compared to sample 1 at 100 μg/ml, sample 5 at the same concentration significantly decreased iNOS mRNA and IL-1β mRNA levels (p=0.013 and p=0.009). Sample 5 at 200 μg/ml also exhibited a similar significant inhibitory effect on IL-1β mRNA expression as compared to sample 1 (p=0.011) (Figures 4(b) and 5(b)).
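Fold changes like the ~2.42-fold iNOS induction above are typically derived from qPCR cycle thresholds. The paper does not restate its quantification formula here, so the sketch below uses the common 2^-ΔΔCt method with hypothetical Ct values.

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-delta-delta-Ct method (Livak & Schmittgen)."""
    ddct = (ct_target_treated - ct_ref_treated) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** -ddct

# Hypothetical Ct values: iNOS vs. a housekeeping gene, LPS vs. solvent control.
print(f"{fold_change(24.0, 18.0, 25.3, 18.0):.2f}-fold")  # ~2.46-fold induction
```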
## 3.6. Effects of Sample 1 and Sample 5 on iNOS and IL-1β Protein Expressions in LPS-Activated RAW264.7 Cells
Western blotting was used to determine the effects of the decoction sample and the injection sample on iNOS and IL-1β protein expression. Figures 4(c) and 5(c) showed that iNOS and IL-1β protein expression increased in LPS-activated RAW264.7 cells.

Compared to the LPS control, both sample 1 and sample 5 significantly suppressed iNOS expression at all three tested concentrations (p<0.05). Sample 5 suppressed IL-1β protein expression at 50, 100, and 200 μg/ml, while sample 1 exhibited a significant suppressing effect on IL-1β protein expression at 100 and 200 μg/ml (all p<0.05).

Sample 1 and sample 5 differed significantly in their ability to decrease iNOS protein expression at both 100 μg/ml and 200 μg/ml (p<0.05). A significant difference in the downregulation of IL-1β protein expression was also observed between the two safflower-treated groups at 100 μg/ml (p=0.010) and at 200 μg/ml (p=0.002).
## 4. Discussion
Safflower is well known for its antioxidant effects and has been widely used to treat conditions including musculoskeletal injuries and cardiocerebrovascular diseases [2, 4, 18]. One paper revealed that more than 100 herbal items have been used as topical agents in the treatment of musculoskeletal injuries. In order to verify the efficacies of these herbs, a comprehensive study was proposed, in which five herbs, including safflower, were selected as suitable candidates for further study. The clinical data from the pilot studies confirmed that the effects of safflower were related to its proven antioxidant activities [18].

Traditionally, safflower is used clinically as a water decoction. As a modern preparation [3], the safflower injection prepared by Shanxi Provincial People’s Hospital was initially used to treat coronary diseases and cerebral thrombosis in 1973-1974 [19, 20]. Since then, safflower injection has been widely used in the treatment of cardiocerebrovascular diseases [4]. The injection has been studied more extensively than the decoction with respect to adverse reactions and the correlation between antioxidant activity and active contents [3, 21]. A study entitled “New technology for quality control of traditional Chinese medicine based on active ingredients and its application in safflower injection” was awarded a national prize in 2015. Such efforts have advanced the strategies for quality control and promoted the establishment of a standard system for safflower injection along with the development of relevant industries [22].

Our previous study demonstrated the antioxidant activities of safflower extracts [23]. Another study conducted by our laboratory also showed that safflower injection could decrease NO production in LPS-stimulated RAW264.7 cells [9]. According to WS3-2012, the process for preparing a safflower injection begins with a safflower decoction. In this study, 20 kg of safflower was decocted in water three times in the traditional manner (1 h for the first decoction, 50 min for the second, and 30 min for the third) and then subjected to alcohol precipitation twice and recovery with ethanol [8]. We were interested in how these process steps could influence the antioxidant activities of the safflower decoction and the safflower injection. In this study, five samples obtained from the preparation process (Figure 1) were tested for their antioxidant activities. In recent years, a variety of analytical methods have been used to evaluate the in vitro antioxidant capacity of safflower, among which the DPPH method and the ORAC assay are widely used [11, 12].

As a positive control, curcumin displayed the highest ORAC value and the lowest IC50 value for DPPH scavenging activity in this study. The anti-inflammatory effect of curcumin is most likely exerted through its ability to inhibit cyclooxygenase-2, lipoxygenase, and iNOS [24]. Our previous research also showed that curcumin, as a positive control, decreased the level of nitrite in LPS-activated macrophages [9]. A review elucidated that most chronic diseases are closely related to chronic inflammation and oxidative stress and that the antioxidant properties of curcumin can play a key role in the prevention and treatment of chronic inflammatory diseases [17].

Our results showed that sample 5 exhibited antioxidant activity significantly different from that of the other tested samples, in particular sample 1, as measured by the ORAC and DPPH methods.
The IC50 values for the DPPH scavenging activity of the five samples were also measured, and similar results were observed (Figure 3(b)).

HSYA showed a higher ORAC value and DPPH scavenging activity than sample 5 (Figures 3(a) and 3(b)). As a main compound of safflower [8, 12], HSYA has shown antioxidant activities in vivo and in vitro. Studies aiming at identifying HSYA in the brain tissues of rats suggested that HSYA, which increased the activities of superoxide dismutase and catalase, can potentially be used as a neuroprotective agent for traumatic brain injury [25]. Carthamus yellow, which is composed of safflomin A and safflomin B, provided an anti-inflammatory response by inhibiting the production of NO through downregulating iNOS gene expression in LPS-induced macrophages [26]. HSYA also exerted a protective effect against LPS-induced neurotoxicity in dopaminergic neurons through a mechanism that may be associated with the inhibition of IL-1β, TNF-α, and NO [27].

It is interesting to observe the inconsistency between the HSYA contents and the antioxidant activities of the samples. The content of HSYA in sample 1 was higher than that in sample 5, but the antioxidant activity of sample 1 was significantly lower than that of sample 5, as measured by the DPPH and ORAC methods. This raises the first question: what results would be obtained if other methods were used?

The effects of safflower extracts on the LPS-induced expression of proinflammatory mediators, such as iNOS, IL-1β, the transcription factor NF-κB, and cyclooxygenase-2 (COX-2), have been evaluated recently [28, 29]. One study showed that methanol extracts of safflower (MES) reduced inflammation by suppressing iNOS and COX-2 expression in LPS-activated cells; the binding to NF-κB and NF-κB luciferase activity were also significantly diminished by MES [28]. The hepatoprotective effects and mechanisms of an extract of Salvia miltiorrhiza and safflower (DHI) were investigated in C57BL/6J mice, and Western blotting revealed that DHI inhibited LPS-induced phosphorylation of IκBα and NF-κB p65 [29].

To provide further insight into the anti-inflammatory effect of safflower, LPS-activated RAW264.7 macrophages were used to investigate the effects of sample 1 and sample 5 on the mRNA and protein expression of iNOS and IL-1β. The results showed that iNOS and IL-1β expression in the LPS-stimulated group was significantly higher than in the solvent control group. Compared to the LPS control, both sample 1 and sample 5 significantly suppressed iNOS and IL-1β expression at different concentrations. Further comparison showed that sample 5 exhibited a significantly stronger inhibitory effect on the protein and mRNA expression of both iNOS and IL-1β than sample 1.

The above results were in accordance with those obtained from the ORAC and DPPH methods, confirming that the current standard process for preparing a safflower injection ensures a higher antioxidant activity of the final product than the first water decoction. However, this raises the second question: given that there were more HPLC peaks and a higher HSYA content in sample 1 than in sample 5 (Figure 2), why did sample 5 possess a higher antioxidant activity than sample 1?

Content determination is an important means of evaluating product quality and explaining pharmacological results. Only the active ingredients in safflower, such as safflower yellow, HSYA, kaempferol, and quercetin, can exert positive roles [30, 31].
We hypothesize that some interfering substances were removed during the preparation process while the antioxidant compounds were retained. A study involving the depletion of some active ingredients supports our hypothesis by showing that several main components, such as HSYA, dehydrated safflower yellow B, and 6-hydroxykaempferol-3,6-di-O-glucoside-7-O-glucuronide, not only play a direct antioxidant role but also act synergistically [32]. It will be of particular interest to further study synergistic combinations of the compounds present in safflower injection.

It is also of interest to identify promising candidates in safflower injection that can be used in future immunotherapeutic strategies. Some studies of the compounds in safflower injection have already obtained positive results. Recently, three active constituents of safflower injection, i.e., HSYA, syringoside, and (8Z)-decaene-4,6-diyne-1-O-β-D-glucopyranoside, were identified by HPLC [33]. The contents of uridine, guanosine, and adenosine in the injection were also determined by HPLC. Nucleosides, such as uridine, have an effect against platelet aggregation [34]. Sixteen compounds were isolated from safflower injection, including (1) scutellarin, (2) kaempferol-3-O-β-rutinoside, (3) HSYA, (4) rutin, (5) coumalic acid, (6) adenosine, (7) syringoside, (8) (3E)-4-(4′-hydroxyphenyl)-3-buten-2-one, (9) (8Z)-decaene-4,6-diyne-1-O-β-D-glucopyranoside, (10) 4-hydroxybenzaldehyde, (11) (2E,8E)-tetradecadiene-4,6-diyne-1,12,14-triol-1-O-β-D-glucopyranoside, (12) kaempferol-3-O-β-sophorose, (13) uridine, (14) roseoside, (15) cinnamic acid, and (16) kaempferol. Compounds 1, 2, 7, 9, 11, and 12 were isolated from safflower injection for the first time. The results indicated that all the tested compounds except compound 5 exhibited potent antioxidant and anti-inflammatory activities, while compounds 2, 3, 9, and 12 showed strong activities against platelet aggregation [35].

All the efforts above help us understand the interaction of multiple components. The change in the proportions of active ingredients caused by the extraction process and the possibility of synergistic antioxidant activity need to be studied further. It is also necessary to identify all the peaks in Figure 2 and observe how they change during the extraction process. In summary, the present study provides, for the first time, in vitro evidence that the “modern” safflower injection suppresses the expression of both iNOS and IL-1β at the mRNA and protein levels in LPS-activated RAW264.7 cells significantly more than the traditional water decoction. The compounds in safflower injection need to be identified before further in vivo studies on the molecular mechanism are conducted.
---
*Source: 1018274-2019-05-06.xml* | 2019 |
# Rhinosporidiosis: Intraoperative Cytological Diagnosis in an Unsuspected Lesion
**Authors:** Shruti Bhargava; Mohnish Grover; Veena Maheshwari
**Journal:** Case Reports in Pathology
(2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/101832
---
## Abstract
Rhinosporidiosis is a disease endemic to South India, Sri Lanka and some areas of the African continent. The nasal lesions can sometimes be confused with nasopharyngeal malignancy. We report here a clinically unsuspected case of rhinosporidiosis, diagnosed correctly by intraoperative FNAC, and later confirmed by histopathological examination.
---
## Body
## 1. Introduction
Rhinosporidiosis is a rare chronic granulomatous disease of mucocutaneous tissue, endemic in South India, Sri Lanka, and some areas of the African continent [1]. The etiological agent, Rhinosporidium seeberi, has in recent studies been established as an aquatic protistan parasite [2]. It commonly affects the nasal mucosa, conjunctiva, and urethra in people of any age and sex, but involvement of other sites has also been reported [3]. In the nasal cavity it manifests as a polypoid mass which can sometimes be confused with malignant lesions, wherein the exact diagnosis is confirmed on histology [2]. However, FNAC is an economical and reliable intraoperative method for the diagnosis of such suspected and unsuspected lesions [3]. Surgical excision is the treatment of choice, and recurrence is possible but rare [4]. We report here a case of rhinosporidiosis diagnosed by intraoperative FNAC.
## 2. Case Report
A 21-year-old man presented to the ENT outpatient department with obstruction of the nose on the right side. On examination there was an erythematous, irregular mass, 3 cm in diameter, obstructing the right nasal cavity and extending into the sinuses as well. No abnormality was seen in the contralateral nasal cavity or nasopharynx.

Since the exact nature of the mass could not be clinically established with certainty preoperatively, an intraoperative aspirate smear was prepared to rule out malignancy. Rapid hematoxylin and eosin (H&E) and periodic acid-Schiff (PAS) stained smears revealed numerous globular sporangia containing spores (Figure 1(a)) along with many free-lying spores and inflammatory cells (Figure 1(b)). The spores stained magenta with PAS (Figure 1(c)), a feature used to differentiate them from the epithelial cells of the nasopharynx (PAS negative). Hence the diagnosis of rhinosporidiosis was established and malignancy was ruled out. This helped the surgeon completely remove the mass endoscopically.

Figure 1
(a) FNA smear showing globular sporangia containing spores (H&E ×400). (b) FNA smear showing isolated spores and inflammatory cells (H&E ×400). (c) PAS-positive spores on FNA smear (PAS ×400). (d) Histopathological section showing sporangia and inflammatory cells in a fibrous stroma covered by stratified squamous epithelium (H&E ×100). (e) Ruptured sporangium liberating the spores, histopathology section (H&E ×400). (f) PAS-positive sporangia and spores on histology (PAS ×100).
Histopathological examination of the resected mass revealed many globular cysts, each representing a thick-walled sporangium containing numerous daughter spores, in a background of fibroblasts and acute and chronic inflammatory cells, covered by flat multistratified squamous epithelium (Figures 1(d) and 1(e)). The endospores and sporangia were PAS positive (Figure 1(f)) and were, respectively, 5–10 μm and 50–1000 μm in size. These findings made easier the distinction of Rhinosporidium seeberi from another common etiological agent of nasal mycosis, Coccidioides immitis, and hence the intraoperative cytological diagnosis was confirmed.

The patient did not receive any drug therapy and, after one year of follow-up during which he underwent clinical examination twice, remains healthy with no sign of recurrence.
## 3. Discussion
Rhinosporidiosis is a rare disease affecting people of any age and sex [1]. Initially described by Seeber in 1900 in an individual from Argentina, rhinosporidiosis is endemic in India, Sri Lanka, South America, and Africa [2]. The great majority of cases are sporadic. The etiological agent Rhinosporidium seeberi causes granulomatous inflammation of mucocutaneous sites, presenting most frequently as polypoidal lesions in the nose. Sites like the conjunctiva, trachea, nasopharynx, skin, and genitourinary tract are less frequently involved [3].

The taxonomy of R. seeberi was debated over recent decades, since the microorganism is intractable to isolation and microbiological culture [2]. Moreover, its morphological features resemble both fungi and protozoa [2]. Interestingly, the histopathology of some fish and amphibian diseases, as well as the morphology of their pathogens, closely resembles that of rhinosporidiosis [5]. Recently, it has been classified as the first known human pathogen belonging to the class of aquatic protistan parasites [6].

The presumed mode of infection from the natural aquatic habitat of R. seeberi is through traumatized epithelium (“transepithelial infection”), most commonly at nasal sites [2]. There is evidence for hematogenous spread of rhinosporidiosis to anatomically distant sites [2]. The nasal lesions may sometimes present clinically as ulcerated growths which could mimic malignant lesions such as sarcomas and carcinomas [2].

The definitive diagnosis of rhinosporidiosis is by histopathology on biopsied or resected tissues, with the identification of the pathogen in its diverse stages and the demonstration of sporangia and endospores [2]. The sporangia are large, thick-walled spherical structures containing smaller “daughter cells” (“sporangiospores”), seen in a fibromyxomatous or fibrous stroma containing chronic inflammatory cells, including macrophages and lymphocytes, while neutrophils are numerous around free endospores [2]. Each mature sporangium contains an operculum or pore through which the endospores are extruded [2].

However, cytodiagnosis on aspirates from rhinosporidial lumps, or on smears of secretions from the surfaces of accessible polyps, provides distinctive diagnostic features with suitable stains [2]. The various developmental stages of the sporangia can be readily identified by special fungus stains such as Gomori methenamine silver, Gridley’s, and periodic acid-Schiff, although the identification of the stages can also be made with the routine hematoxylin and eosin stain [2].

On direct examination, the cytological smears show spherules as well-circumscribed, globular structures with several endospores within [7]. The diameter of the spherules ranges from 30 to 300 microns. The endospores may be confused with epithelial cells, whose residual cytoplasm and large nuclei can sometimes simulate the residual mucoid sporangial material around the endospores and the endospores themselves; the PAS stain is used to discriminate between the two [2]. The endospores stain markedly magenta while the epithelial cells are PAS-negative [2].

Rhinosporidium seeberi should be distinguished from another microorganism, Coccidioides immitis [2].
The latter has similar mature stages represented by large, thick-walled, spherical structures containing endospores, but the spherules are smaller (diameter of 20–80 μm versus 50–1000 μm) and contain smaller endospores (diameter of 2–4 μm). Moreover, Coccidioides does not stain with mucicarmine [2].

The only curative approach is surgical excision combined with electrocoagulation. There is no demonstrated efficacy of antifungal and/or antimicrobial drugs. Recurrence, dissemination to anatomically close sites, and local secondary bacterial infections are the most frequent complications [4].

To conclude, rhinosporidiosis is a condition which both clinicians and pathologists should keep in mind when managing patients from endemic countries with nasal masses. The cytological appearance of fine-needle aspiration smears is distinctive, and a definitive diagnosis of rhinosporidiosis can be made easily and quickly in clinically unsuspected cases.
---
*Source: 101832-2012-10-11.xml* | 2012 |
# Detailed Distribution of Corneal Epithelial Thickness and Correlated Characteristics Measured with SD-OCT in Myopic Eyes
**Authors:** Yanan Wu; Yan Wang
**Journal:** Journal of Ophthalmology
(2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1018321
---
## Abstract
Purpose. To investigate the detailed distribution of corneal epithelial thickness in single sectors and its correlated characteristics in myopic eyes. Methods. SD-OCT was used to measure the corneal epithelial thickness distribution profile. Differences in corneal epithelial thickness between different parameters and correlations among characteristics were calculated. Results. The thickest and thinnest parts of the epithelium were found at the nasal-inferior sector (P<0.05) and at the superior side (P<0.05), respectively. Subjects in the low and moderate myopia groups had a thicker epithelium than those in the high myopia group (P<0.05). Epithelial thickness was 1.39 μm greater in male subjects than in female subjects (P<0.001). There was a slight negative correlation between corneal epithelial thickness and age (r=−0.13, P=0.042). A weak positive correlation was found between corneal epithelial thickness and corneal thickness (r=0.148, P=0.031). No correlations were found between corneal epithelial thickness and astigmatism axis, corneal front curvature, or IOP. Conclusions. The epithelial thickness is not evenly distributed across the cornea. The thickest location of the corneal epithelium is the nasal-inferior sector. People with high myopia tend to have a thinner corneal epithelium than low-to-moderate myopic patients. The corneal epithelial thickness is likely to be affected by parameters such as age, gender, and corneal thickness.
---
## Body
## 1. Introduction
The corneal epithelium plays a very important role in protecting the eye, as it is the outermost layer, and in maintaining high optical quality [1, 2]. It has been found that the epithelium alone contributes 0.85 D to corneal refraction at the 3.6 mm diameter zone [3]. Furthermore, the corneal epithelium is not of homogeneous depth and tends to alter its thickness profile to compensate for an irregular corneal stromal surface so as to present a regular surface [4]. Some corneal surgeries and excimer-laser corneal refractive procedures, such as transepithelial photorefractive keratectomy (TransPRK) [5] and phototherapeutic keratectomy (PTK), act directly on the corneal epithelium.

Since the corneal epithelium contributes substantially to corneal refraction and informs the design of the above surgeries, it is very important to gain better knowledge of the characteristics of the corneal epithelial thickness distribution. Previously, a few instruments have been applied to corneal epithelial thickness measurement in vivo, and several mapping studies have been carried out with very high-frequency (VHF) digital ultrasound and confocal microscopy [6–8]. However, these two techniques have limitations: both are invasive and require anesthetic, which may increase the risk of corneal infection and decrease accuracy because of possible contact-related corneal compression [6, 9, 10]. In recent years, SD-OCT has become a promising method for studying corneal epithelial thickness because of its noninvasiveness, and it has shown good repeatability and accuracy [11, 12]. Its noncontact, high-speed, and high-resolution characteristics make SD-OCT a popular device for assessing corneal epithelial thickness. To date, only a few studies [13–17] have shown corneal epithelium maps obtained with a noncontact device. This study aimed to determine the detailed distribution of the corneal epithelium.

Furthermore, little is known about differences in epithelial thickness among different degrees of myopia. Therefore, with the support of a large sample size, this study also investigates the differences in corneal epithelial thickness across myopic degrees. The distribution of corneal epithelial thickness in more detailed sectors and the correlations between corneal epithelial thickness and various parameters, such as age, corneal thickness, IOP, astigmatism, and corneal front curvature, were also analyzed.
## 2. Methods
### 2.1. Subjects
Two hundred and fifteen eyes from 215 healthy subjects (102 women, 113 men) with a mean age of 21.26 ± 4.35 years (18 to 40 years) and a mean manifest refraction spherical equivalent (MRSE) of −5.34 ± 2.19 D (ranging from −1.125 D to −12.00 D) participated in this study. Subjects underwent a complete ophthalmologic evaluation, including intraocular pressure (IOP) measurement, best-corrected distance visual acuity (BCVA), slit lamp and ophthalmoscope examination, corneal topography (Pentacam HR, OCULUS GmbH, Wetzlar, Germany), the Schirmer I test, and the tear break-up time test. Every subject had best-corrected distance visual acuity of 20/25 or better. All measurements were taken without the application of artificial tears or mydriatic eye drops. The exclusion criteria included suspicious and frank keratoconus, a history of contact lens wear, current or prior ocular pathology, and dry eye disorder. All subjects were informed of the aim of the study, and their consent was obtained at the time of their first clinical visit. This prospective study was performed at the Refractive Surgery Center at the Tianjin Ophthalmology Hospital, Nankai University, and received the approval of the Ethics Committee of our institution, in accordance with the Declaration of Helsinki.
### 2.2. OCT
An ultrahigh-resolution SD-OCT system (RTVue-100, Optovue Inc., Fremont, CA) was used in this study. The system worked at an 830 nm wavelength and had a scan speed of 26,000 axial scans per second. Its axial resolution was 5 μm. With an L-Cam lens attached, it takes 8 meridional B-scans per acquisition, each consisting of 1024 A-scans. A Pachymetry_Cpwr scan pattern centered at the pupil center was used to map the cornea. The RTVue-100 corneal epithelial thickness mapping and pachymetry software (version 6.11.0.12) automatically processed the OCT scans to provide corneal epithelial thickness and pachymetry (corneal thickness) maps corresponding to a 6 mm diameter area. A well-trained investigator conducted all the measurements, and three repeated measurements were collected and averaged in each case.
### 2.3. Corneal Epithelial Mapping
The analyzed area comprised two 6 mm diameter disks: the corneal thickness and corneal epithelial thickness maps. Each map was divided into 3 zones by diameter: the central 2 mm, an inner ring from 2 to 5 mm, and an outer ring from 5 to 6 mm, according to the setup of the analyzing system (Figure 1). The central 2 mm zone was named the center. The 2 to 5 mm zone (named Ring1) and the 5 to 6 mm zone (named Ring2) were each divided evenly into 8 sectors. The 8 sectors of Ring1 were named anticlockwise for OD as R1a, R1b, R1c, R1d, R1e, R1f, R1g, and R1h. Similarly, the sectors of Ring2 for OD were named R2a to R2h (Figure 1). The naming started from superior to temporal, then inferior to nasal. The left eye map was mirrored to the right eye to calculate the difference between the right and left eyes (Figure 1). The average epithelial thickness of each sector was calculated and displayed numerically over the corresponding area. The right eye minus left eye asymmetry (right − left (R-L)) was also calculated (Table 1).

Figure 1
Details of the mapping of corneal thickness and corneal epithelial thickness over the 6 mm diameter cornea from the analyzing report. The analyzed area is divided into three main parts (center, Ring1, and Ring2) and 17 sectors. In Ring1, the sectors were named anticlockwise for OD as R1a, R1b, R1c, R1d, R1e, R1f, R1g, and R1h. Similarly, the sectors of Ring2 for OD were named R2a to R2h. The naming started from superior to temporal, then inferior to nasal. The left eye map was mirrored.

Table 1
Distinction of corneal epithelial thickness between the right and left eyes.
| Right − left (R-L) | Mean difference (μm) | SEM | Sig. |
|---|---|---|---|
| Center | −0.34237 | 0.20142 | 0.175 |
| Ring1 | −0.35241 | 0.21179 | 0.068 |
| Ring2 | −0.34456 | 0.21583 | 0.093 |
| Avg. | −0.35361 | 0.21945 | 0.113 |
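The sector naming and left-eye mirroring described in this section can be captured in a small lookup table. The exact swap below (temporal and nasal sides exchanged, superior and inferior fixed) is my reading of the anticlockwise-for-OD convention, not code from the study.

```python
SECTORS = list("abcdefgh")  # R1a..R1h / R2a..R2h, anticlockwise for OD

# Horizontal mirroring swaps the temporal and nasal sides: the anticlockwise
# order reverses while 'a' (superior) and 'e' (inferior) stay fixed.
MIRROR = {"a": "a", "b": "h", "c": "g", "d": "f",
          "e": "e", "f": "d", "g": "c", "h": "b"}

def mirror_os_to_od(ring_os):
    """Re-key a left-eye ring (sector -> thickness, um) to right-eye orientation."""
    return {MIRROR[s]: v for s, v in ring_os.items()}

# Dummy values illustrating an R-L asymmetry calculation for one ring.
od = {s: 53.0 for s in SECTORS}
os_mirrored = mirror_os_to_od({s: 53.3 for s in SECTORS})
r_minus_l = {s: od[s] - os_mirrored[s] for s in SECTORS}
```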
### 2.4. Manifest Refraction Spherical Equivalent (MRSE) Grouping
A set of groups was formed considering the MRSE of the study population. Group Myopia-L consisted of a low-myopia population, defined as an MRSE magnitude of less than or equal to 3.00 D (n=26); group Myopia-M was defined as an MRSE magnitude of more than 3.00 D and less than or equal to 6.00 D (n=122); and group Myopia-H consisted of a high-myopia population with an MRSE magnitude of more than 6.00 D (n=67).
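A minimal sketch of this grouping, reading the cutoffs by MRSE magnitude as above (a hypothetical helper, not the authors' code):

```python
def mrse_group(mrse_d: float) -> str:
    """Assign an eye to a refraction group by the magnitude of its MRSE
    (diopters; myopic values are negative)."""
    magnitude = abs(mrse_d)
    if magnitude <= 3.00:
        return "Myopia-L"
    if magnitude <= 6.00:
        return "Myopia-M"
    return "Myopia-H"

assert mrse_group(-2.50) == "Myopia-L"
assert mrse_group(-5.34) == "Myopia-M"  # the cohort's mean MRSE
assert mrse_group(-8.00) == "Myopia-H"
```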
### 2.5. Corneal Topography
The anterior segment was imaged with the Pentacam (OCULUS GmbH, Wetzlar, Germany). In each acquisition, the rotating Scheimpflug camera captured 50 images automatically and measured 25,000 true elevation points. Owing to the good repeatability of this device [18, 19], an acquisition was included in the study only if its quality specification was “OK”; otherwise, it was repeated. The cornea front astigmatism axis (flat) parameter and the mean front corneal surface curvature (Km) were recorded from the Pentacam map.
### 2.6. Statistical Analysis
Statistical Product and Service Solutions software (SPSS version 20.0, Chicago, Illinois, USA) was used for the statistical analysis. Normal distribution of the data was assessed using the Kolmogorov-Smirnov test. Analysis of variance (ANOVA) was used to compare epithelial thickness across the sectors of the 6 mm diameter cornea and the differences in corneal epithelial thickness among the MRSE groups. Student's independent-samples t-test was used to investigate differences in epithelial thickness among parameters including gender, eye side, and R-L. Pearson's correlation coefficient was used to relate corneal epithelial thickness to corneal thickness (pachymetry), age, intraocular pressure (IOP), mean front corneal surface curvature (Km), cornea front astigmatism axis (flat), and total eye astigmatism axis. All significance levels were set at P<0.05.
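This analysis pipeline (ANOVA across groups, t-tests between two groups, Pearson correlations) maps directly onto standard SciPy calls. The sketch below uses simulated data with the study's group sizes, not the actual measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated per-eye epithelial thickness (um) with the study's group sizes.
groups = {"Myopia-L": rng.normal(53.8, 2.5, 26),
          "Myopia-M": rng.normal(53.6, 2.5, 122),
          "Myopia-H": rng.normal(52.8, 2.5, 67)}

f, p_anova = stats.f_oneway(*groups.values())      # ANOVA across MRSE groups

male = rng.normal(53.8, 2.5, 113)                  # independent-samples t-test,
female = rng.normal(52.4, 2.1, 102)                # e.g., male vs. female
t, p_gender = stats.ttest_ind(male, female)

epithelium = np.concatenate(list(groups.values())) # Pearson correlation, e.g.,
cct = rng.normal(534, 30, 215)                     # epithelium vs. pachymetry
r, p_corr = stats.pearsonr(epithelium, cct)

print(f"ANOVA p={p_anova:.3f}, gender p={p_gender:.3g}, r={r:.3f} (p={p_corr:.3f})")
```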
## 3. Results
### 3.1. Corneal Epithelium Distribution
Two hundred and fifteen eyes from 215 subjects were assigned to calculate myopic corneal epithelial thickness and corneal thickness in 17 sectors (Table 2). The central corneal epithelial thickness was 53.26 ± 2.66 μm. The average epithelial thicknesses of Ring1 and Ring2 were 53.30 ± 2.48 μm and 53.04 ± 2.38 μm, respectively. The central corneal thickness (CCT) was 534.24 ± 29.89 μm. The averages of Ring1 and Ring2 were 553.14 ± 30.56 μm and 579.64 ± 31.31 μm, respectively. As Figure 2(a) shows, no statistical difference was found among the center and the two rings in corneal epithelial thickness (P=0.536). The corneal thickness increased gradually from the center to the periphery (P<0.001, Figure 2(b)).

Table 2
The corneal epithelial thickness and corneal thickness in different locations.
| | | a | b | c | d | e | f | g | h | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|
| **Epithelial thickness (μm)** | | | | | | | | | | |
| Center | Avg. | | | | | | | | | 53.26 |
| | SD | | | | | | | | | 2.66 |
| Ring1 | Avg. | 52.21 | 52.74 | 53.43 | 53.59 | 54.08 | 54.12 | 53.51 | 52.73 | 53.30 |
| | SD | 2.62 | 2.66 | 2.58 | 2.56 | 2.57 | 2.56 | 2.60 | 2.64 | 2.48 |
| Ring2 | Avg. | 51.08 | 52.29 | 53.38 | 53.51 | 54.00 | 54.16 | 53.53 | 52.40 | 53.04 |
| | SD | 2.68 | 2.70 | 2.52 | 2.54 | 2.64 | 2.54 | 2.53 | 2.68 | 2.38 |
| **Total thickness (μm)** | | | | | | | | | | |
| Center | Avg. | | | | | | | | | 534.24 |
| | SD | | | | | | | | | 29.89 |
| Ring1 | Avg. | 566.99 | 556.14 | 542.89 | 539.83 | 544.30 | 550.44 | 557.72 | 566.80 | 553.14 |
| | SD | 31.88 | 31.55 | 30.94 | 30.79 | 30.39 | 30.10 | 30.64 | 31.33 | 30.56 |
| Ring2 | Avg. | 602.28 | 585.08 | 564.26 | 562.04 | 568.37 | 573.62 | 584.28 | 597.17 | 579.64 |
| | SD | 33.82 | 33.06 | 32.13 | 31.69 | 31.39 | 31.12 | 31.88 | 32.34 | 31.31 |

Figure 2
Box plots to show the thickness differences of three locations (center, Ring1, and Ring2) in the corneal epithelial thickness map (a) and corneal thickness map (b). Corneal thickness increased from the center to the periphery (b) while corneal epithelial thickness remained constant (a).
Significant differences among the sectors were found in both corneal epithelial thickness and corneal thickness (Figures 3(a) and 3(b)). As shown in Figure 3(a), R1e and R1f had markedly greater corneal epithelial thickness than the other sectors of Ring1 (P<0.05). Similarly, compared to the other sectors of Ring2, R2e and R2f also had greater corneal epithelial thickness (P<0.05). No statistical difference was found between R1e and R1f, nor between R2e and R2f. As shown in Figure 3(b), R1a and R1h had greater corneal thickness than the other sectors of Ring1 (P<0.001). R2a and R2h were also thicker than the other sectors of Ring2 (P<0.001). That is to say, the thickest part of the full-thickness cornea is the nasal-superior part.

Figure 3
The detailed corneal epithelial thickness (a) and corneal thickness (b) of different sectors in Ring1 and Ring2.
Figure 4 used color gradations to describe the difference in corneal epithelial thickness in each sector, with the average thickness displayed on it.

Figure 4
The distribution of corneal epithelial thickness in each sector using color gradations, with the average thickness displayed on it.

Table 3 showed that there was a weak positive correlation between corneal epithelial thickness and corneal thickness (r=0.148, P=0.031).

Table 3
Correlations between corneal epithelial thickness and some parameters.
| Location | Age r (P) | CT r (P) | Km r (P) | Axis-C r (P) | Axis-T r (P) | IOP r (P) |
|---|---|---|---|---|---|---|
| Center | −0.11 (0.045) | 0.157 (0.021) | 0.065 (0.340) | −0.004 (0.953) | 0.033 (0.628) | 0.023 (0.741) |
| Ring1 | −0.14 (0.038) | 0.148 (0.030) | 0.091 (0.185) | −0.061 (0.373) | 0.051 (0.454) | −0.005 (0.941) |
| Ring2 | −0.11 (0.058) | 0.140 (0.040) | 0.099 (0.148) | −0.087 (0.201) | 0.041 (0.553) | −0.037 (0.585) |
| Avg. | −0.13 (0.042) | 0.148 (0.031) | 0.088 (0.201) | −0.051 (0.456) | 0.043 (0.527) | −0.006 (0.934) |
CT: corneal thickness; Km: mean front corneal surface curvature; Axis-C: cornea front astigmatism axis (flat); Axis-T: astigmatic axis; IOP: intraocular pressure.
### 3.2. Epithelial Thickness Differences in Refraction-Specific Groups
As shown in Figure 5, differences in epithelial thickness among the refraction groups were found. The low and moderate myopia groups (Myopia-L and Myopia-M) were statistically thicker than the high myopia group (Myopia-H) in the center (0.95 μm, P=0.04; 0.73 μm, P=0.025), Ring1 (0.98 μm, P=0.015; 0.75 μm, P=0.037), and Ring2 (1.15 μm, P=0.002; 0.77 μm, P=0.022). There was no significant difference between Myopia-L and Myopia-M at any location (P>0.05).

Figure 5
For three locations (center, Ring1, and Ring2), differences in corneal epithelial thickness among the MRSE groups, which were divided according to manifest refraction (group Myopia-L for an MRSE magnitude of 3.00 D or less, group Myopia-M for 3.00 D to 6.00 D, and group Myopia-H for more than 6.00 D). ∗∗ and ∗∗∗ indicate P<0.01 and P<0.001, respectively.
### 3.3. Epithelial Thickness Differences between the Right and Left Eyes
The differences in corneal epithelial thickness between the right and left eyes were calculated and are described in Table 1. The mean R-L differences in the center, Ring1, and Ring2 were −0.34 μm, −0.35 μm, and −0.34 μm, respectively (P>0.05). Although the average epithelial thickness of the right eye was 0.35 μm thinner than that of the left eye, this difference was not statistically significant (P=0.113).
### 3.4. Epithelial Thickness Differences in Gender-Specific Groups
As shown in Figure 6, the sample was divided into two gender-specific groups: female (n=102) and male (n=113). For the female group, the average epithelial thickness was 52.43 ± 2.36 μm in the center, 52.39 ± 2.07 μm in Ring1, 52.26 ± 2.00 μm in Ring2, and 52.36 ± 2.05 μm on average. For the male group, the average epithelial thickness was 53.77 ± 2.71 μm in the center, 53.91 ± 2.55 μm in Ring1, 53.57 ± 2.46 μm in Ring2, and 53.75 ± 2.48 μm on average. The mean difference between males and females in epithelial thickness was 1.39 μm (P<0.001).

Figure 6
Difference in corneal epithelial thickness between males and females in three locations. ∗∗∗ indicates P<0.001.
### 3.5. Correlation with Age, IOP, Corneal Front Curvature, and Astigmatism
In Table 3, there was a slight negative correlation between corneal epithelial thickness and age on average (r=−0.13, P=0.042). No statistically significant correlation between corneal epithelial thickness and IOP was noted (r=−0.006, P=0.934). As for corneal front curvature, there was no statistically significant correlation between corneal epithelial thickness and corneal front curvature (r=0.088, P=0.201). Furthermore, Table 3 shows that no significant correlation was noted between corneal epithelial thickness and the cornea front astigmatism axis (flat; r=−0.051, P=0.456), nor between corneal epithelial thickness and the astigmatism axis of the total eye (r=0.043, P=0.527).
## 4. Discussion
A good knowledge of the corneal epithelium distribution may help a lot in many aspects of clinical work, such as screening for keratoconus before corneal refractive surgery [20], fitting contact lens [21, 22], and increasing the accuracy of corneal refractive surgery [23, 24].The distribution of both corneal thickness and corneal epithelial thickness follow a nonuniform pattern (Table2 and Figure 3).The thinnest part of corneal thickness is R1d and R2d, namely, temporal-inferior part. The thickest part is R1a and R1h for Ring1 and R2a and R2h for Ring2, namely, nasal-superior part. The result is in agreement with previously reported values in the use of other evaluation tools [25, 26].However, the distribution of corneal epithelial thickness is quite different from that of corneal thickness. On the map of corneal epithelial thickness, the thinnest part is R1a for Ring1 and R2a for Ring2. The thickest part is R1e and R1f for Ring1 and R2e and R2f for Ring2. In another word, the thinnest part is the superior and the thickest part is the nasal-inferior. Reinstein et al. [7] reported a similar result in the use of very high-frequency (VHF) digital ultrasound Some previous studies [13, 14, 27] also reported that the inferior side is thicker than the superior, just like this study did.Concerning the nasal-inferior part to be the thickest part of corneal epithelium over the entire corneal area, one possible explanation of the asymmetry is the eye abrasion caused by the eyelid. Doane [28] reported that the upper eyelid descended fastest at the time it crossed the visual axis. Therefore, the eyelid might be rubbing the corneal epithelium and applied greater forces on the superior cornea than on the inferior part. This might have caused the inferior part of the corneal epithelial thickness to be thicker than the superior part. In this study, weak positive correlation was found between corneal epithelial thickness and corneal thickness (r=0.148, P=0.031). The thickest part of full-thickness cornea is the nasal-superior part. Thus, we postulate that the greater corneal epithelial thickness of the nasal side is related to the corneal thickness. The natural structural difference may be one of the reasons.It is a limitation here that the tear film was included in the measurement due to the restriction of the machine. Previous study [29] showed that the precorneal tear film was 4.79 ± 0.88 μm on average. This may influence the results of the corneal epithelium distribution, especially the differences between different locations. However, the OCT images were acquired within 5 seconds. We have excluded subjects who had dry eye. We supposed that the tear film was steady during the acquisition process. This would not influence the result too much. Further fundamental research is necessary to search for the reason behind this finding.The corneal thickness increases gradually from the center to the periphery. However, there is no significant difference among the center, Ring1, and Ring2 in corneal epithelial thickness map in this study. It means that the corneal epithelial thickness remains constant on average from the center to the periphery over the 6 mm diameter area. Tao et al. [30] also reported that the corneal epithelial thickness remained at the same thickness with the use of a different custom-built SD-OCT. In his study, only several points from different locations were acquired.The low to moderate myopia groups (group Myopia-L and Myopia-M) were statistically thicker than group Myopia-H. 
According to this, we could deduce that people with high myopia tend to have thinner corneal epithelium than others do. In a clinical study done by Gowrisankaran et al. [31], a correlation between refractive error and blink rate was found. They reported that a refractive error could cause an increasing blink rate (P=0.005). Thus, we deduce that the high myopia patients blink more times than others do. The more frequent eye friction can lead to the thinner epithelial thickness. Furthermore, high myopia is an ocular disease caused by excessive axial elongation. We could also deduce that it may cause thinner corneal epithelial thickness in high myopia eyes. This needs further pathology to confirm. However, some results in previous studies were different. They found that there was no correlation between corneal epithelial thickness and refraction [17, 32]. Further study is needed behind this finding.Male subjects have thicker corneal epithelial thickness than female subjects do in all three locations (center, Ring1, and Ring2, M-F = 1.39μm on average, P<0.001). Kanellopoulos and Asimellis [14] also did a similar report of central epithelial thickness. Small differences were noted between male (54.10 ± 3.34 mm) and female (52.58 ± 3.19 mm) subjects. Previous research [33, 34] revealed that gonadal hormones may affect ocular tissue growth. This may cause the difference of corneal epithelial thickness between male and female.The correlation between corneal epithelial thickness and age is also negative in this study. Kanellopoulos and Asimellis [14] reported that a positive correlation was found between corneal epithelial thickness and age. Reinstein et al. [7] reported that no correlation was found between the two parameters. Different from the other two studies, only young subjects (18–40 years) were recruited in this study. Therefore, the result could be different due to the different age group among different studies.Since many young patients suffered from myopia, the information provided by this study may to some degree help researchers or others who are interested in corneal epithelial mapping to get more information and develop further research.Due to the measuring limitation of the SD-OCT, the axial resolution of the system is 5 microns. Because the subjects were healthy except for myopia, their corneal epithelial thicknesses were in the normal range (45–60 microns, 53.26 on average). Therefore, there would not be too much difference numerously among them. Some of the differences observed were lower than 5 microns. Some previous studies [13, 16, 35] also suffered from the same limitation in reporting the results. Maybe the invention of new measuring device with higher resolution will help solve the problem.To sum up, the profile of the corneal epithelial thickness in myopic eyes was described in this study and confirmed to be nonuniform over the entire cornea. People with high myopia tend to have thinner corneal epithelium than low–moderate myopic patients do. Many factors can be related to the corneal epithelial thickness, such as age, gender, and corneal thickness. Further investigation of the correlation with corneal epithelial thickness might also be needed to expose a specific role for corneal epithelium, such as corneal biomechanics and corneal wounding healing after corneal surgery.
---
*Source: 1018321-2017-05-21.xml* | 1018321-2017-05-21_1018321-2017-05-21.md | 36,944 | Detailed Distribution of Corneal Epithelial Thickness and Correlated Characteristics Measured with SD-OCT in Myopic Eyes | Yanan Wu; Yan Wang | Journal of Ophthalmology
(2017) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2017/1018321 | 1018321-2017-05-21.xml | ---
## Abstract
Purpose. To investigate the detailed distribution of corneal epithelial thickness in individual sectors and its correlated characteristics in myopic eyes. Methods. SD-OCT was used to measure the corneal epithelial thickness distribution profile. Differences in corneal epithelial thickness across several parameters and correlations with ocular characteristics were calculated. Results. The thickest and thinnest parts of the epithelium were found at the nasal-inferior sector (P<0.05) and the superior side (P<0.05), respectively. Subjects in the low and moderate myopia groups had thicker epithelium than those in the high myopia group (P<0.05). The epithelium was 1.39 μm thicker in male subjects than in female subjects (P<0.001). There was a slight negative correlation between corneal epithelial thickness and age (r=−0.13, P=0.042), and a weak positive correlation between corneal epithelial thickness and corneal thickness (r=0.148, P=0.031). No correlations were found between corneal epithelial thickness and astigmatism axis, corneal front curvature, or IOP. Conclusions. The epithelial thickness is not evenly distributed across the cornea; the thickest location is the nasal-inferior sector. People with high myopia tend to have thinner corneal epithelium than low-to-moderate myopic patients. The corneal epithelial thickness is likely affected by parameters such as age, gender, and corneal thickness.
---
## Body
## 1. Introduction
The corneal epithelium plays a very important role in protecting the eye, as it is the outermost layer, and in maintaining high optical quality [1, 2]. The epithelium alone has been found to contribute 0.85 D of corneal refraction at the 3.6 mm diameter zone [3]. Furthermore, the corneal epithelium is not of homogeneous depth and tends to alter its thickness profile to compensate for an irregular corneal stromal surface, yielding a regular surface [4]. Some corneal surgeries and excimer-laser corneal refractive surgeries, such as transepithelial photorefractive keratectomy (TransPRK) [5] and phototherapeutic keratectomy (PTK), act directly on the corneal epithelium.

Since the corneal epithelium contributes substantially to corneal refraction and informs the design of these surgeries, a better knowledge of the characteristics of corneal epithelial thickness distribution is important. Previously, a few instruments were applied to in vivo corneal epithelial thickness measurement, including very high-frequency (VHF) digital ultrasound and confocal microscopy, and several mapping studies were performed with them [6–8]. However, both techniques are invasive and require anesthetic, which may increase the risk of corneal infection and decrease accuracy because of possible contact-related corneal compression [6, 9, 10]. In recent years, SD-OCT has become a promising method for studying corneal epithelial thickness because it is noninvasive and has shown good repeatability and accuracy [11, 12]. Its noncontact, high-speed, high-resolution character makes SD-OCT a popular device for assessing corneal epithelial thickness. To date, only a few studies [13–17] have mapped the corneal epithelium with a noncontact device. This study aimed to determine the detailed distribution of the corneal epithelium.

Furthermore, little is known about differences in epithelial thickness among degrees of myopia. Therefore, with the support of a large sample, this study investigated the differences in corneal epithelial thickness across myopic degrees. The distribution of corneal epithelial thickness in finer subdivisions and its correlations with various parameters, such as age, corneal thickness, IOP, astigmatism, and corneal front curvature, were also analyzed.
## 2. Methods
### 2.1. Subjects
Two hundred and fifteen eyes from 215 healthy subjects (102 women, 113 men) with a mean age of 21.26 ± 4.35 years (range, 18 to 40 years) and a mean manifest refraction spherical equivalent (MRSE) of −5.34 ± 2.19 D (ranging from −1.125 D to −12.00 D) participated in this study. Subjects underwent a complete ophthalmologic evaluation, including intraocular pressure (IOP) measurement, best-corrected distance visual acuity (BCVA), slit lamp and ophthalmoscope examination, corneal topography (Pentacam HR, OCULUS GmbH, Wetzlar, Germany), Schirmer I test, and tear break-up time test. Every subject had a best-corrected distance visual acuity of 20/25 or better. All measurements were taken without the application of artificial tears or mydriatic eye drops. The exclusion criteria were suspicious or frank keratoconus, a history of contact lens wear, current or prior ocular pathology, and dry eye disorder. All subjects were informed of the aim of the study, and their consent was obtained at their first clinical visit. This prospective study was performed at the Refractive Surgery Center of the Tianjin Ophthalmology Hospital, Nankai University, and received the approval of the Ethics Committee of our institution, in accordance with the Declaration of Helsinki.
### 2.2. OCT
An ultrahigh-resolution SD-OCT system (RTVue-100, Optovue Inc., Fremont, CA) was used in this study. The system works at an 830 nm wavelength with a scan speed of 26,000 axial scans per second and an axial resolution of 5 μm. With an L-Cam lens attached, it takes 8 meridional B-scans per acquisition, each consisting of 1024 A-scans. A Pachymetry_Cpwr scan pattern centered at the pupil center was used to map the cornea. The RTVue-100 corneal epithelial thickness mapping and pachymetry software (version 6.11.0.12) automatically processed the OCT scans to provide corneal epithelial thickness and pachymetry (corneal thickness) maps corresponding to a 6 mm diameter area. A well-trained investigator conducted all the measurements, and three repeated measurements were collected and averaged in each case.
### 2.3. Corneal Epithelial Mapping
The analysis area comprised two 6 mm diameter disks: the corneal thickness map and the corneal epithelial thickness map. According to the analysis system, each map was divided into 3 zones by diameter: central 2 mm, inner ring from 2 to 5 mm, and outer ring from 5 to 6 mm (Figure 1). The central 2 mm zone was designated the center. The 2 to 5 mm zone (Ring1) and the 5 to 6 mm zone (Ring2) were each divided into 8 equal sectors. The 8 sectors of Ring1 were named anticlockwise for OD as R1a, R1b, R1c, R1d, R1e, R1f, R1g, and R1h; similarly, the sectors of Ring2 were named R2a to R2h (Figure 1). The naming starts at the superior sector and proceeds to temporal, then inferior, then nasal. The left eye map was mirrored to the right eye orientation to calculate the difference between the right and left eyes (Figure 1); a computational sketch of this mirroring is given after Table 1. The average epithelial thickness of each sector was calculated and displayed numerically over the corresponding area. The right-minus-left asymmetry (R−L) was also calculated (Table 1).

Figure 1
Details of the mapping of corneal thickness and corneal epithelial thickness over the 6 mm diameter cornea, from the analysis report. The analysis area is divided into three main parts (center, Ring1, and Ring2) and 17 sectors. In Ring1, the sectors are named anticlockwise for OD as R1a through R1h; similarly, the sectors of Ring2 are named R2a to R2h. The naming starts superior, proceeding to temporal, then inferior, then nasal. The left eye map was mirrored.

Table 1
Distinction of corneal epithelial thickness between the right and left eyes.
| Right − left (R−L) | Mean difference (μm) | SEM | Sig. |
| --- | --- | --- | --- |
| Center | −0.34237 | 0.20142 | 0.175 |
| Ring1 | −0.35241 | 0.21179 | 0.068 |
| Ring2 | −0.34456 | 0.21583 | 0.093 |
| Avg. | −0.35361 | 0.21945 | 0.113 |
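To make the mirroring step described in Section 2.3 concrete, the following is a minimal, hypothetical Python sketch (not the authors' code) for remapping left-eye (OS) sector values into right-eye (OD) orientation and computing per-sector R−L differences. The sector-swap rule (temporal and nasal sides exchange, superior and inferior stay fixed) is inferred from the naming convention above, and the numeric values are made up for illustration.

```python
# Mirror rule about the vertical meridian: superior (a) and inferior (e) are
# unchanged; temporal-side sectors (b, c, d) swap with nasal-side (h, g, f).
MIRROR = {"a": "a", "b": "h", "c": "g", "d": "f",
          "e": "e", "f": "d", "g": "c", "h": "b"}

def mirror_left_map(left_map):
    """Remap OS sector thicknesses (keys like 'R1b') into OD orientation."""
    return {sec[:-1] + MIRROR[sec[-1]]: v for sec, v in left_map.items()}

def r_minus_l(right_map, left_map):
    """Per-sector right-minus-left thickness difference (micrometers)."""
    mirrored = mirror_left_map(left_map)
    return {sec: round(right_map[sec] - mirrored[sec], 2) for sec in right_map}

# Illustrative Ring1 values (um); a real map would carry all 17 sectors.
od_map = {"R1a": 52.21, "R1e": 54.08}
os_map = {"R1a": 52.55, "R1e": 54.42}
print(r_minus_l(od_map, os_map))  # {'R1a': -0.34, 'R1e': -0.34}
```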
### 2.4. Manifest Refraction Spherical Equivalent (MRSE) Grouping
Groups were formed according to the MRSE of the study population. Group Myopia-L consisted of a low-myopia population, defined as an MRSE magnitude of 3.00 D or less (MRSE ≥ −3.00 D; n=26); group Myopia-M was defined as an MRSE magnitude of more than 3.00 D and up to 6.00 D (−6.00 D ≤ MRSE < −3.00 D; n=122); and group Myopia-H consisted of a high-myopia population with an MRSE magnitude of more than 6.00 D (MRSE < −6.00 D; n=67).
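As a minimal illustration of this grouping rule (our own sketch with hypothetical function names; MRSE is signed, so more negative means more myopic):

```python
def mrse_group(mrse_d: float) -> str:
    """Classify a manifest refraction spherical equivalent (dioptres)."""
    if mrse_d >= -3.00:
        return "Myopia-L"   # low myopia: magnitude <= 3.00 D
    if mrse_d >= -6.00:
        return "Myopia-M"   # moderate: magnitude in (3.00, 6.00] D
    return "Myopia-H"       # high myopia: magnitude > 6.00 D

print(mrse_group(-2.50), mrse_group(-5.34), mrse_group(-7.25))
# Myopia-L Myopia-M Myopia-H
```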
### 2.5. Corneal Topography
The anterior segment was imaged with the Pentacam (OCULUS GmbH, Wetzlar, Germany). In each acquisition, the rotating Scheimpflug camera automatically captured 50 images and measured 25,000 true elevation points. Given the good repeatability of this device [18, 19], an acquisition was included in the study only if its quality specification was "OK"; otherwise, the acquisition was repeated. The cornea front astigmatism axis (flat) and the mean front corneal surface curvature (Km) were recorded from the Pentacam map.
### 2.6. Statistical Analysis
Statistical Product and Service Solutions (SPSS version 20.0, Chicago, Illinois, USA) was used for the statistical analysis. Normality of the data was assessed using the Kolmogorov-Smirnov test. Analysis of variance (ANOVA) was used to compare epithelial thickness among the sectors of the 6 mm diameter cornea and to compare corneal epithelial thickness among the MRSE groups. Student's independent-samples t-test was used to investigate differences in epithelial thickness across other parameters, including gender, eye side, and R−L. Pearson's correlation coefficient was used to relate corneal epithelial thickness to corneal thickness (pachymetry), age, intraocular pressure (IOP), mean front corneal surface curvature (Km), corneal front astigmatism axis (flat), and total eye astigmatism axis. The significance level was set at P<0.05.
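To illustrate this analysis pipeline, here is a brief Python/SciPy sketch of the same tests. The original analysis was run in SPSS 20.0, so this is an illustrative re-expression, not the authors' code, and the arrays below are random placeholders rather than study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder data: epithelial thickness (um) at three locations, plus age/sex.
center = rng.normal(53.3, 2.7, 215)
ring1  = rng.normal(53.3, 2.5, 215)
ring2  = rng.normal(53.0, 2.4, 215)
age    = rng.uniform(18, 40, 215)
sex    = rng.integers(0, 2, 215)          # 0 = female, 1 = male

# Normality (Kolmogorov-Smirnov against a fitted normal distribution)
print(stats.kstest(center, "norm", args=(center.mean(), center.std())))

# One-way ANOVA across the three locations
print(stats.f_oneway(center, ring1, ring2))

# Independent-samples t-test between gender groups
print(stats.ttest_ind(center[sex == 1], center[sex == 0]))

# Pearson correlation between thickness and age
print(stats.pearsonr(center, age))
```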
## 3. Results
### 3.1. Corneal Epithelium Distribution
Two hundred and fifteen eyes from 215 subjects were used to calculate myopic corneal epithelial thickness and corneal thickness in 17 sectors (Table 2). The central corneal epithelial thickness was 53.26 ± 2.66 μm, and the average epithelial thicknesses of Ring1 and Ring2 were 53.30 ± 2.48 μm and 53.04 ± 2.38 μm, respectively. The central corneal thickness (CCT) was 534.24 ± 29.89 μm, and the average corneal thicknesses of Ring1 and Ring2 were 553.14 ± 30.56 μm and 579.64 ± 31.31 μm, respectively. As Figure 2(a) shows, no statistically significant difference in corneal epithelial thickness was found among the center and the two rings (P=0.536). The corneal thickness increased gradually from the center to the periphery (P<0.001, Figure 2(b)).

Table 2
The corneal epithelial thickness and corneal thickness in different locations.
| | | a | b | c | d | e | f | g | h | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Epithelial thickness (μm)** | | | | | | | | | | |
| Center | Avg. | — | — | — | — | — | — | — | — | 53.26 |
| | SD | — | — | — | — | — | — | — | — | 2.66 |
| Ring1 | Avg. | 52.21 | 52.74 | 53.43 | 53.59 | 54.08 | 54.12 | 53.51 | 52.73 | 53.30 |
| | SD | 2.62 | 2.66 | 2.58 | 2.56 | 2.57 | 2.56 | 2.60 | 2.64 | 2.48 |
| Ring2 | Avg. | 51.08 | 52.29 | 53.38 | 53.51 | 54.00 | 54.16 | 53.53 | 52.40 | 53.04 |
| | SD | 2.68 | 2.70 | 2.52 | 2.54 | 2.64 | 2.54 | 2.53 | 2.68 | 2.38 |
| **Total thickness (μm)** | | | | | | | | | | |
| Center | Avg. | — | — | — | — | — | — | — | — | 534.24 |
| | SD | — | — | — | — | — | — | — | — | 29.89 |
| Ring1 | Avg. | 566.99 | 556.14 | 542.89 | 539.83 | 544.30 | 550.44 | 557.72 | 566.80 | 553.14 |
| | SD | 31.88 | 31.55 | 30.94 | 30.79 | 30.39 | 30.10 | 30.64 | 31.33 | 30.56 |
| Ring2 | Avg. | 602.28 | 585.08 | 564.26 | 562.04 | 568.37 | 573.62 | 584.28 | 597.17 | 579.64 |
| | SD | 33.82 | 33.06 | 32.13 | 31.69 | 31.39 | 31.12 | 31.88 | 32.34 | 31.31 |

Figure 2
Box plots to show the thickness differences of three locations (center, Ring1, and Ring2) in the corneal epithelial thickness map (a) and corneal thickness map (b). Corneal thickness increased from the center to the periphery (b) while corneal epithelial thickness remained constant (a).
(a)
(b)

Significant differences among individual sectors were found in both corneal epithelial thickness and corneal thickness (Figures 3(a) and 3(b)). As shown in Figure 3(a), R1e and R1f had significantly greater epithelial thickness than the other sectors of Ring1 (P<0.05); similarly, R2e and R2f had significantly greater epithelial thickness than the other sectors of Ring2 (P<0.05). No statistically significant difference was found between R1e and R1f, nor between R2e and R2f. As shown in Figure 3(b), R1a and R1h were significantly thicker than the other sectors of Ring1 in corneal thickness (P<0.001), and R2a and R2h were thicker than the other sectors of Ring2 (P<0.001). That is, the thickest part of the full-thickness cornea is the nasal-superior part.

Figure 3
The detailed corneal epithelial thickness (a) and corneal thickness (b) of different sectors in Ring1 and Ring2.
(a)
(b)

Figure 4 uses color gradations to depict the differences among sectors of corneal epithelial thickness, with the average thickness displayed on each sector.

Figure 4
The distribution of corneal epithelial thickness in each sector, shown as color gradations with the average thickness displayed on each sector.

Table 3 shows a weak positive correlation between corneal epithelial thickness and corneal thickness (r=0.148, P=0.031).

Table 3
Correlations between corneal epithelial thickness and some parameters.
| Location | Age (r, P) | CT (r, P) | Km (r, P) | Axis-C (r, P) | Axis-T (r, P) | IOP (r, P) |
| --- | --- | --- | --- | --- | --- | --- |
| Center | −0.11, 0.045 | 0.157, 0.021 | 0.065, 0.340 | −0.004, 0.953 | 0.033, 0.628 | 0.023, 0.741 |
| Ring1 | −0.14, 0.038 | 0.148, 0.030 | 0.091, 0.185 | −0.061, 0.373 | 0.051, 0.454 | −0.005, 0.941 |
| Ring2 | −0.11, 0.058 | 0.140, 0.040 | 0.099, 0.148 | −0.087, 0.201 | 0.041, 0.553 | −0.037, 0.585 |
| Avg. | −0.13, 0.042 | 0.148, 0.031 | 0.088, 0.201 | −0.051, 0.456 | 0.043, 0.527 | −0.006, 0.934 |
CT: corneal thickness; Km: mean front corneal surface curvature; Axis-C: cornea front astigmatism axis (flat); Axis-T: astigmatic axis; IOP: intraocular pressure.
### 3.2. Epithelial Thickness Differences in Refraction-Specific Groups
As shown in Figure 5, epithelial thickness differed among refraction groups. The low and moderate myopia groups (Myopia-L and Myopia-M) had significantly thicker epithelium than the high myopia group (Myopia-H) at the center (by 0.95 μm, P=0.04, and 0.73 μm, P=0.025, respectively), in Ring1 (0.98 μm, P=0.015; 0.75 μm, P=0.037), and in Ring2 (1.15 μm, P=0.002; 0.77 μm, P=0.022). There was no significant difference between Myopia-L and Myopia-M at any location (P>0.05).

Figure 5
Differences in corneal epithelial thickness at three locations (center, Ring1, and Ring2) among the MRSE groups defined by manifest refraction (group Myopia-L: MRSE magnitude of 3.00 D or less; group Myopia-M: 3.00 D to 6.00 D; group Myopia-H: more than 6.00 D). ∗∗ and ∗∗∗ indicate P<0.01 and P<0.001, respectively.
### 3.3. Epithelial Thickness Differences between the Right and Left Eyes
The differences in corneal epithelial thickness between the right and left eyes are described in Table 1. The mean R−L differences at the center, Ring1, and Ring2 were −0.34 μm, −0.35 μm, and −0.34 μm, respectively (P>0.05). Although the average epithelial thickness of the right eye was 0.35 μm thinner than that of the left eye, this difference was not statistically significant (P=0.113).
### 3.4. Epithelial Thickness Differences in Gender-Specific Groups
As shown in Figure 6, the sample was divided into two gender-specific groups: female (n=102) and male (n=113). For the female group, the average epithelial thickness was 52.43 ± 2.36 μm at the center, 52.39 ± 2.07 μm in Ring1, 52.26 ± 2.00 μm in Ring2, and 52.36 ± 2.05 μm overall. For the male group, the corresponding values were 53.77 ± 2.71 μm, 53.91 ± 2.55 μm, 53.57 ± 2.46 μm, and 53.75 ± 2.48 μm. The mean male–female difference in epithelial thickness was 1.39 μm (P<0.001).

Figure 6
Difference of corneal epithelial thickness between male and female subjects at three locations. ∗∗∗ indicates P<0.001.
### 3.5. Correlation with Age, IOP, Corneal Front Curvature, and Astigmatism
As shown in Table 3, there was a slight negative correlation between corneal epithelial thickness and age on average (r=−0.13, P=0.042). No statistically significant correlation was noted between corneal epithelial thickness and IOP (r=−0.006, P=0.934) or corneal front curvature (r=0.088, P=0.201). Likewise, no significant correlation was found between corneal epithelial thickness and the cornea front astigmatism axis (flat; r=−0.051, P=0.456) or the total-eye astigmatism axis (r=0.043, P=0.527).
## 4. Discussion
A good knowledge of the corneal epithelium distribution is valuable in many aspects of clinical work, such as screening for keratoconus before corneal refractive surgery [20], fitting contact lenses [21, 22], and increasing the accuracy of corneal refractive surgery [23, 24].

The distributions of both corneal thickness and corneal epithelial thickness follow a nonuniform pattern (Table 2 and Figure 3). The thinnest sectors of corneal thickness are R1d and R2d, namely, the temporal-inferior part; the thickest are R1a and R1h for Ring1 and R2a and R2h for Ring2, namely, the nasal-superior part. This result agrees with values previously reported with other evaluation tools [25, 26]. However, the distribution of corneal epithelial thickness is quite different from that of corneal thickness. On the corneal epithelial thickness map, the thinnest sector is R1a for Ring1 and R2a for Ring2, while the thickest sectors are R1e and R1f for Ring1 and R2e and R2f for Ring2. In other words, the thinnest part is superior and the thickest part is nasal-inferior. Reinstein et al. [7] reported a similar result using very high-frequency (VHF) digital ultrasound. Some previous studies [13, 14, 27] also reported that the inferior side is thicker than the superior, as found in this study.

Concerning the nasal-inferior part being the thickest region of the corneal epithelium over the entire cornea, one possible explanation for the asymmetry is abrasion by the eyelid. Doane [28] reported that the upper eyelid descends fastest as it crosses the visual axis. The eyelid may therefore rub the corneal epithelium and apply greater force on the superior cornea than on the inferior part, which could cause the inferior corneal epithelium to be thicker than the superior. In this study, a weak positive correlation was found between corneal epithelial thickness and corneal thickness (r=0.148, P=0.031), and the thickest part of the full-thickness cornea is the nasal-superior part. Thus, we postulate that the greater epithelial thickness on the nasal side is related to the corneal thickness; a natural structural difference may be one of the reasons.

A limitation here is that the tear film was included in the measurement owing to the restriction of the instrument. A previous study [29] showed that the precorneal tear film is 4.79 ± 0.88 μm thick on average, which may influence the measured epithelium distribution, especially the differences between locations. However, the OCT images were acquired within 5 seconds, subjects with dry eye were excluded, and we assumed the tear film was stable during acquisition, so it is unlikely to have influenced the results substantially. Further fundamental research is needed to clarify this point.

The corneal thickness increases gradually from the center to the periphery, whereas no significant difference was found among the center, Ring1, and Ring2 on the corneal epithelial thickness map in this study. This means that the corneal epithelial thickness remains constant on average from the center to the periphery over the 6 mm diameter area. Tao et al. [30] also reported that the corneal epithelial thickness remained constant using a different custom-built SD-OCT, although only a few points from different locations were measured in their study.

The low and moderate myopia groups (Myopia-L and Myopia-M) had significantly thicker epithelia than group Myopia-H.
From this, we infer that people with high myopia tend to have thinner corneal epithelium than others. In a clinical study by Gowrisankaran et al. [31], a correlation between refractive error and blink rate was found: a refractive error could increase the blink rate (P=0.005). We therefore speculate that patients with high myopia blink more often than others and that the more frequent eyelid friction leads to a thinner epithelium. Furthermore, high myopia is an ocular condition caused by excessive axial elongation, which may also contribute to a thinner corneal epithelium in highly myopic eyes; this hypothesis requires pathological confirmation. However, some previous studies reported different results, finding no correlation between corneal epithelial thickness and refraction [17, 32]. Further study is needed to resolve this discrepancy.

Male subjects had thicker corneal epithelium than female subjects at all three locations (center, Ring1, and Ring2; M−F = 1.39 μm on average, P<0.001). Kanellopoulos and Asimellis [14] reported a similar result for central epithelial thickness, with small differences between male (54.10 ± 3.34 μm) and female (52.58 ± 3.19 μm) subjects. Previous research [33, 34] revealed that gonadal hormones may affect ocular tissue growth, which may underlie the difference in corneal epithelial thickness between males and females.

The correlation between corneal epithelial thickness and age was also negative in this study. Kanellopoulos and Asimellis [14] reported a positive correlation between corneal epithelial thickness and age, while Reinstein et al. [7] reported no correlation between the two parameters. Unlike those two studies, only young subjects (18–40 years) were recruited here, so the result could differ because of the different age ranges among studies. Since many young patients suffer from myopia, the information provided by this study may help researchers interested in corneal epithelial mapping to develop further work.

A measurement limitation of the SD-OCT is its axial resolution of 5 microns. Because the subjects were healthy apart from myopia, their corneal epithelial thicknesses fell within the normal range (45–60 microns, 53.26 on average), so the absolute differences among them were small, and some of the observed differences were below 5 microns. Some previous studies [13, 16, 35] faced the same limitation in reporting their results; measuring devices with higher resolution may resolve this problem.

To sum up, the profile of the corneal epithelial thickness in myopic eyes was described in this study and confirmed to be nonuniform over the entire cornea. People with high myopia tend to have thinner corneal epithelium than low-to-moderate myopic patients. Several factors are related to corneal epithelial thickness, such as age, gender, and corneal thickness. Further investigation of correlates of corneal epithelial thickness, such as corneal biomechanics and corneal wound healing after corneal surgery, may expose a more specific role for the corneal epithelium.
---
*Source: 1018321-2017-05-21.xml* | 2017 |
# A Combined Experimental and First-Principle Calculation (DFT Study) for In Situ Polymer Inclusion Membrane-Assisted Growth of Metal-Organic Frameworks (MOFs)
**Authors:** Reza Darvishi; Esmaeil Pakizeh
**Journal:** International Journal of Polymer Science
(2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1018347
---
## Abstract
A simple yet effective strategy was developed to prepare a metal-organic framework- (MOF-) based asymmetric membrane by depositing a zeolitic imidazolate framework-8 (Zif-8) layer on the aminosilane-functionalized surface of a polymer inclusion membrane via an in situ growth process. During the extraction of the ligand molecules from the source to the stripping compartment, metal ions react with the ligand, and layers of Zif-8 gradually grow onto the aminosilane-modified polymer inclusion membrane (PIM). The surface-grown Zif-8 nanocrystalline layer was characterized by powder X-ray diffraction, adsorption-desorption analysis, and scanning electron microscopy. The potential use of these Zif-8-supported PIM membranes for the separation of the gases N2, CH4, and CO2 was evaluated at two temperatures (25 and 50°C) and three pressures (1, 3, and 5 bar) by comparing the permeability and selectivity of these membranes with neat PIM. The gas permeability of both pure PIM (P_CO2 = 799.2 barrer) and PIM-co-MOF (P_CO2 = 675.8 barrer) increases with temperature for all three gases, and the permeation rate order was CO2 > CH4 > N2. The results showed that a layer of Zif-8 on the surface of the polymer inclusion membrane yields a slightly reduced permeability (~21%) but an enhanced selectivity of up to ~70% for CO2/CH4 and ~34% for CO2/N2. For both membrane types, the ideal permselectivity decreases with temperature, and this decrease was slightly more pronounced for PIM-co-MOF. To elucidate the electronic structure and the optical and adsorption properties of Zif-8 and M+Zif-8 (M = N2, CH4, and CO2) compounds, periodic plane-wave density functional theory (DFT) calculations were used. The electronic band structures and densities of states of pure Zif-8 showed that this compound is metallic. The DFT formation energies of the M+Zif-8 compounds showed that the CO2+Zif-8 composition is more stable than the others, suggesting that Zif-8 has a greater tendency to adsorb the CO2 molecule than the other molecules. Consistent with these results, DFT optical calculations showed that the affinity of the CO2+Zif-8 composition for absorbing infrared light is greater than that of the other compounds.
---
## Body
## 1. Introduction
Membrane technology has been widely applied across various industries, such as medicine (blood fractionation), purification and desalination of water, and gas and mixture separation [1–7]. Membrane gas separation has potential for applications such as oxygen enrichment, nitrogen generation, hydrogen recovery, and carbon dioxide removal [7, 8]. The growth of the gas separation membrane market in recent years is attributed to the increasing demand for carbon dioxide removal [8–11]. The development of a suitable polymer is key to any further research on gas separation membranes [7]. Over the past few decades, many polymers have been studied for gas membrane applications, but relatively few have been established as common gas separation membranes [7, 9, 12]. Polymers for gas separation processes must meet several requirements simultaneously, such as a rapid mass transfer rate and high selectivity towards a specific gas, the ability to easily form desired membrane configurations, and resistance to swelling-induced plasticization. The aim of new material development is to combine high permeability with high permselectivity [8–11, 13]. Polymer inclusion membranes (PIMs) and metal-organic frameworks (MOFs) have been investigated for membrane applications of potential interest in gas recycling and recovery [13, 14]. MOFs have been introduced as novel fillers for incorporation in many different polymer matrices to form composite or mixed-matrix membranes (MMMs) that achieve good permeability and high selectivity [10, 15]. Rodenas et al. [16] studied NH2-MIL-53(Al)-filled polyimide-based MMMs and reported that the stability, selectivity, and permeability for CO2/CH4 separation are enhanced compared with unfilled membranes. In principle, highly permeable MOFs suffer from brittleness and a lack of flexibility, hindering their fabrication into continuous sheets and thus limiting their practical application. These drawbacks can be overcome by incorporating them into a polymer matrix. However, most polymers are incompatible with MOF particles, and uniform dispersion of the MOF in MMMs is difficult to achieve [4, 10, 15–18]. The lack of a strong MOF-polymer interface can cause particle agglomeration and nonselective interfacial voids. On the other hand, high filler loadings can make MMMs brittle and rigid and reduce their mechanical properties. Besides these issues, the polymer chains can penetrate the pores of MOF fillers, partially blocking the pore entrances and thus reducing the gas permeability of the membranes [15, 17, 19].

Polymer inclusion membranes (PIMs), dense carrier-mediated transport membranes, are widely used for selectively recovering a target solute from a complex mixture. They are a type of self-supported liquid membrane in which extraction and stripping are carried out in one operation with high selectivity for ion transport [3, 4]. These membranes are easy to fabricate and have outstanding mechanical properties. PIMs consist of a polymer, a plasticizer, and a carrier molecule facilitating the transport of both organic and inorganic species. The polymer support provides mechanical strength, the plasticizer improves flexibility, and the liquid phase facilitates the mobility of the carrier molecule.
The carrier molecule acts as a guest-specific host that can bind to target species through noncovalent intermolecular interactions, such as van der Waals forces, hydrophobic interactions, or hydrogen bonds, thereby providing selective membrane permeability for the target species [3]. Kebiche-Senhadji et al. [15] studied the pure-gas permeation behavior of a CTA-based PIM containing an acidic carrier and found that gas permeation and CO2/H2 permselectivity increase upon incorporation of the plasticizer and acidic carrier into the CTA.

This paper proposes a new approach for developing an asymmetric composite membrane by growing Zeolitic imidazolate framework (Zif-8) particles on the surface of a novel polymer inclusion membrane, which was shown in previous work to be efficient in facilitating the transport of calcium cations [4]. To enhance the adhesion between the PIM support and the Zif-8 particles, the surface of the solid substrate was modified through the reaction between an aminosilane compound and free hydroxyl groups on the polymer backbone [4]. A comparative study of gas transport through the two types of membranes (pure PIM and Zif-8-coated PIM, coded PIM-co-MOF) was performed to evaluate more precisely the effect of the deposited Zif-8 crystal layer on membrane performance. The electronic band structures and densities of states of the Zif-8 layer were also calculated from first principles based on density functional theory (DFT). The optical and adsorption properties of Zif-8 and M+Zif-8 (M = N2, CH4, and CO2) were likewise calculated with DFT, and the results agree with the experimental data. The observations showed that the new class of PIM-co-MOF membrane is more flexible than high-MOF-content MMMs. Although many different in situ synthesis techniques exist to directly grow or deposit various types of MOF on solid surfaces [20–24], to our knowledge no work has been reported on the use of PIMs as supports for the in situ synthesis of MOFs on membrane surfaces, and only very few studies have utilized PIMs for the in situ synthesis of metal nanoparticles [25]. The growth of Zif-8 nanoparticles onto polymer inclusion membranes has the potential to convert them into high-quality membranes for different applications. The prepared PIM-co-MOF was studied by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), and Brunauer–Emmett–Teller (BET) surface area analysis. It is expected that the proposed method can serve as an alternative technique for fabricating MOFs and novel hybrid MOF-coated membranes with additional selectivity, extending the application of MOFs to areas such as filtration, sensing, and even ion and gas separation.
## 2. Experimental
### 2.1. Material
Cellulose acetate (CA; degree of acetylation, 2.87; molecular weight, ~78,000 g/mol), isophorone diisocyanate, Benzo18Crown6, and the ionic liquid 1-butyl-3-methyl-imidazolium chloride ([BMIM][Cl]) were bought from Sigma-Aldrich. Castor oil (iodine value 90, viscosity 950–1050 mPa·s at 20°C) was purchased from M/s SD Fine-Chem Limited, Mumbai, India. N,N-Dimethylacetamide (DMAc), zinc nitrate hexahydrate, and 2-methylimidazole were provided by Merck. All chemicals were used as received.
### 2.2. Characterization
Nitrogen adsorption-desorption isotherms were obtained using a Micromeritics ASAP-2020 instrument at 77 K to measure the pore textural properties of the synthesized Zif-8 samples. X-ray diffraction (XRD) measurements were obtained using a STOE STADI MP (Germany) diffractometer (40 kV, 30 mA) with Cu Kα (λ = 1.54184 Å) source radiation at room temperature; the scanning rate was 2°/min in the range of 5° to 70° with a count time of 0.1 s/step. Fourier transform infrared (FTIR) spectra were recorded using a Bruker Vector 22 (Germany) spectrophotometer equipped with a KBr beam splitter, over a wavenumber range of 4000 to 500 cm−1 at a resolution of 4 or 6 cm−1. Scanning electron microscopy (ZEISS EVO18) was used to study the quality and morphology of the produced PIM and PIM-co-MOF. The samples were frozen in liquid nitrogen and mechanically broken into pieces. The fractured samples were then coated with a thin layer of gold using a gold sputterer (model SCD005, Bal-Tec, Hannover, Germany) in a vacuum before micrographs were taken.
### 2.3. Preparation and Modification of PIM
In the present work, we modified the novel PIM that was fabricated and characterized in detail in the previous work [4]. In brief, the PIM was prepared by casting a solution of GPO, itself synthesized by a reaction between epoxidized castor oil and cellulose acetate and thereafter cross-linked by isophorone diisocyanate, together with crown ether and ionic liquid in dimethylacetamide (DMAc), onto a clean glass plate using a casting knife (doctor blade). The dried PIM was mounted in a two-compartment glass cell for the feed and stripping phases, each with a volume of 200 cm³. A PIM of 5 cm diameter was sandwiched between the circular openings of the two compartments. 3-Aminopropyl trimethoxysilane was dissolved in the feed chamber at a concentration of 5 wt%, and the pH was adjusted to 4.5 by adding a 0.1 M hydrochloric acid solution. The stripping aqueous phase consisted of distilled deionized water. The cells were gently stirred for 24 h at 25°C (50 rpm). Finally, the membranes were washed three times with water.
### 2.4. ZIF-8 Deposition on PIM Surface
In this work, an in situ PIM-assisted growth method was developed for the synthesis of the metal-organic framework (MOF) Zif-8 on the amino-functionalized membrane surface. In this procedure, the amino-functionalized PIM was clamped between the two compartments of the diffusion cell described above. The feed phase was homogenized by stirring at 500 rpm with a magnetic bar. The feed compartment was filled with an aqueous solution containing 5 mg of zinc nitrate hexahydrate, and the receiving compartment was filled with an aqueous solution containing 20 mg of 2-methylimidazole. The extraction of Zn(II) into the PIM was carried out for a predetermined period of time at 25°C. The resulting PIM-supported Zif-8 membrane is called PIM-co-MOF from here on.
### 2.5. Measurement of Gas Permeability
The gas permeation properties of both the neat PIM and PIM-co-MOF membranes were measured in a constant volume/variable pressure gas permeability apparatus (Figure 1) at controlled temperature and pressure. Single gas permeability values of N2, CO2, and CH4 at 298 K and 323 K and at pressures of 1, 3, and 5 bar were obtained.

Figure 1
Schematic of the experimental setup for the gas separation test.

The gas permeability was calculated using the following equation [9]:
$$P=\frac{273.15\times 10^{10}}{760\,A\,T}\times\frac{14.7\,V\,L}{76\,P_{0}}\times\frac{dp}{dt}\tag{1}$$

where $P$ is the gas permeability (barrer), $V$ is the downstream volume (cm³), $L$ is the membrane thickness (cm), $A$ is the effective area of the membrane (cm²), $T$ is the operating temperature (K), $P_0$ is the feed gas pressure (psi), and $dp/dt$ is the steady rate of pressure increase on the downstream side (cmHg/s). The ideal permselectivity between two different gases is defined as the ratio of the single gas permeabilities [26]:
$$\alpha=\frac{P_{A}}{P_{B}}\tag{2}$$
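To make the unit bookkeeping in equations (1) and (2) concrete, here is a minimal Python sketch (our illustration, not the authors' code; the sample readings are hypothetical, with the membrane area taken from the 5 cm diameter quoted in Section 2.3):

```python
def permeability_barrer(dp_dt, V, L, A, T, P0):
    """Gas permeability from Eq. (1).

    dp_dt : steady downstream pressure rise (cmHg/s)
    V     : downstream volume (cm^3)
    L     : membrane thickness (cm)
    A     : effective membrane area (cm^2)
    T     : operating temperature (K)
    P0    : feed pressure (psi)
    Returns P in barrer.
    """
    return (273.15e10 / (760.0 * A * T)) * (14.7 * V * L / (76.0 * P0)) * dp_dt


def ideal_selectivity(P_A, P_B):
    """Ideal permselectivity from Eq. (2)."""
    return P_A / P_B


# Hypothetical readings for illustration only (A = pi * 2.5^2 for a 5 cm disc):
P_CO2 = permeability_barrer(dp_dt=2.0e-3, V=30.0, L=0.01, A=19.6, T=298.0, P0=14.7)
P_N2 = permeability_barrer(dp_dt=1.0e-4, V=30.0, L=0.01, A=19.6, T=298.0, P0=14.7)
print(f"P(CO2) = {P_CO2:.1f} barrer, alpha(CO2/N2) = {ideal_selectivity(P_CO2, P_N2):.1f}")
```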
## 3. Computational Methods and Model Systems
In this paper, the calculations were performed using density functional theory as implemented in the Quantum ESPRESSO package [27]. The exchange-correlation term was treated within the generalized gradient approximation (GGA) as parameterized by Perdew–Burke–Ernzerhof (PBE) [28]. The energy cut-off for the expansion of the wave functions was set to 408 eV. The Brillouin zone integration was performed over a 4×4×4 Monkhorst–Pack mesh [29]. The lattice constant of the Zif-8 material was optimized until the total energy converged to within 10−3 eV. The unit cell of Zif-8, with a cubic structure and space group I-43m, is shown in Figure 2.

Figure 2
Top views of the atomic structure of the Zif-8 compound. The green, blue, pink, and black spheres represent C, N, H, and Zn atoms, respectively.

In this figure, each central zinc (Zn) atom is coordinated by four 2-methylimidazolate ligands through one of their two N atoms. To simulate the Zif-8 compound, we used a unit cell with 156 atoms and the formula Zn12N48C48H48. The atomic positions of Zn in this structure are presented in Table 1.
Table 1
Atomic positions of Zn in the Zif-8 compound.

| Atom | X (Å) | Y (Å) | Z (Å) |
|------|-------|-------|-------|
| Zn1  | 12.74 | 0     | 8.49  |
| Zn2  | 8.49  | 4.24  | 0     |
| Zn3  | 8.49  | 12.74 | 0     |
| Zn4  | 4.24  | 0     | 8.49  |
| Zn5  | 0     | 8.49  | 4.24  |
| Zn6  | 0     | 12.74 | 8.49  |
| Zn7  | 12.74 | 8.49  | 0     |
| Zn8  | 0     | 4.24  | 8.49  |
| Zn9  | 4.24  | 8.49  | 0     |
| Zn10 | 0     | 8.49  | 12.74 |
| Zn11 | 8.49  | 0     | 4.24  |
| Zn12 | 8.49  | 0     | 12.74 |

The optimized structural parameters of the Zif-8 compound are shown in Table 2.

Table 2
Calculated bond lengths, bond angles, and lattice constants of Zif-8.
| Parameter | Value | Parameter | Value |
|---|---|---|---|
| Zn–N (Å) | 1.99 | N–Zn–N (°) | 109.44 |
| N–C (Å) | 1.37 | Zn–N–C (°) | 126.52 |
| C–C (Å) | 1.32 | N–C–H (°) | 125.56 |
| C–H (Å) | 0.93 | H–C–C (°) | 125.55 |
| a = b = c (Å) | 16.99 | α = β = γ (°) | 90 |
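For readers wishing to reproduce the computational setup described above, the sketch below shows one way the stated parameters (PBE-GGA, a wave-function cut-off of 408 eV ≈ 30 Ry, a 4×4×4 Monkhorst–Pack mesh, and a cubic cell with a = 16.99 Å) could be assembled into a Quantum ESPRESSO run through the ASE interface. This is only an assumed scaffold: the pseudopotential file name is a placeholder, only the twelve Zn sites from Table 1 are listed (the full Zn12N48C48H48 cell would be read from a structure file), and the original authors' exact input is not given in the paper.

```python
from ase import Atoms
from ase.calculators.espresso import Espresso

# Twelve Zn sites of the cubic Zif-8 cell (Table 1); the N, C, and H atoms
# of the full 156-atom Zn12N48C48H48 cell would be added from a CIF file.
zn_positions = [
    (12.74, 0.00, 8.49), (8.49, 4.24, 0.00), (8.49, 12.74, 0.00),
    (4.24, 0.00, 8.49), (0.00, 8.49, 4.24), (0.00, 12.74, 8.49),
    (12.74, 8.49, 0.00), (0.00, 4.24, 8.49), (4.24, 8.49, 0.00),
    (0.00, 8.49, 12.74), (8.49, 0.00, 4.24), (8.49, 0.00, 12.74),
]
cell = Atoms("Zn12", positions=zn_positions, cell=[16.99] * 3, pbc=True)

calc = Espresso(
    pseudopotentials={"Zn": "Zn.pbe.UPF"},  # placeholder pseudopotential name
    input_data={
        "control": {"calculation": "scf"},
        "system": {"ecutwfc": 30.0},        # QE expects Ry; 408 eV ~ 30 Ry
        "electrons": {"conv_thr": 1.0e-6},
    },
    kpts=(4, 4, 4),                         # Monkhorst-Pack mesh
)
cell.calc = calc
# energy = cell.get_potential_energy()     # runs pw.x if it is installed
```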
## 4. Theory Results and Discussion
### 4.1. Electronic Band Structure and Density of States
The electronic band structure and partial density of states (PDOS) of the Zif-8 compound are shown in Figures 3(b) and 3(c), respectively.

Figure 3
The calculated electronic band structure and density of states (DOS) of the Zif-8 compound.

Figure 3(a) also shows the band structure in the range of -0.6 to 0.4 eV for clarity. For readability, a color legend is used for the DOS: the total, p, d, and s DOS are drawn in black, green, blue, and red, respectively. To characterize metallic systems, the Fermi energy (EF) is taken here as the highest occupied energy level of the system. The high-symmetry points of the cubic Brillouin zone (BZ) in which the Zif-8 compound crystallizes are plotted in Figure 4. The coordinates of the k points within the cubic BZ are as follows: Γ (0 0 0), X (0.5 0 0), M (0.5 0.5 0), and R (0.5 0.5 0.5) [30].

Figure 4
The high-symmetry k-path in the cubic first Brillouin zone.

The set of atomic electron configurations included in the present band structure computation of Zif-8 is listed in Table 3.

Table 3
Atomic orbitals as employed in the present band structure computation of Zif-8.
| Element | Core electrons | Semi-core electrons | Valence electrons |
|---|---|---|---|
| Zn | 1s² 2s² 2p⁶ | 3s² 3p⁶ | 4s² 3d¹⁰ |
| N | 1s² | — | 2s² 2p³ |
| C | 1s² | — | 2s² 2p² |
| H | — | — | 1s¹ |

According to Figures 3(a) and 3(b), the band structure crosses the Fermi level (EF) along various symmetry directions, suggesting a metallic-like character for the Zif-8 compound. The electronic bands are mostly found in the energy range of -1.3 to -3 eV, owing to the dispersive nature of the p orbitals of the Zn and N atoms. The low-lying allowed energy states in the valence region of Figure 3(c) originate mainly from s orbitals, with only a small contribution. Owing to the way the atomic orbitals are occupied by electrons, the highest density of states is associated with the p orbitals of the Zn, N, and C atoms. When the semi-core electrons are omitted, the d bands shift up while the sp bands shift down in a nonhomogeneous way. This leads to a reduction of the d-sp interband gap in the ranges of -1 to 0.5, -4 to -3, and -5.4 to -4.7 eV [31]. This result is quite clear in Figure 3(b).
### 4.2. Adsorption Properties
The formation energy, or enthalpy of formation, indicates the thermodynamic stability of a material [32]. Under ideal conditions (zero temperature and zero pressure), we calculated, for the first time, the formation energy of the M+Zif-8 (M = N2, CH4, and CO2) compounds. The locations of the doped M molecules used to calculate the formation energy of M+Zif-8 are shown in Figure 5 (near the red Zn polyhedra).

Figure 5
The location of M molecules in the Zif-8 compound. (a) Top view of the unit cell. (b) Crystal shape view of the unit cell. (c) Crystal shape of a 2×2×2 supercell.
(a) (b) (c)

For each case, four M molecules were placed in the unit cell. In the experimental section, the adsorption of M molecules by Zif-8 is investigated, where we show that the M molecules are adsorbed by the zinc centers; they are therefore located near the Zn atoms in the simulations. The formation energy (EF) of the compounds is calculated as
$$E_{F}=\frac{E_{xM+\mathrm{Zif\text{-}8}}-E_{\mathrm{Zif\text{-}8}}-x\,E_{M}}{x}\tag{3}$$

where $x$ is the number of doped molecules (M = N2, CH4, and CO2), and $E_{xM+\mathrm{Zif\text{-}8}}$, $E_{\mathrm{Zif\text{-}8}}$, and $E_{M}$ are the total energies of M+Zif-8, Zif-8, and the M molecule, respectively. This energy reflects the stability of the compounds. The total and formation energies of the compounds are listed in Table 4.

Table 4
Total and formation energies of M+Zif-8 compounds.
| Material | Zif-8+N2 | Zif-8+CH4 | Zif-8+CO2 |
|---|---|---|---|
| Total energy (eV) | -41862.60 | -40571.92 | -43793.53 |
| Formation energy (eV) | -12.39 | -9.05 | -14.60 |

According to this table, comparing the formation energies of the compounds, the minimum formation energy, -14.60 eV, is obtained for the Zif-8+CO2 case. This indicates that the tendency of the Zif-8 compound to adsorb CO2 molecules is greater than for the other molecules.
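Equation (3) is straightforward to evaluate once the three total energies are known. The sketch below (ours, for illustration) reproduces the Zif-8+CO2 entry of Table 4; since the paper tabulates only the doped-cell totals and the resulting formation energies, the host and molecule energies used here are placeholders chosen solely so that the printed value matches the reported -14.60 eV:

```python
def formation_energy(E_doped, E_host, E_molecule, x):
    """Formation energy per dopant molecule, Eq. (3).

    E_doped    : total energy of the xM+Zif-8 cell (eV)
    E_host     : total energy of the pristine Zif-8 cell (eV)
    E_molecule : total energy of one isolated M molecule (eV)
    x          : number of M molecules in the cell (x = 4 here)
    """
    return (E_doped - E_host - x * E_molecule) / x


# E(Zif-8+CO2) = -43793.53 eV and x = 4 are from Table 4 and the text;
# the host and CO2 energies below are hypothetical placeholders.
E_host, E_CO2 = -41600.0, -533.7825
print(formation_energy(-43793.53, E_host, E_CO2, x=4))  # -> -14.60
```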
### 4.3. Optical Properties
In this work, we used the generalized gradient approximation (GGA) within the framework of DFT to calculate the optical properties of the Zif-8 and M+Zif-8 compounds. These properties include the real and imaginary parts of the dielectric function, the extinction coefficient, and the adsorption coefficient. The optical properties are related to the frequency-dependent complex dielectric function ε(ω) by the following relation [33]:
$$\varepsilon(\omega)=\varepsilon_{1}(\omega)+i\,\varepsilon_{2}(\omega)\tag{4}$$

where $\varepsilon_{1}(\omega)$ and $\varepsilon_{2}(\omega)$ are the real and imaginary parts of the dielectric function, respectively. The real part is related to the electronic polarizability of the compounds, and the imaginary part is related to their electronic absorption. The imaginary part of the dielectric function can be defined as [34]:
$$\varepsilon_{2}(\omega)=\frac{2\pi e^{2}}{\Omega\,\varepsilon_{0}}\sum_{k,v,c}\left|\left\langle\Psi_{k}^{c}\right|\mathbf{u}\cdot\mathbf{r}\left|\Psi_{k}^{v}\right\rangle\right|^{2}\delta\!\left(E_{k}^{c}-E_{k}^{v}-E\right)\tag{5}$$

where $\Omega$ is the volume of the unit cell, $e$ is the electronic charge, $\varepsilon_{0}$ is the permittivity of free space, $\mathbf{u}$ defines the polarization of the incident electric field, $\mathbf{r}$ and $\mathbf{k}$ are vectors in the real and reciprocal lattice, respectively, and $\Psi_{k}^{v}$ and $\Psi_{k}^{c}$ are the valence band and conduction band wave functions at point $k$, corresponding to the energies $E_{k}^{v}$ and $E_{k}^{c}$, respectively. The $\varepsilon_{1}(\omega)$ and $\varepsilon_{2}(\omega)$ parameters are related to each other through the well-known Kramers–Kronig transformations [35–37], which are used here to obtain the real part of the dielectric function from the imaginary part. From $\varepsilon_{1}(\omega)$ and $\varepsilon_{2}(\omega)$, the other optical constants, such as the extinction coefficient $k(\omega)$ and the adsorption coefficient $A(\omega)$, can be determined as follows [33]:
$$k(\omega)=\sqrt{\frac{\sqrt{\varepsilon_{1}(\omega)^{2}+\varepsilon_{2}(\omega)^{2}}-\varepsilon_{1}(\omega)}{2}}\tag{6}$$

$$A(\omega)=\frac{2\,k(\omega)\,E}{\hbar c}\tag{7}$$

The calculated optical parameters were evaluated in the energy range of 0 to 5 eV (infrared, visible, and ultraviolet ranges). The plots of $\varepsilon_{1}(\omega)$ and $\varepsilon_{2}(\omega)$ for the Zif-8 and M+Zif-8 compounds are shown in Figure 6.

Figure 6
Real ε1(ω) and imaginary ε2(ω) parts of the dielectric function.

The real part ε1(ω) of the dielectric function shows its lowest peak intensity at 0.44, 0.41, 0.45, and 0.67 eV for Zif-8+CH4, Zif-8+CO2, Zif-8, and Zif-8+N2, respectively. The imaginary part ε2(ω) shows energy peaks at about 0.13 eV for the Zif-8+CO2 and Zif-8+N2 compounds and at 0.14 eV for the Zif-8 and Zif-8+CH4 compounds. These peaks correspond to the electronic transition from Zn 3d states to N and C 2p states between the valence and conduction bands. The adsorption coefficient A(ω) of the compounds is presented in Figure 7.

Figure 7
The adsorption coefficient of the Zif-8 and M+Zif-8 compounds.

The adsorption sets in within the infrared region, with an adsorption edge from 0 to 1.5 eV. The prominent peak intensities of the adsorption coefficient for the Zif-8+CO2, Zif-8+N2, Zif-8+CH4, and Zif-8 compounds are 18.43×10⁴, 14.77×10⁴, 13.55×10⁴, and 13.30×10⁴ cm⁻¹, corresponding to energy peaks at 0.79, 0.61, 0.57, and 0.70 eV, respectively. Therefore, Zif-8+CO2 is preferred for optical infrared applications owing to its higher adsorption coefficient [38–40].
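Equations (6) and (7) translate directly into a few lines of numpy. The sketch below is our illustration: the dielectric-function curves are synthetic (a single Lorentzian-shaped peak placed near the 0.13 eV feature discussed above), not the computed Zif-8 data.

```python
import numpy as np

HBAR_C_EV_CM = 1.9732698e-5  # hbar * c in eV.cm


def extinction(eps1, eps2):
    """Extinction coefficient k(w), Eq. (6)."""
    return np.sqrt((np.sqrt(eps1**2 + eps2**2) - eps1) / 2.0)


def adsorption_coeff(eps1, eps2, E):
    """Adsorption coefficient A(w) in cm^-1, Eq. (7); E in eV."""
    return 2.0 * extinction(eps1, eps2) * E / HBAR_C_EV_CM


# Synthetic eps1/eps2 on a 0-5 eV grid, for illustration only.
E = np.linspace(0.01, 5.0, 500)
eps2 = 8.0 / (1.0 + ((E - 0.13) / 0.05) ** 2)  # Lorentzian peak near 0.13 eV
eps1 = 1.0 + 4.0 / (1.0 + (E / 0.5) ** 2)
A = adsorption_coeff(eps1, eps2, E)
print(f"max A = {A.max():.3e} cm^-1 at E = {E[np.argmax(A)]:.2f} eV")
```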
## 5. Experimental Results and Discussion
### 5.1. Analysis of PIM-co-MOF
Since the GPO molecules from which the membrane is made have hydroxyl groups on their branches, the membrane surface is full of hydroxyl functional groups that can be used for subsequent reactions. The hydroxyl groups on the membrane surface can react with APTS molecules through a very slow alcohol exchange/hydrolysis reaction. Therefore, the side of the membrane that is exposed to the APTS solution is functionalized with aminosilane groups. This is confirmed by comparing the FTIR spectrum of the aminosilane-functionalized PIM with that of the bare PIM, as shown in Figure 8.

Figure 8
FTIR of the aminosilane-modified polymer inclusion membrane.

The characteristic FTIR spectra of the samples were scanned in the wavenumber range of 4000 to 500 cm−1 using KBr pellets. Compared with the unmodified membrane, the aminosilane-coated PIM possesses an adsorption band at 1091.5 cm−1 due to the stretching vibration of the C-N bond, a band at 1051.0 cm−1 due to the stretching vibration of the Si-O bond, a band at 885.2 cm−1 due to the bending vibration of the NH group, and a band near 3400 cm−1 due to the NH2 group. All of these reveal that the PIM was successfully functionalized with aminosilane molecules; thus, this approach can be considered an alternative route for the aminosilane functionalization of membranes.

The next step of the experiment was to grow Zif-8 particles on the modified surface of the membrane, with the membrane clamped between the receiving (methylimidazole) and feed (zinc nitrate) compartments. Because of the presence of crown ether in the membrane and its hydrophobic character, methylimidazole molecules pass through much more rapidly than Zn(II) ions. Imidazole molecules gradually move from one side of the membrane to the other and react with preformed zinc clusters on the membrane surface. Therefore, MOF particles form on the side in contact with the zinc nitrate solution. The crystal morphology of the Zif-8 grown on the PIM was examined by FESEM. Figure 9 shows the SEM images of the PIM surface before and after MOF deposition. Under these conditions, the whole surface of the PIM was covered with continuous rod-like Zif-8 crystals after 1 h.

Figure 9
SEM images of the PIM surface before (a) and after (b) Zif-8 deposition.
(a) (b)

The aminosilane-functionalized PIM offers the amine functionality of the PIM building block as an anchoring site for Zif-8 and allows the MOF clusters to grow.

To better understand the role of the aminosilane, a control experiment was performed with a PIM of the same composition but without any aminosilane functionalization. In this case, the Zif-8 crystals grew in a discrete layer on the membrane and were removed after several washes with methanol/water (images not shown here). Moreover, the weight percentage of Zif-8 adhering to the surface of the aminosilane-modified PIM was much higher (23 wt%) than that for the bare PIM (only 4.3 wt%). This implies that the hydroxyl pendant groups on the GPO chains can also take part in MOF crystal formation, but NH2 groups facilitate efficient nucleation and contribute to Zif-8 crystal formation better than OH groups.

Figure 10 shows the cross-sectional SEM images of the PIM and PIM-co-MOF membranes.

Figure 10
Cross-section of the membranes: (a) neat PIM and (b) PIM-co-MOF.
(a) (b)

The cross-section of the PIM-co-MOF membrane indicates that the Zif-8 thin film consists of a 1 μm thick layer of closely intergrown nano-sized crystals that adhere tightly to the surface of the PIM support (Figure 10). There is no gap between the Zif-8 nanoparticle layer and the PIM support, once again indicating good interfacial adhesion between the PIM support and the Zif-8 particles. The X-ray diffraction pattern of the PIM-co-MOF membrane is shown in Figure 11.

Figure 11
XRD pattern of Zif-8 deposited on the PIM surface.

The peak at 8.84°, characteristic of simulated Zif-8, is clearly visible in the XRD pattern, confirming the presence of Zif-8 on the PIM. Nitrogen adsorption-desorption analysis and pore size distribution measurements were carried out to evaluate the pore sizes and surface area of the PIM-supported Zif-8 nanoparticles, as shown in Figure 12.

Figure 12
N2 adsorption-desorption isotherms of the Zif-8-supported PIM.

The nitrogen adsorption-desorption isotherms display typical reversible type I behavior with very little hysteresis. The rapid increase in adsorbed N2 at low pressures suggests the presence of a microporous structure. Furthermore, the hysteresis loop at high relative pressures indicates the existence of interparticle mesoporosity and macroporosity between the Zif-8 particles. The specific surface area and pore volume of the prepared PIM-supported Zif-8 were 1114.5 m² g⁻¹ and 0.83 cm³ g⁻¹, respectively. The pore size distribution (PSD) curve shows that the samples exhibit a dominant pore diameter of approximately 1.01 nm. These results largely agree with data in the literature; the minor differences are mainly due to the different synthesis conditions.
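As a quick consistency check on the XRD result, Bragg's law with the Cu Kα wavelength given in Section 2.2 converts the 8.84° peak into a lattice spacing (our back-of-the-envelope estimate, not a value reported in the paper):

```python
import math

lam = 1.54184      # Cu K-alpha wavelength (angstrom), Section 2.2
two_theta = 8.84   # peak position (degrees), Figure 11

# Bragg's law, first order: n * lam = 2 * d * sin(theta)
theta = math.radians(two_theta / 2.0)
d = lam / (2.0 * math.sin(theta))
print(f"d = {d:.2f} angstrom")  # ~10.0 angstrom
```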
### 5.2. Gas Transport Characteristics of the Membranes
The pure gas permeability and ideal selectivity measurements of the PIM-co-MOF and neat PIM are presented for two temperatures (25 and 50°C) in Figure 13.

Figure 13
The permeability of the PIM and PIM-co-MOF as a function of feed pressure.
(a) (b) (c)

As can be seen from the figures, as the temperature increases, the gas permeability in both types of membranes increases slightly for all gas species, in the order CO2 > CH4 > N2. However, the increase in permeation rate with increasing temperature is more pronounced for the less permeable species, i.e., N2 and then CH4. The literature shows that, with increasing temperature, the permeation rate increases in polymer inclusion membranes, while it decreases in Zif-8 membranes [8, 24]. Therefore, the increasing trends in the data show that the polymer inclusion membrane layer is the determining and limiting factor for the permeation rate. The next point to note is that the neat PIM exhibited significantly higher gas permeability than the PIM-co-MOF. This reveals that the continuous coverage of the membrane surface by Zif-8 caused an almost 50% permeability reduction compared to the neat PIM. Gas permeation through the PIM-co-MOF membrane can be explained by a combination of two mechanisms: solution-diffusion and adsorption-diffusion. Gas molecules first adsorb on the MOF layer, then undergo Knudsen diffusion through the MOF channels, next dissolve in and diffuse through the polymer inclusion membrane, and finally desorb at the other side of the membrane. With increasing temperature, the molecular motions in the PIM increase, and consequently the dissolution and diffusion of gas species increase, while gas adsorption on the surface of the Zif-8 layer decreases, especially for CO2; hence, the gas permeability decreases in MOF membranes. As a result of these two competing effects, the gas permeability of the PIM-co-MOF does not increase as much as that of the PIM.

Moreover, it can be seen from the figures that the gas permeation rates increase slightly with increasing feed pressure. With increasing feed pressure, on the one hand, viscous flow and Knudsen diffusion increase considerably in the MOF layer; on the other hand, the membrane free volume decreases, exerting an opposite effect on gas permeation through the PIM-co-MOF membrane. Therefore, no significant change in permeability is observed (especially for N2, whose permeation is more influenced by membrane compaction). Overall, N2 and CH4 pass through the membrane with difficulty via a solution-diffusion mechanism, while CO2 transport follows an adsorption-controlled mechanism [8]. Figure 14 compares the effect of feed pressure on the CO2/CH4 and CO2/N2 permselectivity of both types of membranes at two temperatures, 25 and 50°C.

Figure 14
Permselectivity of the PIM and PIM-co-MOF at two temperatures and different pressures.

A comparison of the permselectivity data indicates a slight increase in both the CO2/CH4 and CO2/N2 ideal selectivities with increasing feed pressure, which can be attributed to a relatively larger increase in CO2 permeability than in N2 and CH4 permeability as the pressure increases. Although the same trend of selectivity with temperature is observed for both types of membranes, the ideal permselectivity values measured for the PIM-co-MOF membrane are higher than those of the neat PIM.

Although the CO2 permeability is reduced by an average of 21% by the crystal growth of Zif-8 nanoparticles on the surface of the polymer inclusion membrane, the CO2/N2 and CO2/CH4 selectivities of the membranes, which follow the adsorption-diffusion mechanism, increase significantly. As can be seen, the CO2/N2 and CO2/CH4 selectivity at 50°C increases, which can be attributed to the carrier-mediated diffusion mechanism for CO2 transport through PIMs, similar to that reported by Kebiche-Senhadji et al. [15]. The crown ether, acting as the carrier, can react reversibly with CO2 to form a complex; consequently, the total CO2 flux originates from both the facilitated transport of the carrier–CO2 complex and the simple diffusion of CO2.
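The temperature dependence of permeation described in this section is commonly modeled with a van 't Hoff–Arrhenius expression, $P = P_{\mathrm{ref}}\exp[-(E_p/R)(1/T - 1/T_{\mathrm{ref}})]$. The sketch below is illustrative only: the paper does not report activation energies, so the $E_p$ values are hypothetical, chosen merely to show why a larger $E_p$ for the less permeable gases gives them the more pronounced gain between 25 and 50°C:

```python
import math

R = 8.314  # gas constant, J/(mol K)


def permeability_at(P_ref, Ep, T, T_ref=298.0):
    """Arrhenius-type permeability relative to a reference temperature.

    P_ref : permeability at T_ref (barrer)
    Ep    : apparent activation energy of permeation (J/mol), hypothetical
    """
    return P_ref * math.exp(-(Ep / R) * (1.0 / T - 1.0 / T_ref))


# Hypothetical reference permeabilities and activation energies:
for gas, P_ref, Ep in [("CO2", 800.0, 5e3), ("CH4", 120.0, 12e3), ("N2", 40.0, 15e3)]:
    print(gas, round(permeability_at(P_ref, Ep, T=323.0), 1), "barrer at 323 K")
```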
## 6. Conclusions
Zif-8 was successfully anchored onto the surface of an amino-functionalized PIM configured into a flat-sheet membrane module during the process of ligand extraction. The different characterization techniques confirmed the growth of a thick and continuous layer of Zif-8 on the aminosilane-modified PIM. According to the observations, the MOF crystalline layer could not grow well on the unmodified PIM surface. The gas transport behavior of both the neat PIM and the PIM-co-MOF was measured using pure CO2, N2, and CH4 gases at different temperatures and pressures. The growth of the Zif-8 nanoparticle layer on the modified PIM leads to a dramatic increase in selectivity and a slight decrease in gas permeability. Both membranes show the same permeation rate order, CO2 > CH4 > N2. The permeability of both types of membranes increases with increasing temperature for all three gases; this increase is much more significant for the neat PIM than for the PIM-co-MOF. As the temperature increases, the CO2/N2 and CO2/CH4 selectivities decrease in both cases, slightly more markedly for the PIM-co-MOF. The results revealed no significant effect of feed pressure on permeability. The DFT study shows that the Zif-8 compound has metallic behavior and that its tendency to adsorb CO2 molecules is greater than for the other gases. The optical DFT calculations also confirmed that Zif-8+CO2 is suited to optical infrared applications owing to its higher adsorption coefficient. These results demonstrate the agreement between experiment and theory. This work provides guidelines for the rational design of MOF-covered polymer inclusion membranes for gas separation applications or as sensing materials for the detection of combustion gases.
---
*Source: 1018347-2020-06-05.xml* | 1018347-2020-06-05_1018347-2020-06-05.md | 54,266 | A Combined Experimental and First-Principle Calculation (DFT Study) for In Situ Polymer Inclusion Membrane-Assisted Growth of Metal-Organic Frameworks (MOFs) | Reza Darvishi; Esmaeil Pakizeh | International Journal of Polymer Science
(2020) | Chemistry and Chemical Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2020/1018347 | 1018347-2020-06-05.xml | ---
## Abstract
A simple yet effective strategy was developed to prepare a metal-organic framework- (MOF-) based asymmetric membrane by depositing the Zeolitic imidazolate framework-8 (Zif-8) layer on the aminosilane-functionalized surface of a polymer inclusion membrane via an in situ growth process. During the extraction of the ligand molecules from the source to stripping compartment, metal ions react with ligand, and layers of Zif-8 were gradually grown onto aminosilane-modified polymer inclusion membrane (PIM). The properties of the surface-grown Zif-8 nanocrystalline layer were well characterized by powder X-ray diffraction, adsorption-desorption analysis, and scanning electron microscopy. The potential use of these Zif-8-supported PIM membranes for the separation of gases N2, CH4, and CO2 was evaluated at two temperatures (25 and 50°C) and pressures (1, 3, and 5 bar), by comparing the permeability and selectivity behavior of these membranes with neat PIM. The gas permeability of both pure PIM (PCO2=799.2barrer) and PIM-co-MOF (PCO2=675.8barrer) increases with the temperature for all three gases, and the permeation rate order was CO2 > CH4 > N2. The results showed that the presence of a layer of Zif-8 on the surface of the polymer inclusion membranes can get a slightly reduced permeability (~21%) but an enhanced selectivity of up to ~70% for CO2/CH4 and ~34% for CO2/N2. In the case of both membrane types, the ideal permselectivity decreases with the temperature, but this decrease was slightly more pronounced for the case of PIM-co-MOF. To understand more details about the electronic structure and optical and adsorption properties of Zif-8 and M+Zif-8 (M=N2,CH4,andCO2) compounds, the periodic plane-wave density functional theory (DFT) calculations were used. The electronic band structures and density of states for pure Zif-8 showed that this compound is metallic. Also, using DFT, the formation energy of M+Zif-8 compounds was calculated, and we showed that the CO2+Zif-8 composition is more stable than other compounds. This result suggests that the tendency of the Zif-8 compound to absorb the CO2 molecule is higher than that of other molecules. Confirming these results, DFT optical calculations showed that the affinity of the CO2+Zif-8 composition to absorb infrared light was greater than that of the other compounds.
---
## Body
## 1. Introduction
Membrane technology has been widely applied across various industries, such as medication (blood fractionation), purification and desalination of water, and gas and mixture separation [1–7]. Membrane gas separation has the potential for applications such as oxygen enrichment, nitrogen generation, hydrogen recovery, and carbon dioxide removal [7, 8]. However, the growth of the gas separation membranes market in recent years is attributed to the increasing demand for carbon dioxide removal applications [8–11]. The development of a potential polymer is key to any further research in the gas separation membrane [7]. Over the past few decades, many polymers have been studied for gas membrane applications, but only relatively few polymers have been established as common gas separation membranes [7, 9, 12]. Polymers for gas separation processes have to meet several requirements simultaneously, such as rapid mass transfer rate and high selectivity towards a specific gas, the ability to easily form desired membrane configurations, and resistance to swelling induced plasticization. The aim of the development of new material is to combine high permeability with high permselectivity [8–11, 13]. Polymer inclusion membranes (PIMs) and metal-organic frameworks (MOFs) have been investigated for membrane applications which are of potential interest in gas recycling and recovery applications [13, 14]. MOFs have been introduced as novel fillers for incorporation in many different polymer matrices to form composite or mixed-matrix membranes (MMMs) for achieving good permeability and high selectivity [10, 15]. Rodenas et al. [16] studied the NH2-MIL-53(Al)-filled polyimide-based MMMs and reported that the stability, selectivity, and permeability for CO2/CH4 separation are more enhanced compared with unfilled membranes. In principle, highly permeable MOFs have drawbacks in terms of brittleness and lack of flexibility, hindering their fabrication into continuous sheets and thus limiting their usage for further practical application. The drawbacks can be overcame by incorporating them into the polymer matrix. However, most of the polymers are incompatible with the MOF particles and are difficult to achieve uniform dispersion of the MOF in the MMMs [4, 10, 15–18]. The lack of strong interface between the MOF and polymer can cause the formation of agglomeration of the particles and nonselective interfacial voids. On the other side, high filler loadings can result in a brittle and rigid behavior of MMMs and a reduction of mechanical properties. Besides these issues, the polymer chains can penetrate the pores of MOF fillers, partially blocking the pore entrance and thus reducing gas permeability of the membranes [15, 17, 19].Polymer inclusion membranes (PIMs), a dense carrier-mediated transport membrane, are widely used for selectively recovering a target solute from a complex mixture. They are a type of self-supported liquid membranes in which extraction and stripping can be carried out in one operation with high selectivity for ion transport [3, 4]. These membranes are easy to fabricate and have outstanding mechanical properties. PIMs consist of a polymer, a plasticizer, and a carrier molecule facilitating the transport of both organic and inorganic species. The polymer support provides mechanical strength, the plasticizer improves flexibility, and the liquid phase facilitates the mobility of the carrier molecule. 
The carrier molecule acts as a guest-specific host, which can bound to target species by noncovalent intermolecular interactions, such as van der Waals, hydrophobic, or hydrogen bonds, thereby providing selective membrane permeability for target species [3]. Kebiche-Senhadji et al. [15] studied the pure gas permeation behavior of a CTA-based PIM containing an acidic carrier and found that gas permeation and CO2/H2 permselectivity increases during the incorporation of the plasticizer and acidic carrier into the CTA.This paper proposes a new approach for developing an asymmetric composite membrane, by growing Zeolitic imidazolate framework (Zif-8) particles on the surface of a novel polymer inclusion membranes, which were shown in a previous work to be efficient in facilitating transport of calcium cations [4]. For enhancing the adhesion between PIM support and Zif-8 particles, the surface modification of the solid substrate was performed through the reaction between an aminosilane compound and free hydroxyl groups on the polymer backbone [4]. A comparative study of gas transport through two types of membranes (pure PIM and Zif-8-coated PIM (coded as PIM-co-MOF)) was performed to evaluate more precisely the effect of the deposited Zif-8 crystal layer on membrane performance. The electronic band structures and density of states for the Zif-8 layer were also calculated using first-principles based on the density functional theory (DFT). Also, the optical and adsorption properties of Zif-8 and M+Zif-8 (M=N2,CH4,andCO2) were calculated with DFT, and their results are in agreement with the experimental data. The observations showed that the new class of the PIM-co-MOF membrane is more flexible compared to high-MOF-content MMMs. Although there are many different in situ synthesis techniques to directly grow/deposit the different types of MOF of solid surfaces [20–24], to our knowledge, there is no work reported in the literature concerning the usage of PIMs as support for in situ synthesis of MOFs on the membrane surfaces, and there are only very few studies that utilized PIMs for in situ synthesis of metal nanoparticles applications [25]. The growth of Zif-8 nanoparticles onto polymer inclusion membranes has the potential to convert them to the high-quality membrane for different applications. The prepared PIM-co-MOF was studied utilizing X-ray diffraction (XRD), Fourier transforms infrared spectroscopy (FTIR), scanning electron microscopy (SEM), and Brunauer–Emmett–Teller (BET) surface area. It will be expected that the proposed method can be considered as an alternative technique for MOF and novel hybrid MOF-coated membranes fabrication with additional selectivity through which the application of MOFs could be extended in the area such as filtration, sensors, and even ion and gas separation.
## 2. Experimental
### 2.1. Material
Cellulose acetate (CA, degree of acetylation: 2.87 and molecular weight: ~78,000 g/mole), isophorone diisocyanate, Benzo18Crown6, and ionic liquid, 1-butyl-3-methyl-imidazolium chloride ([BMIM] [Cl]), were bought from Sigma-Aldrich. Castor oil, (iodine value 90, viscosity of 950-1050 mPa·s at 20°C) was purchased from M/s SD Fine-Chem Limited, Mumbai, India. N,N-Dimethylacetamide (DMAC), zinc nitrate hexahydrate, and 2-methyl imidazole were provided by the Merck company. All chemicals were used as received.
### 2.2. Characterization
Nitrogen adsorption-desorption isotherms were obtained using a Micromeritics ASAP-2020 instrument at 77 K to measure pore textural properties of the synthesized Zi-8 samples. The X-ray diffraction (XRD) measurements were obtained using a STOE STADI MP (Germany) diffractometer (40 kV, 30 mA) with CuKa (λ=1.54184Å) source radiation at room temperature. The scanning rate was 2°/min in the range of 5° to 70° and a count time of 0.1 s/step. Fourier transform infrared (FTIR) spectra were recorded using a Bruker Vector 22, a German spectrophotometer equipped with a KBr beam splitter, with a wavenumber range of 4000 to 500 cm−1 at a resolution of 6 or 4 cm−1. Scanning electron microscopy (ZEISS EVO18) was used to study the quality and morphology of produced PIM and PIM-co-Zif-8. The samples were frozen in liquid nitrogen and then were mechanically broken into pieces. Then, the fractured samples were coated with a thin layer of gold using a gold sputterer (model SCD005, Bal-Tec, Hannover, Germany) in a vacuum, and then micrographs were prepared.
### 2.3. Preparation and Modification of PIM
In the present work, we modified the novel PIM, which was fabricated and characterized in the previous work in detail [4]. In brief, the PIM membrane was prepared by casting the solution of GPO which in turn synthesized by a reaction between epoxidized castor oil and cellulose acetate, thereafter, cross-linked by isophorone diisocyanate, crown ether, and ionic liquid in dimethylacetamide (DMAc) on an immaculate glass using a knife (Doctor Blade). The prepared dry PIM was configured into a two-compartment glass cell for the feed and stripping phases with 200cm3 volume each. A 5cm diameter PIM was sandwiched between the circular openings of the two compartments. 3-Aminopropyl trimethoxysilane was dissolved in the feed chamber at a concentration of 5°wt%, and pH was adjusted to 4.5 by adding some 0.1M hydrochloric acid solution. The stripping aqueous phase consisted of distilled deionized water. The cells were gently stirred for 24°h at 25°C (50°rpm). Finally, the membranes were washed three times with water.
### 2.4. ZIF-8 Deposition on PIM Surface
In this work, an in situ PIM-assisted growth method has been developed for the synthesis of metal-organic frameworks (MOFs) Zif-8 on the amino-functionalized membrane surface. In this procedure, the amino-functionalized PIM clamped between the two compartments of the above-described diffusion cell. The feed phase was homogenized by stirring at a speed of500rpm with a magnetic bar. The feed compartment was filled with an aqueous solution containing the 5mg of zinc nitrate hexahydrate, and the receiving compartment was filled with an aqueous solution containing 20mg of 2-methylimidazole. The extraction of Zn(II) into the PIM was carried out for a predetermined period of time at 25°C. This PIM-supported Zif-8 membrane is called PIM-co-MOF from here on.
### 2.5. Measurement of Gas Permeability
The gas permeation properties of both membranes of neat PIM and PIM-co-MOF were measured in a constant volume/variable pressure gas permeability apparatus (Figure1) at controlled temperature and pressure. Single gas permeability values of N2, CO2, and CH4at298K and 323K and pressures of 1, 3, and 5°bar were obtained.Figure 1
Schematic of the experimental setup for gas separation test.The gas permeability was calculated using the following equation [9]:
(1)P=273.15×1010760AT×14.7VLP0×76×dpdt,where P is the gas permeability (barrer), V is the downstream volume (cm3), L is the membrane thickness (cm), A is the effective area of the membrane (cm2), T is the operating temperature (K), Po is the feed gas pressure (psi), and dp/dt is the steady rate of pressure increase in the downstream side (cmHg/sec). The ideal permselectivity between two different gases is defined as the ratio of the single gas permeabilities and determined as [26]:
(2)α=PAPB.
## 2.1. Material
Cellulose acetate (CA, degree of acetylation: 2.87 and molecular weight: ~78,000 g/mole), isophorone diisocyanate, Benzo18Crown6, and ionic liquid, 1-butyl-3-methyl-imidazolium chloride ([BMIM] [Cl]), were bought from Sigma-Aldrich. Castor oil, (iodine value 90, viscosity of 950-1050 mPa·s at 20°C) was purchased from M/s SD Fine-Chem Limited, Mumbai, India. N,N-Dimethylacetamide (DMAC), zinc nitrate hexahydrate, and 2-methyl imidazole were provided by the Merck company. All chemicals were used as received.
## 3. Computational Methods and Model Systems
In this paper, the calculations were performed using density functional theory as implemented in the Quantum ESPRESSO package [27]. The exchange-correlation term was treated within the generalized gradient approximation (GGA) as parametrized by Perdew–Burke–Ernzerhof (PBE) [28]. The energy cut-off for the expansion of the wave functions was set to 408 eV. The Brillouin zone integration was performed over a Monkhorst–Pack 4×4×4 mesh [29]. The lattice constant of the Zif-8 material was optimized until the total energy converged to at least 10−3 eV. The unit cell of Zif-8 used in the theoretical calculations, with cubic structure and space group I-43m, is shown in Figure 2.
Figure 2
Top views of the atomic structure of the Zif-8 compound. The green, blue, pink, and black spheres represent C, N, H, and Zn atoms, respectively. In this figure, each central zinc (Zn) atom is coordinated by four 2-methylimidazolate ligands, each through one of its two N atoms. To simulate the Zif-8 compound, we used a unit cell with 156 atoms and the formula Zn12N48C48H48. The atomic positions of Zn in this structure are presented in Table 1.
Table 1
Atomic positions of Zn in the Zif-8 compound.
| Atom | X (Å) | Y (Å) | Z (Å) |
| --- | --- | --- | --- |
| Zn1 | 12.74 | 0 | 8.49 |
| Zn2 | 8.49 | 4.24 | 0 |
| Zn3 | 8.49 | 12.74 | 0 |
| Zn4 | 4.24 | 0 | 8.49 |
| Zn5 | 0 | 8.49 | 4.24 |
| Zn6 | 0 | 12.74 | 8.49 |
| Zn7 | 12.74 | 8.49 | 0 |
| Zn8 | 0 | 4.24 | 8.49 |
| Zn9 | 4.24 | 8.49 | 0 |
| Zn10 | 0 | 8.49 | 12.74 |
| Zn11 | 8.49 | 0 | 4.24 |
| Zn12 | 8.49 | 0 | 12.74 |

Also, the optimized structural parameters of the Zif-8 compound are shown in Table 2.
Table 2
Calculated bond lengths, bond angles, and lattice constants of Zif-8.
| Bond length | Value (Å) | Bond angle | Value (°) |
| --- | --- | --- | --- |
| Zn-N | 1.99 | N-Zn-N | 109.44 |
| N-C | 1.37 | Zn-N-C | 126.52 |
| C-C | 1.32 | N-C-H | 125.56 |
| C-H | 0.93 | H-C-C | 125.55 |
| a = b = c | 16.99 | α = β = γ | 90 |
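As a quick consistency check on Table 1, note that every Zn coordinate is a quarter-cell multiple of the optimized lattice constant a = 16.99 Å from Table 2. The short sketch below (illustrative only, not the authors' code) rebuilds the tabulated positions from fractional coordinates.

```python
import numpy as np

# The Zn coordinates in Table 1 are quarter-cell multiples of a = 16.99 Å.
a = 16.99
zn_frac = np.array([           # fractional coordinates read off Table 1
    [0.75, 0.00, 0.50], [0.50, 0.25, 0.00], [0.50, 0.75, 0.00],
    [0.25, 0.00, 0.50], [0.00, 0.50, 0.25], [0.00, 0.75, 0.50],
    [0.75, 0.50, 0.00], [0.00, 0.25, 0.50], [0.25, 0.50, 0.00],
    [0.00, 0.50, 0.75], [0.50, 0.00, 0.25], [0.50, 0.00, 0.75],
])
print(zn_frac * a)  # matches Table 1 to within ~0.01 Å (2-decimal rounding)
```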
## 4. Theory Results and Discussion
### 4.1. Electronic Band Structure and Density of States
The electronic band structure and partial density of states (PDOS) of the Zif-8 compound are shown in Figures 3(b) and 3(c), respectively.
Figure 3
The calculated electronic band structure and density of states (DOS) of the Zif-8 compound. Figure 3(a) also shows the band structure in the range of −0.6 to 0.4 eV for clarity. To aid readability, we used a color legend for the DOS: the total, p, d, and s DOS are drawn in black, green, blue, and red, respectively. To identify metallic systems, the Fermi energy (EF) was taken here as the highest occupied energy level of the system. The high-symmetry points of the cubic Brillouin zone (BZ) in which the Zif-8 compound crystallizes are plotted in Figure 4. The coordinates of the k points within the cubic BZ framework are as follows: Γ (0 0 0), X (0.5 0 0), M (0.5 0.5 0), and R (0.5 0.5 0.5) [30].
Figure 4
The high-symmetry k-path in the cubic first Brillouin zone. The atomic electron configurations included in the present band structure computation of Zif-8 are listed in Table 3.
Table 3
Atomic orbitals as employed in the present band structure computation of Zif-8.
| Element | Core electrons | Semi-core electrons | Valence electrons |
| --- | --- | --- | --- |
| Zn | 1s²2s²2p⁶ | 3s²3p⁶ | 4s²3d¹⁰ |
| N | 1s² | — | 2s²2p³ |
| C | 1s² | — | 2s²2p² |
| H | — | — | 1s¹ |

According to Figures 3(a) and 3(b), the band structure crosses the Fermi level (EF) along various symmetry directions, suggesting a metallic-like character of the Zif-8 compound. The electronic bands are mostly found in the energy range of −1.3 to −3 eV, owing to the dispersive nature of the p orbitals of the Zn and N atoms. The low allowed energy states in the valence region of Figure 3(c) originate mainly from s orbitals, with a small contribution. Owing to the way the atomic orbitals are occupied by electrons, the highest density of states is associated with the p orbitals of the Zn, N, and C atoms. When the semi-core electrons are omitted, the d bands shift up while the sp bands shift down in a nonhomogeneous way. This leads to a reduction of the d-sp interband gap in the ranges of −1 to 0.5, −4 to −3, and −5.4 to −4.7 eV [31]. This result is quite clear in Figure 3(b).
### 4.2. Adsorption Properties
The formation energy, or enthalpy of formation, indicates the thermodynamic stability of a material [32]. Under ideal conditions (zero temperature and zero pressure), we calculated, for the first time, the formation energy of the M+Zif-8 (M = N2, CH4, and CO2) compounds. The locations of the doped M molecules used to calculate the formation energy of M+Zif-8 are shown in Figure 5 (near the red Zn polyhedra).
Figure 5
The location of M molecules in the Zif-8 compound. (a) Top view of the unit cell. (b) Crystal-shape view of the unit cell. (c) Crystal shape of the 2×2×2 supercell.
For each case in the unit cell, four M molecules were used. In the experimental section, the adsorption of M molecules by Zif-8 is investigated, and there we show that the adsorption of M molecules occurs at the zinc sites; they are therefore placed near the Zn atoms in the simulation. The formation energy (EF) of the compounds is calculated as
(3) $E_F = \dfrac{E_{x\mathrm{M}+\mathrm{Zif8}} - E_{\mathrm{Zif8}} - xE_{\mathrm{M}}}{x}$,

where $x$ is the number of doped molecules (M = N2, CH4, and CO2), and $E_{x\mathrm{M}+\mathrm{Zif8}}$, $E_{\mathrm{Zif8}}$, and $E_{\mathrm{M}}$ are the total energies of M+Zif-8, Zif-8, and the M molecule, respectively. This energy reflects the stability of the compounds. The total and formation energies of the compounds are shown in Table 4.
Table 4
Total and formation energies of M+Zif-8 compounds.
| Materials | Zif-8+N2 | Zif-8+CH4 | Zif-8+CO2 |
| --- | --- | --- | --- |
| Total energy (eV) | −41862.60 | −40571.92 | −43793.53 |
| Formation energy (eV) | −12.39 | −9.05 | −14.60 |

According to this table, comparing the formation energies of the compounds, the minimum formation energy, −14.6 eV, is obtained for the Zif-8+CO2 case. This indicates that the tendency of the Zif-8 compound to adsorb CO2 molecules is greater than for the other gases.
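To make the bookkeeping of eq. (3) concrete, the sketch below evaluates it in Python. Here `E_host` and `E_mol` are hypothetical placeholders (the paper reports only the combined totals and the resulting formation energies), chosen only so that the output reproduces the reported −14.60 eV for Zif-8+CO2.

```python
# Hedged illustration of eq. (3); E_host and E_mol are NOT values from the
# paper, merely placeholders consistent with the reported E_F.
def formation_energy(E_doped, E_host, E_mol, x):
    """E_F = (E_{xM+Zif8} - E_{Zif8} - x * E_M) / x, eq. (3)."""
    return (E_doped - E_host - x * E_mol) / x

E_F = formation_energy(E_doped=-43793.53,  # total energy of Zif-8+CO2 (Table 4)
                       E_host=-41000.00,   # hypothetical E_{Zif8}
                       E_mol=-683.78,      # hypothetical E_{CO2}
                       x=4)                # four molecules per unit cell
print(E_F)  # -> approximately -14.60 eV
```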
### 4.3. Optical Properties
In this work, we used the generalized gradient approximation (GGA) in the framework of DFT to calculate the optical properties of the Zif-8 and M+Zif-8 compounds. These properties include the real and imaginary parts of the dielectric function, the extinction coefficient, and the absorption coefficient. The optical properties are related to the frequency-dependent complex dielectric function $\varepsilon(\omega)$ by the following relation [33]:
(4) $\varepsilon(\omega) = \varepsilon_1(\omega) + i\varepsilon_2(\omega)$,

where $\varepsilon_1(\omega)$ and $\varepsilon_2(\omega)$ are the real and imaginary parts of the dielectric function, respectively. The real part is related to the electronic polarizability of the compounds, and the imaginary part is related to their electronic absorption. The imaginary part of the dielectric function can be defined as [34]:
(5) $\varepsilon_2(\omega) = \dfrac{2\pi e^2}{\Omega \varepsilon_0} \sum_{k,\nu,c} \left|\left\langle \Psi_k^{c} \right| \mathbf{u}\cdot\mathbf{r} \left| \Psi_k^{\nu} \right\rangle\right|^2 \delta\!\left(E_k^{c} - E_k^{\nu} - E\right)$,

where $\Omega$ is the volume of the unit cell, $e$ is the electronic charge, $\varepsilon_0$ is the permittivity of free space, $\mathbf{u}$ defines the polarization of the incident electric field, $\mathbf{r}$ and $\mathbf{k}$ are vectors in the real and reciprocal lattices, respectively, and $\Psi_k^{\nu}$ and $\Psi_k^{c}$ are the valence-band and conduction-band wave functions at point $k$, corresponding to the energies $E_k^{\nu}$ and $E_k^{c}$, respectively. The parameters $\varepsilon_1(\omega)$ and $\varepsilon_2(\omega)$ are related to each other by the well-known Kramers–Kronig transformations [35–37]. These relations are used to obtain the real part of the dielectric function from the imaginary part. From $\varepsilon_1(\omega)$ and $\varepsilon_2(\omega)$, the other optical constants, such as the extinction coefficient $k(\omega)$ and the absorption coefficient $A(\omega)$, can be determined as follows [33]:
(6) $k(\omega) = \sqrt{\dfrac{\sqrt{\varepsilon_1(\omega)^2 + \varepsilon_2(\omega)^2} - \varepsilon_1(\omega)}{2}}$,

(7) $A(\omega) = \dfrac{2k(\omega)E}{\hbar c}$.

The calculated optical parameters were evaluated in the energy range of 0 to 5 eV (infrared, visible, and ultraviolet ranges). The plots of $\varepsilon_1(\omega)$ and $\varepsilon_2(\omega)$ of the Zif-8 and M+Zif-8 compounds are shown in Figure 6.
Figure 6
Real ($\varepsilon_1(\omega)$) and imaginary ($\varepsilon_2(\omega)$) parts of the dielectric function. The real part $\varepsilon_1(\omega)$ of the dielectric function shows the lowest peak intensity at 0.44, 0.41, 0.45, and 0.67 eV for Zif-8+CH4, Zif-8+CO2, Zif-8, and Zif-8+N2, respectively. The imaginary part $\varepsilon_2(\omega)$ shows energy peaks at about 0.13 eV for Zif-8+CO2 and Zif-8+N2 and at 0.14 eV for the Zif-8 and Zif-8+CH4 compounds. These peaks belong to the electronic transition from the Zn 3d states to the N and C 2p states in the conduction and valence bands. The absorption coefficient $A(\omega)$ of the compounds is presented in Figure 7.
Figure 7
The absorption coefficient of the Zif-8 and M+Zif-8 compounds. This parameter rises in the infrared light region, with an absorption edge from 0 to 1.5 eV. The prominent peak intensities of the absorption coefficient of the Zif-8+CO2, Zif-8+N2, Zif-8+CH4, and Zif-8 compounds are 18.43×10⁴, 14.77×10⁴, 13.55×10⁴, and 13.30×10⁴ cm⁻¹, corresponding to energy peaks at 0.79, 0.61, 0.57, and 0.70 eV, respectively. Therefore, Zif-8+CO2 is the candidate of choice for optical infrared applications owing to its higher absorption coefficient [38–40].
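As a numerical companion to eqs. (6) and (7), the following sketch computes $k(\omega)$ and $A(\omega)$ from tabulated $\varepsilon_1(\omega)$ and $\varepsilon_2(\omega)$ arrays. The dielectric data below are synthetic placeholders, not the computed spectra of Figure 6.

```python
import numpy as np

HBAR_C = 1.9732697e-5  # hbar * c in eV * cm, so A(w) comes out in cm^-1

def extinction(eps1, eps2):
    """Eq. (6): k(w) = sqrt((sqrt(eps1^2 + eps2^2) - eps1) / 2)."""
    return np.sqrt((np.sqrt(eps1**2 + eps2**2) - eps1) / 2.0)

def absorption(eps1, eps2, E):
    """Eq. (7): A(w) = 2 k(w) E / (hbar c), with photon energy E in eV."""
    return 2.0 * extinction(eps1, eps2) * E / HBAR_C

E = np.linspace(0.01, 5.0, 500)            # 0-5 eV range used in the text
eps1 = np.full_like(E, 2.0)                # synthetic placeholder data
eps2 = np.exp(-((E - 0.7) / 0.3) ** 2)     # synthetic peak near 0.7 eV
A = absorption(eps1, eps2, E)
print(E[np.argmax(A)])                     # energy of the absorption maximum
```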
## 5. Experimental Results and Discussion
### 5.1. Analysis of PIM-co-MOF
Since the GPO molecules from which the membrane is made have hydroxyl groups on their branches, the membrane surface is full of hydroxyl functional groups that can be used for subsequent reactions. The hydroxyl groups on the membrane surface can react with APTS molecules through a very slow alcohol exchange/hydrolysis reaction. Therefore, the side of the membrane that is exposed to the APTS solution becomes functionalized with aminosilane groups. This can also be confirmed by comparing the FTIR spectrum of the aminosilane-functionalized PIM with that of the bare PIM, as shown in Figure 8.
Figure 8
FTIR of the aminosilane-modified polymer inclusion membrane. The samples were used for the FTIR tests, and the characteristic spectra were scanned in the wavenumber range of 4000 to 500 cm⁻¹ using KBr pellets. Compared with the unmodified membrane, the aminosilane-coated PIM possesses an absorption band at 1091.5 cm⁻¹ due to the stretching vibration of the C-N bond, a band at 1051.0 cm⁻¹ due to the stretching vibration of the Si-O bond, a band at 885.2 cm⁻¹ due to the bending vibration of the NH group, and a band near 3400 cm⁻¹ due to the NH2 group. All of these reveal that the PIM was successfully functionalized with aminosilane molecules; thus, this approach can be considered an alternative route for the aminosilane functionalization of membranes. The next step of the experiment was to grow Zif-8 particles on the modified surface of the membrane, with the membrane clamped between the receiving (2-methylimidazole) and feed (zinc nitrate) compartments. Because of the presence of crown ether in the membrane and its hydrophobic character, 2-methylimidazole molecules pass through much more rapidly than zinc(II) ions. The imidazole molecules can gradually move from one side of the membrane to the other and react with preformed zinc clusters on the membrane surface. Therefore, MOF particles form on the side in contact with the zinc nitrate solution. The crystal morphology of the Zif-8 grown on the PIM was examined by FESEM. Figure 9 shows the SEM images of the PIM surface before and after MOF deposition. Under these conditions, the whole surface of the PIM was covered with continuous rod-like crystals of Zif-8 after 1 h.
Figure 9
SEM images of the PIM surface before (a) and after (b) Zif-8 deposition.
The aminosilane-functionalized PIM provides the amine functionality that anchors the Zif-8 building blocks and allows the MOF clusters to grow. To better understand the role of the aminosilane, a control experiment was performed with a PIM of the same composition but without any aminosilane functionalization. In this case, the Zif-8 crystals grew as a discrete layer on the membrane and were removed after several washes with methanol/water (images not shown here). Moreover, the weight percentage of Zif-8 adhering to the surface of the aminosilane-modified PIM was much higher (23 wt%) than that of the bare PIM (only 4.3 wt%). This implies that the pendant hydroxyl groups on the GPO chains can also take part in MOF crystal formation, but the NH2 groups facilitate more efficient nucleation and contribute more to Zif-8 crystal formation than the OH groups. Figure 10 shows the cross-sectional SEM images of the PIM and PIM-co-MOF membranes.
Figure 10
Cross-section of the membranes: (a) neat PIM and (b) PIM-co-MOF.
The cross-section of the PIM-co-MOF membrane indicates that the Zif-8 thin film is composed of a 1 μm-thick layer of closely intergrown nano-sized crystals that tightly adhere to the surface of the PIM support (Figure 10). There is no gap between the Zif-8 nanoparticle layer and the PIM support, once again indicating good interfacial adhesion between the PIM support and the Zif-8 particles. The X-ray diffraction pattern of the PIM-co-MOF membrane is shown in Figure 11.
Figure 11
XRD spectra of Zif-8 deposited on the PIM surface. The peak at 8.84°, characteristic of simulated Zif-8, is clearly visible in the XRD pattern, confirming the presence of Zif-8 on the PIM. Nitrogen adsorption-desorption analysis and pore size distribution measurements were carried out to evaluate the pore size and surface area of the PIM-supported Zif-8 nanoparticles, as shown in Figure 12.
Figure 12
N2 adsorption-desorption isotherms of the Zif-8-supported PIM. The nitrogen adsorption-desorption isotherms display typical reversible type I behavior with very little hysteresis. The rapid increase in adsorbed N2 at low pressures suggests the presence of a microporous structure. Furthermore, the hysteresis loop at high pressure ratios indicates the existence of interparticle mesoporosity and macroporosity between the Zif-8 particles. The specific surface area and pore volume of the prepared PIM-supported Zif-8 were 1114.5 m² g⁻¹ and 0.83 cm³ g⁻¹, respectively. The pore size distribution (PSD) curve shows that the samples exhibit a dominant pore diameter of approximately 1.01 nm. These results largely agree with data in the literature, and the small differences are mainly due to the different synthesis conditions.
### 5.2. Gas Transport Characteristics of the Membranes
The pure-gas permeability and ideal selectivity measurements of PIM-co-MOF and neat PIM are presented for two temperatures (25 and 50°C) in Figure 13.
Figure 13
The permeability of the PIM and PIM-co-MOF as a function of feed pressure.
As can be seen from the figures, with increasing temperature, the gas permeability of both types of membranes increases slightly for all gas species, in the order CO2 > CH4 > N2. However, the increase in the permeation rate with increasing temperature is more pronounced for the less permeable species, i.e., N2 and then CH4. The literature shows that, with increasing temperature, the permeation rate increases in polymer inclusion membranes, while it decreases in Zif-8 membranes [8, 24]. Therefore, the increasing trends of the data show that the polymer inclusion membrane layer is the determining and limiting factor in the permeation rate. The next thing to note is that the neat PIM exhibited significantly higher gas permeability than PIM-co-MOF. This reveals that the continuous coverage of the membrane surface by Zif-8 caused an almost 50% permeability reduction compared to the neat PIM. Gas permeation through the PIM-co-MOF membrane can be explained by a combination of two mechanisms: solution-diffusion and adsorption-diffusion. This means that gas molecules first adsorb on the MOF layer, then undergo Knudsen diffusion through the MOF channels, next dissolve in the polymer inclusion membrane, reach the PIM surface and diffuse through it, and finally desorb on the other side of the membrane. With increasing temperature, the molecular motions in the PIM increase and, subsequently, the solution and diffusion of gas species increase, while gas adsorption on the surface of the Zif-8 layer decreases, especially for CO2; the gas permeability therefore decreases in MOF membranes. As a result of these two effects, the gas permeability of PIM-co-MOF does not increase as much as that of the PIM. Moreover, it can be seen from the figures that the gas permeation rates increase slightly with increasing feed pressure. With increasing feed pressure, on the one hand, viscous flow and Knudsen diffusion increase considerably in the MOF layer, and on the other hand, the free volume of the membrane decreases, exerting an opposite effect on the permeation of gases through the PIM-co-MOF membrane. Therefore, no significant change in permeability is observed (especially for N2, whose permeation is more influenced by membrane compaction). Overall, N2 and CH4 hardly pass through the membrane by a solution-diffusion mechanism, while CO2 transport follows an adsorption-controlled mechanism [8]. Figure 14 compares the effect of feed pressure on the CO2/CH4 and CO2/N2 permselectivity of both types of membranes at 25 and 50°C.
Figure 14
Permselectivity of PIM and PIM-co-MOF at two temperatures and different pressures. A comparison of the permselectivity data indicates a slight increase in both the CO2/CH4 and CO2/N2 ideal selectivity with increasing feed pressure, which can be attributed to a relatively larger increase in CO2 permeability than in N2 and CH4 permeability as the pressure increases. Although the same trend of selectivity values with temperature was observed for both types of membranes, the ideal permselectivity values measured for the PIM-co-MOF membrane are higher than those calculated for the neat PIM. Although the CO2 permeability is reduced by an average of 21% by the crystal growth of Zif-8 nanoparticles on the surface of the polymer inclusion membrane, the CO2/N2 and CO2/CH4 selectivity of the membranes, which follow the adsorption-diffusion mechanism, increased significantly. As can be seen, the CO2/N2 and CO2/CH4 selectivity increases at 50°C, which can be attributed to a carrier-mediated diffusion mechanism for CO2 transport through PIMs, similar to what was reported by Kebiche-Senhadji et al. Crown ether, as the carrier, can react with CO2 in a reversible complexation, and consequently the total CO2 flux originates from the facilitated transport of the carrier–CO2 complex together with the simple diffusion of CO2.
## 6. Conclusions
Zif-8 was successfully anchored onto the surface of an amino-functionalized PIM configured in a flat-sheet membrane module during the ligand-extraction process. The different characterization techniques confirmed the growth of a thick and continuous layer of Zif-8 on the aminosilane-modified PIM. According to the observations, the MOF crystalline layer could not grow well on the neat PIM surface. The gas transport behavior of both the neat PIM and the PIM-co-MOF was measured using pure CO2, N2, and CH4 gases at different temperatures and pressures. The growth of the Zif-8 nanoparticle layer on the modified PIM leads to a dramatic increase in selectivity and a slight decrease in gas permeability. The membranes have the same permeation rate order, CO2 > CH4 > N2. The permeability of both types of membranes increases with increasing temperature for the three gases; this increase is much more significant for the neat PIM than for the PIM-co-MOF. As the temperature increases, the CO2/N2 and CO2/CH4 selectivity decreases in both cases, slightly more markedly for the PIM-co-MOF. The results revealed no significant effect of feed pressure on permeability. The DFT study shows that the Zif-8 compound has metallic behavior and that its tendency to adsorb CO2 molecules is greater than for the other gases. The optical DFT calculations also confirmed that Zif-8+CO2 is suited to optical infrared applications owing to its higher absorption coefficient. These results demonstrate the agreement between experiment and theory. This work provides guidelines for the rational design of MOF-covered polymer inclusion membranes for gas separation applications or as sensing materials for the detection of combustion gases.
---
*Source: 1018347-2020-06-05.xml* | 2020 |
# 1D Nanomaterials: Synthesis, Properties, and Applications
**Authors:** Yun Zhao; Haiping Hong; Qianming Gong; Lijun Ji
**Journal:** Journal of Nanomaterials
(2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101836
---
## Body
---
*Source: 101836-2013-07-16.xml* | 2013 |
# Corrigendum to “Molecular Dynamics Simulation of the Cu/Au Nanoparticles Alloying Process”
**Authors:** Linxing Zhang; Qibin Li; Sen Tian; Guang Hong
**Journal:** Journal of Nanomaterials
(2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1018369
---
## Body
---
*Source: 1018369-2020-03-31.xml* | 2020 |
# Automated Visual Inspection of Ship Hull Surfaces Using the Wavelet Transform
**Authors:** Carlos Fernández-Isla; Pedro J. Navarro; Pedro María Alcover
**Journal:** Mathematical Problems in Engineering
(2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101837
---
## Abstract
A new online visual inspection technique is proposed, based on a wavelet reconstruction scheme applied to images obtained from the hull. This type of visual inspection, aimed at detecting defects in hull surfaces, is commonly carried out at shipyards by human inspectors before the hull repair tasks start. We propose the use of Shannon entropy for the automatic selection of the band for image reconstruction, which yields a low decomposition level, thus avoiding excessive degradation of the image and allowing more precise defect segmentation. The method proposed here is capable of assisting a robotic system online in performing grit-blasting operations over damaged areas of ship hulls. This solution allows a reliable and cost-effective operation for hull spot grit-blasting. A prototype of the automated blasting system has been developed and tested in the Spanish NAVANTIA shipyards.
---
## Body
## 1. Introduction
The main maintenance care of ships consists of periodic (every 4-5 years) hull treatment, which includes blasting work; blasting consists of projecting a high-pressure jet of abrasive matter (typically water or grit) onto a surface to remove adherences or rust traces. The object of this task is to maintain hull integrity, guarantee navigational safety conditions, and ensure that the surface offers little resistance to the water in order to reduce fuel consumption. This can be achieved by grit blasting [1] or ultra-high-pressure water jetting [2]. In most cases these techniques are applied using manual or semiautomated procedures with the help of robotized devices [3]. In either case defects are detected by human operators; this is therefore a subjective task, vulnerable to cumulative operator fatigue and highly dependent on the experience of the personnel performing it. Figure 1 shows a view of ships' hulls under repair at NAVANTIA's shipyards.
Ships' hulls at a repair yard.
(a) Shaped hull. (b) Flat hull. (c) Damaged portions. (d) Detail visual appearance.

From an operational point of view, there are two working modes: full blasting and spot blasting. Full blasting consists of blasting the entire hull of the ship, while spot blasting consists of blasting numerous isolated areas where corrosion has been observed. Spot blasting is the most demanded operation for cost-saving reasons. This second working mode demands very precise information about the position, size, and shape of the damaged portions of the hull for the robotic devices [3–5] to achieve maximum efficiency. This paper proposes a computer vision algorithm which equips a machine vision system (see Figure 2) capable of precisely detecting defects in ship hulls and simple enough to be implemented in such a way as to meet the real-time requirements of the application.
Figure 2
Machine vision system for hull blasting. Because of the textured appearance of the hull's surface under inspection (see Figures 1(c) and 1(d)), we have used the wavelet transform, and the developed computer vision algorithm includes an image reconstruction approach based on the automatic selection of the optimal wavelet transform resolution level, using Shannon entropy.
## 2. Defect Detection in Textured Surfaces
Texture is a very important characteristic for identifying defects or flaws, as it provides essential information for defect detection. In fact, the task of detecting defects has largely been treated as a texture analysis problem. Figure 3 shows several texture images of ship hull surfaces.
Figure 3
Texture images from ships' hull surfaces. In his review, Xie [6] classified texture analysis techniques for defect detection into four categories: statistical, structural, filter-based, and model-based approaches. Similarly, in his review of fabric defect detection, Kumar [7] classified the proposed solutions into three categories: statistical, spectral, and model-based. In their review of automated defect detection in fabric, Ngan et al. [8] classified defect detection techniques for textured fabric into nonmotif-based and motif-based approaches. The motif-based approach [9] uses the symmetry property of motifs to calculate the energy of moving subtraction and its variance among different motifs. Many defect detection methods use clustering techniques, which are mainly based on texture feature extraction and texture classification. These features are extracted using methods such as the co-occurrence matrix [10], the Fourier transform [11], the Gabor transform [12], or the wavelet transform [13]. Spectral methods for texture analysis characterize the frequency content of a texture image (Fourier transform) or provide spatial-frequency analysis (Gabor filters, wavelets). The two-dimensional spectrum of a visual texture frequently contains information about the periodicity and directionality of the texture pattern. For example, a texture with a fine appearance, analyzed from the spectral point of view, shows high-frequency components, while a texture with a coarse appearance shows low-frequency components. The analytical methods based on the Fourier transform show good results for texture patterns with high regularity and/or directionality, but they are limited by a lack of spatial localization. In this regard, Gabor filters provide better spatial localization, although their utility for natural textures is limited because there is no single filter resolution that can localize an arbitrary structure. The wavelet transform has some advantages over the Gabor transform, such as the fact that the variation of the spatial resolution makes it possible to represent textures at the appropriate scale, as well as the freedom to choose from a wide range of wavelet functions.
## 3. The Wavelet Transform for Defect Detection
The suitability of wavelet transforms for image analysis is well established: a representation in terms of the frequency content of local regions over a range of scales provides an ideal framework for the analysis of image features, which in general are of different sizes and can often be characterized by their frequency-domain properties [14]. This makes the wavelet transform an attractive option for defect detection in textured products, as reported by Truchetet and Laligant [15] in their review of industrial applications of wavelet-based image processing. They reported different uses of wavelet analysis in successful machine vision applications: detecting manufacturing defects in the production of furniture, textiles, integrated circuits, and so forth, from the wavelet transformation and the vector-quantization-related properties of the associated wavelet coefficients; printing defect identification and classification (applied to printed decoration and tampon-printed images) by analyzing the fractal properties of a textured image; online inspection of the loom under construction using a specific class of the 2D discrete wavelet transform (DWT), called the multiscale wavelet representation, with the objectives of attenuating the background texture and accentuating the defects; and an online fabric inspection device performing independent component analysis on a subband decomposition provided by a 2-level DWT in order to increase the defect detection rate. The review of the literature shows two categories of defect detection methods based on the wavelet transform. The first category comprises direct thresholding methods [10, 11], whose design is based on the fact that the texture background can be attenuated by the wavelet decomposition. If we remove the texture pattern from the real texture, it becomes feasible to use existing defect detection techniques for nontextured images, such as thresholding techniques [16]. Textural features extracted from wavelet-decomposed images form another category that is widely used for defect detection [17, 18]. Features extracted from the texture patterns are used as feature vectors to feed a classifier (Bayes, Euclidean distance, neural networks, or support vector machines), which has unavoidable drawbacks when dealing with the vast image data obtained during inspection tasks. For instance, proximity-based methods tend to be computationally expensive, and there is no straightforward way of defining a meaningful stopping criterion for data fusion (or division). Often, learning-based classifiers need to be trained with nondefect features, which is a troublesome and usually time-consuming procedure, thus limiting their real-time application [10]. For this reason we have focused on direct thresholding methods. Direct thresholding presents one main challenge, namely how to select the decomposition level, and two main drawbacks: (1) an excessive wavelet decomposition level produces a fusion of defects with the texture pattern, and (2) a wrong reconstruction scheme produces false positives when defects are detected.
For this purpose we propose to calculate the optimal decomposition level from the ratio between the entropy of the approximation subimage and the total entropy, computed as the sum of the entropies of every subimage.
### 3.1. Wavelet Decomposition
For an image $f(x,y)$ of size $M \times N$ pixels, each level of wavelet decomposition is obtained by applying two filters: a low-pass filter (L) and a high-pass filter (H). The different combinations of these filters produce four images, denoted here with the subscripts LL, LH, HL, and HH. The first decomposition level produces four subimages or bands: one smooth image, also called the approximation, $f_{LL}^{(1)}(x,y)$, which represents an approximation of the original image $f(x,y)$, and three detail subimages $f_{LH}^{(1)}(x,y)$, $f_{HL}^{(1)}(x,y)$, and $f_{HH}^{(1)}(x,y)$, which represent the horizontal, vertical, and diagonal details, respectively. With this notation, $f_{LL}^{(0)}(x,y)$ represents the original image $f(x,y)$, and $f_{LL}^{(j)}(x,y)$ represents the approximation image at decomposition level $j$. From each approximation $f_{LL}^{(j)}(x,y)$ we obtain four subimages, designated here as $f_{LL}^{(j+1)}(x,y)$, $f_{LH}^{(j+1)}(x,y)$, $f_{HL}^{(j+1)}(x,y)$, and $f_{HH}^{(j+1)}(x,y)$, which together form decomposition level $j+1$. The pyramid algorithm used to obtain $f_{LL}^{(j)}$, $f_{LH}^{(j)}$, $f_{HL}^{(j)}$, and $f_{HH}^{(j)}$, as well as the calculation of the inverse transform, can be found in [20]. We designate $F(x,y) = W^{-1}[f_{LL}^{(j)}]$ as the inverse wavelet transform of a subimage $f_{LL}^{(j)}$ from resolution level $j$ to level 0. Figure 4 shows the wavelet decomposition, conveniently scaled, for three levels ($j=3$) of a statistical texture pattern (a painted surface) with corrosion defects; the different subimages or bands (named LL, LH, HL, and HH) are shown. These were obtained after applying the different coefficients of the wavelet filters. More specifically, the image in Figure 4(a) was decomposed by applying the Haar wavelet with two coefficients. At level $j$, images of size $(N/2^j) \times (M/2^j)$ pixels are obtained by iterative application of the pyramid algorithm. Note also that the subimages corresponding to the different decomposition levels are produced by successively applying the low-pass and high-pass filters and downsampling the rows and columns by a factor of two.
Decomposition of an image from a damaged hull, using the Haar wavelet with two coefficients, at three decomposition levels: (a) original image, (b) first decomposition level, (c) second decomposition level, and (d) third decomposition level.
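For readers who wish to reproduce the pyramid decomposition of this section, a minimal sketch using the open-source PyWavelets package is given below; the random array is a stand-in for a hull image, and the subband naming follows PyWavelets conventions rather than the paper's.

```python
import numpy as np
import pywt

# Illustrative pyramid decomposition (not the authors' code).
f = np.random.rand(512, 512)           # stand-in for a 512x512 hull image

# One decomposition level: f_LL^(1) plus the LH, HL, and HH detail bands.
f_LL, (f_LH, f_HL, f_HH) = pywt.dwt2(f, "haar")
print(f_LL.shape)                      # (256, 256): rows and columns halved

# Three levels (j = 3), as in Figure 4: dwt2 is iterated on f_LL^(j).
coeffs = pywt.wavedec2(f, "haar", level=3)
f_LL3 = coeffs[0]                      # approximation f_LL^(3), 64x64 pixels

# Inverse transform F = W^{-1}[f_LL^(3)]: zero all detail bands first.
coeffs_a = [coeffs[0]] + [tuple(np.zeros_like(c) for c in t) for t in coeffs[1:]]
F = pywt.waverec2(coeffs_a, "haar")
```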
## 4. Entropy-Based Method for the Automatic Selection of the Wavelet Decomposition Level
In image processing, entropy has been used by many authors as part of algorithm development. There are examples of the use of entropy in thresholding algorithms [21] and image segmentation [22]; as a descriptor for texture classification [23]; as one of the parameters selected by Haralick et al. for application to gray-level co-occurrence matrices and used for texture characterization [24]; and as an element of the feature vectors used for classification by Bayesian techniques [25], neural networks [26], compact support vectors [27], and so forth.
### 4.1. Automatic Selection of the Appropriate Decomposition Level
In this work we propose a novel approach for the automatic selection of the appropriate decomposition level by means of Shannon entropy. The entropy function is used to identify the resolution level that provides the most information about defects in real textures. For this purpose, the intensity levels of the subimages of the wavelet transform are considered as random samples. The concept of information entropy (Shannon entropy) describes how much randomness (or uncertainty) there is in a signal or an image; in other words, how much information the signal or image provides. In physical terms, the greater the information entropy of the image, the higher its quality [28]. Figure 5 shows how the texture pattern degrades as the decomposition level increases. This degradation is distributed among the different decomposition levels depending on the nature of the texture and can be quantified by means of the Shannon entropy.
Approximation subimages $f_{LL}^{(j)}$ of four wavelet decomposition levels for different images ((a) H225, (b) H34, (c) H241, (d) H137, and (e) H10) from portions of ships' hulls.
The Shannon entropy function [28, 29] is calculated according to the expression
$$s(X) = -\sum_{i=1}^{T} p(x_i) \log p(x_i), \tag{1}$$
where $X = \{x_1, x_2, \ldots, x_T\}$ is a set of random variables with $T$ outcomes and $p(x_i)$ is the probability of occurrence associated with $x_i$.

For a 256-gray-level image of size $N_t$ pixels, we define a set of random variables $X = \{x_1, x_2, \ldots, x_i, \ldots, x_{256}\}$, where $x_i$ is the number of pixels in the image that have gray level $i$. The probability of this random variable $x_i$ is calculated as the number of occurrences, $\mathrm{hist}[x_i]$, divided by the total number of pixels $N_t$:

$$p(x_i) = \frac{\mathrm{hist}[x_i]}{N_t}. \tag{2}$$

To calculate the value of the Shannon entropy on the approximation subimage $f_{LL}^{(j)}(x,y)$ and on the horizontal, vertical, and diagonal detail subimages $f_{LH}^{(j)}(x,y)$, $f_{HL}^{(j)}(x,y)$, and $f_{HH}^{(j)}(x,y)$ at each decomposition level $j$, we first obtain the inverse wavelet transform of every subimage and then apply (3):
$$s_{LL}^{j} = s\!\left[W^{-1}[f_{LL}^{(j)}(x,y)]\right], \quad s_{LH}^{j} = s\!\left[W^{-1}[f_{LH}^{(j)}(x,y)]\right], \quad s_{HL}^{j} = s\!\left[W^{-1}[f_{HL}^{(j)}(x,y)]\right], \quad s_{HH}^{j} = s\!\left[W^{-1}[f_{HH}^{(j)}(x,y)]\right]. \tag{3}$$

The normalized entropy of each subimage, for a decomposition level $j$, has been calculated as
$$S_s^{j} = \frac{1}{N_{\mathrm{pixels}}^{j}} \sum_x \sum_y s_{LL}^{j}(x,y), \quad S_h^{j} = \frac{1}{N_{\mathrm{pixels}}^{j}} \sum_x \sum_y s_{LH}^{j}(x,y), \quad S_v^{j} = \frac{1}{N_{\mathrm{pixels}}^{j}} \sum_x \sum_y s_{HL}^{j}(x,y), \quad S_d^{j} = \frac{1}{N_{\mathrm{pixels}}^{j}} \sum_x \sum_y s_{HH}^{j}(x,y), \tag{4}$$
where $N_{\mathrm{pixels}}^{j}$ is the number of pixels at each decomposition level $j$.
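For illustration, a short C++ sketch of the entropy computation follows: the histogram probabilities of (2) feed the sum in (1), and dividing by the pixel count gives one plausible reading of the per-level normalization in (4). Function names are hypothetical, and the natural logarithm is a choice made here (the paper does not state the base).

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Shannon entropy (1) of an 8-bit image, with p(x_i) taken from the
// 256-bin histogram as in (2). `pixels` holds all gray levels of a
// reconstructed subimage.
double shannonEntropy(const std::vector<unsigned char>& pixels) {
    std::size_t hist[256] = {0};
    for (unsigned char g : pixels) ++hist[g];          // hist[x_i]
    const double Nt = static_cast<double>(pixels.size());
    double s = 0.0;
    for (int i = 0; i < 256; ++i) {
        if (hist[i] == 0) continue;                    // 0 * log 0 taken as 0
        const double p = hist[i] / Nt;                 // (2)
        s -= p * std::log(p);                          // (1)
    }
    return s;
}

// One reading of the normalization in (4): entropy divided by the
// number of pixels at level j.
double normalizedEntropy(const std::vector<unsigned char>& subimage) {
    return shannonEntropy(subimage) / static_cast<double>(subimage.size());
}
```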
Table 1 shows the values of the normalized Shannon entropy calculated for the images of Figure 5.

Table 1: Normalized entropies of four decomposition levels for the textures of Figure 5.

| Image | Decomposition level, $j$ | $S_s^j$ | $S_h^j$ | $S_v^j$ | $S_d^j$ |
|---|---|---|---|---|---|
| H225 | $j=1$ | 0.00011 | 0.00008 | 0.00007 | 0.00004 |
| | $j=2$ | 0.00048 | 0.00042 | 0.00042 | 0.00037 |
| | $j=3$ | 0.00200 | 0.00172 | 0.00168 | 0.00179 |
| | $j=4$ | 0.00796 | 0.00750 | 0.00681 | 0.00716 |
| H34 | $j=1$ | 0.00011 | 0.00007 | 0.00008 | 0.00006 |
| | $j=2$ | 0.00046 | 0.00038 | 0.00039 | 0.00038 |
| | $j=3$ | 0.00185 | 0.00179 | 0.00169 | 0.00186 |
| | $j=4$ | 0.00732 | 0.00750 | 0.00662 | 0.00698 |
| H241 | $j=1$ | 0.00011 | 0.00007 | 0.00007 | 0.00005 |
| | $j=2$ | 0.00046 | 0.00038 | 0.00037 | 0.00036 |
| | $j=3$ | 0.00194 | 0.00159 | 0.00151 | 0.00164 |
| | $j=4$ | 0.00763 | 0.00669 | 0.00640 | 0.00643 |
| H137 | $j=1$ | 0.00011 | 0.00007 | 0.00006 | 0.00004 |
| | $j=2$ | 0.00047 | 0.00028 | 0.00027 | 0.00031 |
| | $j=3$ | 0.00191 | 0.00127 | 0.00132 | 0.00118 |
| | $j=4$ | 0.00726 | 0.00593 | 0.00581 | 0.00524 |
| H10 | $j=1$ | 0.00013 | 0.00004 | 0.00004 | 0.00002 |
| | $j=2$ | 0.00057 | 0.00035 | 0.00037 | 0.00025 |
| | $j=3$ | 0.00214 | 0.00190 | 0.00182 | 0.00140 |
| | $j=4$ | 0.00841 | 0.00711 | 0.00737 | 0.00665 |

Shannon entropy gives us information about the amount of texture pattern that remains after every decomposition level. Considering (2), entropy provides a measurement of the histogram distribution: the higher the entropy, the greater the histogram uniformity, that is, the greater the amount of texture pattern contained in the image. As the decomposition level increases, the texture pattern is removed; that is, the information content decreases, so the histogram distribution gains uniformity. An optimal reconstruction scheme would eliminate the texture pattern without loss of defect information. To determine this optimal decomposition level we use a ratio $R_j$ (see (5)) between the entropy of the approximation subimage and the sum of the entropies of all subimages, so $R_j$ indicates how much information about the texture pattern is contained in decomposition level $j$. Variations in this ratio allow detecting changes in the amount of information about the texture pattern between two consecutive decomposition levels:
$$R_j = \frac{S_s^j}{S_s^j + S_h^j + S_v^j + S_d^j}, \qquad j = 1, 2, \ldots \tag{5}$$

The goal is to find the optimal decomposition level which provides the maximum variation between two consecutive $R_j$ values, because this indicates that, at decomposition level $j$, the texture pattern still present at level $j-1$ has been removed while the useful information (defects) is kept.

For this purpose we define $ADR_j$ as the difference between two consecutive $R_j$ values (see (6)). The optimal decomposition level $J^*$ is calculated as the value of $j$ for which $ADR_j$ takes its maximum value. This maximum points out the greatest variation of information content between two consecutive decomposition levels, which means that both levels are sufficiently separated in terms of texture pattern information content, and the decomposition process should end. For decomposition levels $j < J^*$, $ADR_j$ indicates that significant texture pattern information still remains in the approximation subimage, and the decomposition process should continue. For decomposition levels $j > J^*$, $ADR_j$ indicates that the approximation subimage is oversmoothed, and reconstruction from such a smooth approximation subimage will cause defect loss:
$$ADR_j = \begin{cases} 0, & j = 1 \\ R_j - R_{j-1}, & j = 2, 3, \ldots \end{cases} \qquad J^* = \arg\max_j\left(\left|ADR_j\right|\right). \tag{6}$$

Table 2 shows the values of the $R_j$ coefficients at every image decomposition level $j$ for the different textures shown in Figure 5, together with the $ADR_j$ values.
Table 2: $R_j$, $ADR_j$, and optimal decomposition level obtained for the texture images of Figure 5.

| Image | | $j=1$ | $j=2$ | $j=3$ | $j=4$ | $J^*$ |
|---|---|---|---|---|---|---|
| H225 | $R_j$ | 0.3785 | 0.2812 | 0.2778 | 0.2704 | 2 |
| | $ADR_j$ | 0 | 0.0973 | 0.0035 | 0.0074 | |
| H34 | $R_j$ | 0.3460 | 0.2852 | 0.2576 | 0.2575 | 2 |
| | $ADR_j$ | 0 | 0.0608 | 0.0276 | 0.0002 | |
| H241 | $R_j$ | 0.3626 | 0.2931 | 0.2899 | 0.2811 | 2 |
| | $ADR_j$ | 0 | 0.0695 | 0.0032 | 0.0088 | |
| H137 | $R_j$ | 0.388067 | 0.353254 | 0.336133 | 0.299523 | 4 |
| | $ADR_j$ | 0 | 0.034813 | 0.017121 | 0.036610 | |
| H10 | $R_j$ | 0.545018 | 0.369480 | 0.294868 | 0.284585 | 2 |
| | $ADR_j$ | 0 | 0.175538 | 0.074612 | 0.010283 | |

Once the optimal decomposition level is obtained, the process ends with the production of the reconstructed image using (7):

$$F(x,y) = W^{-1}\left[f_{LL}^{(j)}(x,y)\right]. \tag{7}$$
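A compact C++ sketch of this selection rule, under the same illustrative conventions as the earlier snippets: it evaluates (5) over the available levels, applies (6), and returns the 1-based level $J^*$, whose approximation band would then be reconstructed by (7). Applied to the H225 row of Table 2, it returns $J^* = 2$, matching the table.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Normalized entropies of the four bands at one level, as in (4).
struct LevelEntropies { double Ss, Sh, Sv, Sd; };

// Selection of J* following (5) and (6). S[0] holds level j = 1,
// S[1] level j = 2, and so on; the returned level is 1-based.
std::size_t optimalLevel(const std::vector<LevelEntropies>& S) {
    std::vector<double> R(S.size());
    for (std::size_t j = 0; j < S.size(); ++j)         // (5): entropy ratio
        R[j] = S[j].Ss / (S[j].Ss + S[j].Sh + S[j].Sv + S[j].Sd);
    std::size_t Jstar = 1;
    double best = 0.0;
    for (std::size_t j = 1; j < R.size(); ++j) {       // (6): ADR_j for j >= 2
        const double adr = std::fabs(R[j] - R[j - 1]);
        if (adr > best) { best = adr; Jstar = j + 1; }
    }
    return Jstar;                                      // reconstruct via (7)
}
```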
### 4.2. Smoothing Mask
To remove the noise running through the successive decomposition levels, we applied average-based smoothing over the image $F(x,y)$ to obtain $F'(x,y)$, as shown in (8):

$$F'(x,y) = \frac{1}{k^2} \sum_{i=0}^{k-1} \sum_{j=0}^{k-1} F\!\left(x - \left\lfloor \frac{k}{2} \right\rfloor + i,\; y - \left\lfloor \frac{k}{2} \right\rfloor + j\right), \tag{8}$$
where $k$ is the size of the smoothing mask (see Figure 6).

Figure 6: Smoothing mask ($k=3$) for the wavelet coefficients.
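A direct C++ rendering of (8) follows, reusing the `Image` type from the decomposition sketch; since the paper does not specify how image borders are handled, clamping to the nearest edge pixel is our assumption.

```cpp
#include <algorithm>
#include <cstddef>

// k x k mean filter of (8) over the reconstructed image F. Border
// pixels are clamped to the image edge (an assumption made here).
Image smooth(const Image& F, std::size_t k) {
    Image out = F;
    const long rows = static_cast<long>(F.rows);
    const long cols = static_cast<long>(F.cols);
    const long h = static_cast<long>(k) / 2;           // floor(k / 2)
    for (long y = 0; y < rows; ++y)
        for (long x = 0; x < cols; ++x) {
            double acc = 0.0;
            for (long i = 0; i < static_cast<long>(k); ++i)
                for (long j = 0; j < static_cast<long>(k); ++j) {
                    const long yy = std::clamp(y - h + i, 0L, rows - 1);
                    const long xx = std::clamp(x - h + j, 0L, cols - 1);
                    acc += F.at(yy, xx);               // window sum of (8)
                }
            out.at(y, x) = static_cast<float>(acc / (k * k));
        }
    return out;
}
```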
## 5. Results
### 5.1. Algorithm Implementation
The proposed computer vision algorithm was implemented as shown in Pseudocode 1, using the C++ programming language. The mother wavelet used for decomposition was the Haar basis function with two coefficients, applied up to a fourth decomposition level. A decomposition level higher than four produced the fusion of defects with the background, thus reducing the probability of defect detection.

Pseudocode 1: Pseudocode to implement the developed algorithm.
Step 1. Compute the Shannon entropies $S_s^j$, $S_h^j$, $S_v^j$, and $S_d^j$.
Step 2. Compute $R_j = S_s^j/(S_s^j + S_h^j + S_v^j + S_d^j)$ for $j = 1, 2, 3, \ldots, J$.
Step 3. Compute the optimal decomposition level: $J^* = \arg\max_j\{|ADR_j|\}$, $j = 1, 2, 3, \ldots, J$.
Step 4. Compute $F = W^{-1}[f_{LL}^{(J^*)}]$.
Step 5. Compute $F' = m_{k \times k}[F]$.
Step 6. Binarize $F'$.
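Tying the steps together, a sketch of the overall flow in C++ follows, reusing the earlier sketches (`haarDecompose`, `optimalLevel`, `smooth`). Here `entropiesOf`, `inverseWavelet`, and `binarize` are hypothetical stand-ins for the per-band entropy computation of (3) and (4), the inverse pyramid transform, and Kapur's thresholding [21], none of which are reproduced here.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical helpers assumed to be implemented elsewhere:
LevelEntropies entropiesOf(const HaarLevel& level);    // via (3) and (4)
Image inverseWavelet(const Image& fLL, std::size_t j); // W^{-1}, level j to 0
Image binarize(const Image& F);                        // Kapur's method [21]

// End-to-end flow of Pseudocode 1. maxLevels = J (four in the paper).
Image inspect(const Image& input, std::size_t maxLevels, std::size_t k) {
    std::vector<LevelEntropies> S;
    std::vector<HaarLevel> pyramid;
    Image approx = input;
    for (std::size_t j = 1; j <= maxLevels; ++j) {     // Steps 1 and 2
        pyramid.push_back(haarDecompose(approx));
        approx = pyramid.back().LL;
        S.push_back(entropiesOf(pyramid.back()));
    }
    const std::size_t Jstar = optimalLevel(S);         // Step 3, (5)-(6)
    Image F = inverseWavelet(pyramid[Jstar - 1].LL, Jstar);  // Step 4, (7)
    Image Fsmooth = smooth(F, k);                      // Step 5, (8)
    return binarize(Fsmooth);                          // Step 6
}
```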
### 5.2. Implementation of the Computer Vision System
The computer vision system for visual inspection of ship hull surfaces (Figure 2) has been implemented on a Pentium computer with a Meteor II/1394 card. This card is connected to the microprocessor via a PCI bus and is used as a frame grabber. For that purpose the card has a processing node based on the TMS320C80 DSP from Texas Instruments and the Matrox NOA ASIC. In addition, the card has a FireWire input/output bus (IEEE 1394) which enables it to control a half-inch digital colour camera (15 fps, 1024 × 768 square pixels) equipped with a wide-angle lens (f = 4.2 mm).

The software development environment used to implement the system software modules was the Visual C++ programming language powered by the Matrox Imaging Library v9.0. The system also had a Siemens CP5611 card which acted as a PROFIBUS-DP interface for connection with the corresponding robotized blasting system. A Honeywell sensor was used to measure the distance to the ship by ultrasound, with a range of 200–2000 mm and an output of 4–20 mA. User access to the computer vision system was by means of an industrial PDA (Mobic T8 from Siemens) and a wireless access point. Among other functions, the software that has been developed allows the operator to (1) enter the system configuration parameters, (2) visualize the detected areas to blast for validation by the operator before blasting commences, and (3) calibrate the computer vision system.
### 5.3. Validation Environment
The proposed computer vision algorithm was assessed at the NAVANTIA shipyard in Ferrol (Spain) on a robotized system used for automatic spot blasting. This operation accounts for 70% of all cleaning work carried out at that shipyard. The robotized system (Figure 7) consists of a mechanical structure divided into two parts: primary and secondary. The primary structure holds the secondary structure (an XYZ table), which supports the cleaning head and the computer vision system. More information regarding this system can be found in [5].

Figure 7: Robotized blasting system.

With the help of this platform, 260 images of ship hull surfaces (with and without defects) were taken, similar to those shown in Figure 3. In this way a catalogue was compiled of typical surface defects as they appear before grit blasting.
### 5.4. Metrics
To conduct a quantitative analysis of the quality of the proposed segmentation method, we need metrics that are well suited to that purpose. The performance of image segmentation methods has been assessed by authors such as Zhang [30] and Sezgin and Sankur [16], who proposed various metrics for measuring the quality of segmentation in a given method, using parameters like pixel position, area, edges, and so forth. Of these, one of the quantitative appraisal methods proposed by Sezgin and Sankur was selected and examined: the Misclassification Error (ME). ME represents the percentage of background pixels that are incorrectly allocated to the object (i.e., to the foreground), or vice versa:

$$ME = 1 - \frac{\left|B_P \cap B_T\right| + \left|O_P \cap O_T\right|}{\left|B_P\right| + \left|O_P\right|}. \tag{9}$$

The error can be calculated by means of (9), where $B_P$ (background pattern) and $O_P$ (object pattern) represent the pattern image of the background and of the object taken as reference, and $B_T$ (background test) and $O_T$ (object test) represent the image to be assessed. In the event that the test image coincides with the pattern image, the classification error will be zero and the performance of the segmentation will therefore be maximal. The performance of the implemented algorithms is assessed according to the equation

$$\eta = 100 \cdot (1 - ME). \tag{10}$$
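For binary segmentations stored as boolean masks, (9) and (10) reduce to a pixel-agreement count, as the following C++ sketch shows (the function names are ours, for illustration only):

```cpp
#include <cstddef>
#include <vector>

// Misclassification Error (9) for binary segmentations. `pattern` is the
// hand-segmented reference and `test` the algorithm output; true marks an
// object (foreground) pixel. Since B_P and O_P partition the reference
// image, |B_P| + |O_P| is simply the total pixel count.
double misclassificationError(const std::vector<bool>& pattern,
                              const std::vector<bool>& test) {
    std::size_t agree = 0;              // |B_P intersect B_T| + |O_P intersect O_T|
    for (std::size_t i = 0; i < pattern.size(); ++i)
        if (pattern[i] == test[i]) ++agree;
    return 1.0 - static_cast<double>(agree) / pattern.size();     // (9)
}

// Segmentation performance (10), as a percentage.
double performanceEta(const std::vector<bool>& pattern,
                      const std::vector<bool>& test) {
    return 100.0 * (1.0 - misclassificationError(pattern, test));
}
```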
### 5.5. Algorithm Appraisal
The proposed visual inspection algorithm (see Pseudocode 1) was applied to the abovementioned catalogue of images taken at the shipyard (some samples are shown in column (i) of Figure 8). The Shannon entropy was calculated and normalized for four wavelet decomposition levels, and the optimal level $J^*$ was calculated using (6). The images were also processed applying the algorithms proposed by Han and Shi [10] and Tsai and Chiang [19]. The result was 3 sets of 260 reconstructed images in which the defects have been isolated from the texture. To check the quality of the defect detection algorithms we concluded with a binarization stage. For that purpose we selected Kapur's method [21], which belongs to the group of entropy-based methods, as classified by Sezgin and Sankur [16] in their review of thresholding methods; this resulted in 3 sets of 260 binarized images (column (ii) of Figure 8 shows some results obtained with the proposed algorithm, column (iii) shows some results obtained with the Tsai algorithm, and column (iv) shows some results obtained with the Han algorithm).

Figure 8. Columns: column (i) shows texture images of portions of hulls; column (ii) shows reconstructed images resulting from the proposed reconstruction scheme; column (iii) shows reconstructed images resulting from the Tsai algorithm; column (iv) shows reconstructed images resulting from the Han algorithm; column (v) shows defects segmented by hand, the "ground truth." Image rows: (a) H225, (b) H34, (c) H241, (d) H1, (e) H120, (f) H121, (g) H99, (h) H137, (i) H48, (j) H9, (k) H10, and (l) H11.
To apply the metrics described above, human inspectors were needed to segment each of the catalogue images manually (samples of these are shown in column (v) of Figure 8). Table 3 shows the performance ($\eta$) when the sample texture images of Figure 8 were segmented using the three algorithms.
Table 3: $\eta$ in defect segmentation of the texture images of Figure 8.

| Sample | Entropy based $\eta$ | $J^*$ | Tsai $\eta$ | $J^*$ | Han $\eta$ | $J^*$ |
|---|---|---|---|---|---|---|
| **Defect samples** | | | | | | |
| H225 | 97.09 | 2 | 94.51 | 4 | 2.14 | 4 |
| H34 | 97.41 | 2 | 74.52 | 2 | 3.43 | 4 |
| H241 | 96.04 | 2 | 95.17 | 4 | 97.41 | 2 |
| H1 | 86.50 | 3 | 82.63 | 4 | 84.69 | 4 |
| H120 | 91.40 | 3 | 89.22 | 4 | 90.08 | 4 |
| H121 | 89.36 | 4 | 89.91 | 4 | 90.23 | 4 |
| H99 | 89.30 | 4 | 73.76 | 4 | 91.46 | 3 |
| H137 | 93.94 | 4 | 82.87 | 4 | 94.26 | 3 |
| H48 | 93.06 | 4 | 93.97 | 4 | 64.23 | 4 |
| Average on defect samples | 92.68 | 3.11 | 86.28 | 3.78 | 68.66 | 3.56 |
| **Nondefect samples** | | | | | | |
| H9 | 98.30 | 2 | 74.04 | 3 | 99.61 | 4 |
| H10 | 98.33 | 2 | 80.85 | 4 | 48.05 | 4 |
| H11 | 98.22 | 2 | 89.06 | 4 | 48.65 | 4 |
| Average on nondefect samples | 98.28 | 2.00 | 81.32 | 3.67 | 65.44 | 4.00 |
| Total average | 95.48 | 2.56 | 83.80 | 3.72 | 67.05 | 3.78 |

As can be observed from the above results, the proposed entropy-based algorithm achieved better results than the Tsai algorithm and significantly better results than the Han algorithm. In both cases the proposed algorithm obtains higher performance with a lower decomposition level.

We also analysed the behaviour of the proposed algorithm in terms of misclassification rates. A set of 120 images was processed by the proposed algorithm and also by the Han and Tsai algorithms. The results were then analysed by a skilled blasting operator, who assessed which portions of the shown hull surface would be blasted in real conditions at the repair yard. Table 4 shows the average rate of defect points classified as Type I and Type II errors for 120 samples of the 260-image set indicated above.
Table 4: Automated inspection examined by a skilled blasting operator.

| | Entropy-based algorithm | Han algorithm | Tsai algorithm |
|---|---|---|---|
| Type I error | 6.8% | 9.2% | 11.1% |
| Type II error | 0.9% | 1.1% | 0.7% |

As we can see, the proposed algorithm produced better results as regards false positives, that is, points marked as defective when they are not (Type I error). This is essentially because the human operator tends to blast larger areas than necessary and, moreover, is less able to control the cut-off of the grit jet. On the other hand, the proposed algorithm identified a similar rate of false negatives (Type II error). This difference was not very significant and is quite acceptable in view of the clear advantage offered by the computer vision system equipped with the proposed inspection algorithm as regards Type I errors.
## 6. Conclusions
This paper has presented a computer vision algorithm based on the wavelet transform which provides a robust method for detecting defects in ship hull surfaces. To achieve this, we used an image reconstruction approach based on the automatic selection of the optimal wavelet transform resolution level by means of a novel use of the Shannon entropy, calculated on the different detail subimages.

The algorithm has been incorporated into a computer vision system that drives a robotized system for blasting ship hulls, making it possible to fully automate the grit blasting operation. The results as regards reliability were very similar to those achieved with human workers, while inspection was faster (by around 8% for flat surfaces in oil tankers and 15% for shaped hulls like frigates) and the consequences of operator fatigue were minimized.
## Abstract
A new online visual inspection technique is proposed, based on a wavelet reconstruction scheme over images obtained from the hull. This type of visual inspection to detect defects in hull surfaces is commonly carried out at shipyards by human inspectors before the hull repair task starts. We propose the use of Shannon entropy for automatic selection of the band for image reconstruction which provides a low decomposition level, thus avoiding excessive degradation of the image, allowing more precise defect segmentation. The proposed method here is capable of on-line assisting to a robotic system to perform grit blasting operations over damage areas of ship hulls. This solution allows a reliable and cost-effective operation for hull grit spot blasting. A prototype of the automated blasting system has been developed and tested in the Spanish NAVANTIA shipyards.
---
## Body
## 1. Introduction
Main ships’ maintenance care consists of periodical (every 4-5 years) hull treatment which includes blasting works; blasting consists in projecting a high-pressure jet of abrasive matter (typically water or grit) onto a surface to remove adherences or rust traces. The object of this task is to maintain hull integrity, guarantee navigational safety conditions, and assure that the surface offers little resistance to the water in order to reduce fuel consumption. That object can be achieved by grit blasting [1] or ultra-high pressure water jetting [2]. In most cases these techniques are applied using manual or semiautomated procedures with the help of robotized devices [3]. In either case defects are detected by means of human operators; this is therefore a subjective task and hence vulnerable to cumulative operator fatigue and highly dependent on the experience of the personnel performing the task. Figure 1 shows a view of ship’s hulls under repair at NAVANTIA’s shipyards.Ship’s hulls at a repair yard.
(a)
Shaped hull
(b)
Flat hull
(c)
Damaged portions
(d)
Detail visual appearanceFrom an operational point of view, there are two working modes: full blasting and spot blasting. Full blasting consists of blasting the entire hull of the ship, while spot blasting consists of blasting numerous isolated areas where corrosion has been observed. Spot blasting is the most demanded operation due to cost saving reasons. This second working mode demands very precise information about position, size, and shape of damaged portions of the hull to make robotic devices [3–5] to achieve maximum efficiency.This paper proposes a computer vision algorithm which equips a machine vision system (see Figure2), capable for precisely detecting defects in ship hulls which is simple enough to be implemented in such a way as to meet the real-time requirements for the application.Figure 2
Machine vision system for hull blasting.Because of the textured appearance of the hull’s surface under inspection (see Figures1(c) and 1(d)), we have used the wavelet transform, and the developed computer vision algorithm includes an image reconstruction approach based on automatic selection of the optimal wavelet transform resolution level, using Shannon entropy.
## 2. Defect Detection in Textured Surfaces
Texture is a very important characteristic when identifying defects or flaws, as it provides important information for defect detection. In fact, the task of detecting defects has been largely seen as a texture analysis problem. Figure3 shows several texture images from ship hull surfaces.Figure 3
Texture images from ships’ hull surface.In his review Xie [6] classified texture analysis techniques for defect detection in four categories: statistical approaches, structural approaches, filter based approaches, and model based approaches. In his review of fabric defect detection similarly Kumar [7] classified the proposed solutions into three categories: statistical, spectral, and model based. In his review of automated defect detection in fabric, Ngan et al. [8] classified defect detection techniques in textured fabric into nonmotif-based and motif-based approaches. The motif-based approach [9] uses the symmetry property of motifs to calculate the energy of moving subtraction and its variance among different motifs. Many defect detection methods usually use clustering techniques which are mainly based on texture feature extraction and texture classifications. These features are collated using methods such as cooccurrence matrix [10], Fourier transform [11], Gabor transform [12], or the wavelet transform [13].Spectral-approach methods for texture analysis characterize the frequency contents of a texture image—Fourier transform—or provide spatial-frequency analysis—Gabor filters, wavelets. A two-dimensional spectrum of a visual texture frequently contains information about the periodicity and directionality of the texture pattern. For example, a texture with a coarse appearance analysed from the spectral point of view shows high-frequency components, while a texture with a fine appearance shows low-frequency components. The analytical methods based on Fourier transform show good results in texture patterns with high regularity and/or directionality, but they are limited by a lack of spatial localization. In this field, Gabor filters provide better spatial localization, although their utility in natural textures is limited because there is no single filter resolution that can localize a structure. The wavelet transform has some advantages over the Gabor transform, such as the fact that the variation of the spatial resolution makes it possible to represent the textures in the appropriate scale, as well as to choose from a wide range of wavelet functions.
## 3. The Wavelet Transform for Defect Detection
The suitability of wavelet transforms for use in image analysis is well established: a representation in terms of the frequency content of local regions over a range of scales provides an ideal framework for the analysis of image features, which in general are of different size and can often be characterised by their frequency domain properties [14]. This makes the wavelet transform an attractive option when attempting defect detection in textured products, as reported by Truchetet and Laligant [15] in his review of industrial applications of wavelet-based image processing. He reported different uses of wavelet analysis in successful machine vision applications: detecting defects for manufacturing applications for the production of furniture, textiles, integrated circuits, and so forth, from their wavelet transformation and vector quantization-related properties of the associated wavelet coefficients; printing defect identification and classification (applied to printed decoration and tampon printed images) by analysing the fractal properties of a textured image; online inspection of the loom under construction using a specific class of the 2D discrete wavelet transform (DWT) called the multiscale wavelet representation with the objectives of attenuating the background texture and accentuating the defects; online fabric inspection device performing an independent component analysis on a subband decomposition provided by a 2-level DWT in order to increase the defect detection rate.The review of the literature shows two categories of defect detection methods based on wavelet transform. The first category includes direct thresholding methods [10, 11], whose design is based on the fact that texture background can be attenuated by the wavelet decomposition. If we remove the texture pattern from real texture, it is feasible to use existing defect detecting techniques for nontexture images, such as thresholding techniques [16]. Textural features extracted from wavelet-decomposed images are another category which is widely used for defect detection [17, 18]. Features extracted from the texture patterns are used as feature vectors to feed a classifier (Bayer, Euclidean distance, Neural Networks, or Support Vector Machines), which has unavoidable drawbacks when dealing with the vast image data obtained during inspection tasks. For instance, proximity-based methods tend to be computationally expensive and there is no straightforward way of defining a meaningful stopping criterion for data fusion (or division). Often, the learning-based classifiers need to be trained by the nondefect features, which is a troublesome and usually time consuming procedure, thus limiting its real-time applications [10]. For this reason we have focused on direct thresholding methods. The use of direct thresholding presents a main challenge: how to select the decomposition level. On the other hand, direct thresholding presents two main drawbacks: (1) an excessive wavelet decomposition level produces a fusion of defects with the texture pattern and (2) a wrong reconstruction scheme produces false positives when defects are detected.The work presented here is based on the authors’ research on previous works [10, 19] and addresses abovementioned drawbacks by a new approach based on Shannon Entropy calculation. Its main contribution is the formulation of a novel use of the normalized Shannon Entropy, calculated on the different detail subimages, to determine the optimal decomposition level in textures with low directionality. 
For this purpose we propose to calculate the optimal decomposition level as the maximum of the ratio between the entropy of the approximation subimage and the total entropy, as the sum of entropies calculated for every subimage.
### 3.1. Wavelet Decomposition
For an imagef(x,y) of size M×N pixels, each level of wavelet decomposition is obtained by applying two filters: a low-pass filter (L) and a high-pass filter (H). The different combinations of these filters produce four images that are here denoted with the subscripts LL, LH, HL, and HH. In the first decomposition level four subimages or bands are produced: one smooth image, also called approximation, fLL(1)(x,y), that represents an approximation of the original image f(x,y) and three detail subimages fLH(1)(x,y), fHL(1)(x,y), and fHH(1)(x,y), which represent the horizontal, vertical and diagonal details, respectively. With this notation, fLL(0)(x,y) represents the original image, f(x,y), and fLL(j)(x,y) represent the approximation image in the decomposition level j. From each decomposition level fLL(j)(x,y) we obtain four subimages, designated here as fLL(j+1)(x,y),fLH(j+1)(x,y),fHL(j+1)(x,y), and fHH(j+1)(x,y), which together form the decomposition level j+1. The pyramid algorithm to obtain fLL(j)(x,y),fLH(j)(x,y),fHL(j)(x,y), and fHH(j)(x,y), as well as the calculation of the inverse transform, can be found in [20]. We will designate F(x,y)=W-1[fLL(j)] as the inverse wavelet transform of a subimage fLL(j)from the resolution level j to level 0.Figure4 shows the wavelet decomposition for three conveniently scaled levels (j=3) of a statistical texture pattern—painted surface—with corrosion defects; the different subimages or bands are shown (named LL, LH, HL, and HH). These were obtained after applying the different coefficients of the wavelet filters. More specifically, the image in Figure 4(a) was decomposed through the application of the Haar wavelet with two coefficients. At level j, images of size (N/2j)×(M/2j) pixels are obtained by iterative application of the pyramid algorithm. Note also that the subimages corresponding to the different decomposition levels are produced by successively applying the low-pass and high-pass filters and reducing the rows and columns by a factor of two.Decomposition of an image from a damaged hull, using Haar wavelet with two coefficients, at three decomposition levels: (a) original image, (b) first decomposition level, (c) second decomposition level, and (d) third decomposition level.
(a)
(b)
(c)
(d)
## 3.1. Wavelet Decomposition
For an imagef(x,y) of size M×N pixels, each level of wavelet decomposition is obtained by applying two filters: a low-pass filter (L) and a high-pass filter (H). The different combinations of these filters produce four images that are here denoted with the subscripts LL, LH, HL, and HH. In the first decomposition level four subimages or bands are produced: one smooth image, also called approximation, fLL(1)(x,y), that represents an approximation of the original image f(x,y) and three detail subimages fLH(1)(x,y), fHL(1)(x,y), and fHH(1)(x,y), which represent the horizontal, vertical and diagonal details, respectively. With this notation, fLL(0)(x,y) represents the original image, f(x,y), and fLL(j)(x,y) represent the approximation image in the decomposition level j. From each decomposition level fLL(j)(x,y) we obtain four subimages, designated here as fLL(j+1)(x,y),fLH(j+1)(x,y),fHL(j+1)(x,y), and fHH(j+1)(x,y), which together form the decomposition level j+1. The pyramid algorithm to obtain fLL(j)(x,y),fLH(j)(x,y),fHL(j)(x,y), and fHH(j)(x,y), as well as the calculation of the inverse transform, can be found in [20]. We will designate F(x,y)=W-1[fLL(j)] as the inverse wavelet transform of a subimage fLL(j)from the resolution level j to level 0.Figure4 shows the wavelet decomposition for three conveniently scaled levels (j=3) of a statistical texture pattern—painted surface—with corrosion defects; the different subimages or bands are shown (named LL, LH, HL, and HH). These were obtained after applying the different coefficients of the wavelet filters. More specifically, the image in Figure 4(a) was decomposed through the application of the Haar wavelet with two coefficients. At level j, images of size (N/2j)×(M/2j) pixels are obtained by iterative application of the pyramid algorithm. Note also that the subimages corresponding to the different decomposition levels are produced by successively applying the low-pass and high-pass filters and reducing the rows and columns by a factor of two.Decomposition of an image from a damaged hull, using Haar wavelet with two coefficients, at three decomposition levels: (a) original image, (b) first decomposition level, (c) second decomposition level, and (d) third decomposition level.
(a)
(b)
(c)
(d)
## 4. Entropy-Based Method for the Automatic Selection of the Wavelet Decomposition Level
In image processing, entropy has been used by many authors as part of the algorithmic development procedure. There are examples of the use of entropy in the programming of thresholding algorithms [21] and image segmenting [22] as a descriptor for texture classification [23]; as one of the parameters selected by Haralick et al. for application to gray level concurrence matrixes and used for texture characterization [24]; as an element in characteristic vector groups used for classification by Bayesian techniques [25], neuronal networks [26], compact support vectors [27], and so forth.
### 4.1. Automatic Selection of the Appropriate Decomposition Level
In this work we propose a novel approach for the automatic selection of the appropriate decomposition level by means of Shannon entropy. The entropy function was used to identify the resolution level that provides the most information about defects in real textures. For this purpose, the intensity levels of the subimages of the wavelet transform were considered as random samples. The concept of information entropy—Shannon entropy—describes how much randomness (or uncertainty) there is in a signal or an image; in other words, how much information is provided by the signal or image. In terms of physics, the greater the information entropy of the image is, the higher its quality will be [28].Figure5 shows how the texture pattern degrades as the decomposition level increases. This degradation is distributed among the different decomposition levels depending on the texture nature and can be quantified by means of the Shannon entropy.Approximation subimages(fLL(j)) of four wavelet decomposition levels for different images ((a) H225, (b) H34, (c) H241, (d) H137, and (e) H10) from portions of ship’s hulls.
(a)
(b)
(c)
(d)
(e)The Shannon entropy function [28, 29] is calculated according to the expression
(1)s(X)=-∑i=1Tp(xi)logp(xi),
where X={x1,x2,…,xT} is a set of random variables with T outcomes and p(xi) is the probability of occurrence associated with xi.For a 256-gray-level image of sizeNt pixels, we define a set of random variables X={x1,x2,…,xi,…,x256} as the number of pixels in the image that have gray level i. The probability of this random variable xi is calculated as the number of occurrences, hist[xi], divided by the total number of pixels, Nt(2)p(xi)=hist[xi]Nt.To calculate the value of the Shannon entropy on the approximation subimage (fLL(j)(x,y)) and on the horizontal, vertical and diagonal detail subimages (fLH(j)(x,y),fHL(j)(x,y), and fHH(j)(x,y)) in each decomposition level j, we obtain first the inverse wavelet transform of every subimage and then we apply (3)
(3)sLLj=s[W-1[fLL(j)(x,y)]],sLHj=s[W-1[fLH(j)(x,y)]],sHLj=s[W-1[fHL(j)(x,y)]],sHHj=s[W-1[fHH(j)(x,y)]].The normalized entropy of each subimage, for a decomposition levelj, has been calculated as
(4)Ssj=1Npixelsj∑x∑ysLLj(x,y),Shj=1Npixelsj∑x∑ysLHj(x,y),Svj=1Npixelsj∑x∑ysHLj(x,y),Sdj=1Npixelsj∑x∑ysHHj(x,y),
where Npixelsj is the number of pixels at each decomposition level j. Table 1 shows the values for Shannon entropy calculated for images of Figure 5.Table 1
Normalized entropies of four decomposition levels for textures of Figure5.
Decomposition level,j
S
s
j
S
h
j
S
v
j
S
d
j
H225
j
=
1
0.00011
0.00008
0.00007
0.00004
j
=
2
0.00048
0.00042
0.00042
0.00037
j
=
3
0.00200
0.00172
0.00168
0.00179
j
=
4
0.00796
0.00750
0.00681
0.00716
H34
j
=
1
0.00011
0.00007
0.00008
0.00006
j
=
2
0.00046
0.00038
0.00039
0.00038
j
=
3
0.00185
0.00179
0.00169
0.00186
j
=
4
0.00732
0.00750
0.00662
0.00698
H241
j
=
1
0.00011
0.00007
0.00007
0.00005
j
=
2
0.00046
0.00038
0.00037
0.00036
j
=
3
0.00194
0.00159
0.00151
0.00164
j
=
4
0.00763
0.00669
0.00640
0.00643
H137
j
=
1
0.00011
0.00007
0.00006
0.00004
j
=
2
0.00047
0.00028
0.00027
0.00031
j
=
3
0.00191
0.00127
0.00132
0.00118
j
=
4
0.00726
0.00593
0.00581
0.00524
H10
j
=
1
0.00013
0.00004
0.00004
0.00002
j
=
2
0.00057
0.00035
0.00037
0.00025
j
=
3
0.00214
0.00190
0.00182
0.00140
j
=
4
0.00841
0.00711
0.00737
0.00665Shannon entropy brings us information about the amount of texture pattern that remains after every decomposition level. Considering (2), entropy provides a measurement of the histogram distribution; the higher the entropy the greater the histogram uniformity; that is, a greater amount of texture pattern is contained in the image. As the decomposition level increases, the texture pattern is being removed; that is, the information content decreases; so the histogram distribution gains uniformity. An optimal reconstruction scheme would eliminate the texture pattern, without loss of defect information. To determine this optimal decomposition level we use a ratio Rj (see (5)) between the entropy of the approximation subimage and the sum of the entropies for all detail subimages, so Rj indicates how much information about the texture pattern is contained in decomposition level j. Variations in this ratio allow detecting changes in the amount of information about the texture pattern between two consecutive decomposition levels
(5)Rj=SsjSsj+Shj+Svj+Sdj,j=1,2,….The goal is to find the optimal decomposition level which provides the maximum variation among two consecutiveRj values because this indicates that, in decomposition level j, the texture pattern still present in level j-1 has been removed, keeping useful information (defects).For this purpose we defineADRj as the difference between two consecutive Rj values (see (6)). The optimal decomposition level J* is calculated as the value of j for which ADRj takes a maximum value. This maximum value points out the greatest variation of information content among two consecutive decomposition levels, which means that both decomposition levels are sufficiently separated in terms of texture pattern information content, and the decomposition process should end. For decomposition levels j<J*, ADRj indicates that significant texture pattern information still remains in the approximation subimage, and the decomposition process should continue. For decomposition levels j>J*; ADRj indicates that the approximation subimage is oversmoothed, and the reconstruction result from such smooth approximation subimage will cause defect loss
(6)ADRj={0j=1Rj-Rj-1j=2,…},J*=arg{maxj(|ADRj|)}.Table2 shows values for Rj coefficients at every image decomposition level (j) for the different textures shown in Figure 5, together with the ADRj values.Table 2
R
j, ADRj, and optimal decomposition level obtained for texture images of Figure 5.
Image
Level (j)
1
2
3
4
J
*
H225
R
j
0.3785
0.2812
0.2778
0.2704
2
ADR
j
0
0.0973
0.0035
0.0074
H34
R
j
0.3460
0.2852
0.2576
0.2575
2
ADR
j
0
0.0608
0.0276
0.0002
H241
R
j
0.3626
0.2931
0.2899
0.2811
2
ADR
j
0
0.0695
0.0032
0.0088
H137
R
j
0.388067
0.353254
0.336133
0.299523
4
ADR
j
0
0.034813
0.017121
0.036610
H10
R
j
0.545018
0.369480
0.294868
0.284585
2
ADR
j
0
0.175538
0.074612
0.010283Once the optimal decomposition level is obtained, the process ends with the production of the reconstructed image using (7)
(7)F(x,y)=W-1[fLL(j)(x,y)].
### 4.2. Smoothing Mask
To remove the noise running through the successive decomposition levels, we applied average-based smoothing over imageF(x,y) to obtain F′(x,y) as shown in (8)
(8)F′(x,y)=1k2∑i=0k-1∑j=0k-1F(x-⌊k2⌋+i,y-⌊k2⌋+j),
where k is the size of the smoothing mask (see Figure 6).Figure 6
Smoothing mask (k=3) for the wavelet coefficients.
## 4.1. Automatic Selection of the Appropriate Decomposition Level
In this work we propose a novel approach for the automatic selection of the appropriate decomposition level by means of Shannon entropy. The entropy function was used to identify the resolution level that provides the most information about defects in real textures. For this purpose, the intensity levels of the subimages of the wavelet transform were considered as random samples. The concept of information entropy—Shannon entropy—describes how much randomness (or uncertainty) there is in a signal or an image; in other words, how much information is provided by the signal or image. In terms of physics, the greater the information entropy of the image is, the higher its quality will be [28].Figure5 shows how the texture pattern degrades as the decomposition level increases. This degradation is distributed among the different decomposition levels depending on the texture nature and can be quantified by means of the Shannon entropy.Approximation subimages(fLL(j)) of four wavelet decomposition levels for different images ((a) H225, (b) H34, (c) H241, (d) H137, and (e) H10) from portions of ship’s hulls.
(a)
(b)
(c)
(d)
(e)The Shannon entropy function [28, 29] is calculated according to the expression
(1)s(X)=-∑i=1Tp(xi)logp(xi),
where X={x1,x2,…,xT} is a set of random variables with T outcomes and p(xi) is the probability of occurrence associated with xi.For a 256-gray-level image of sizeNt pixels, we define a set of random variables X={x1,x2,…,xi,…,x256} as the number of pixels in the image that have gray level i. The probability of this random variable xi is calculated as the number of occurrences, hist[xi], divided by the total number of pixels, Nt(2)p(xi)=hist[xi]Nt.To calculate the value of the Shannon entropy on the approximation subimage (fLL(j)(x,y)) and on the horizontal, vertical and diagonal detail subimages (fLH(j)(x,y),fHL(j)(x,y), and fHH(j)(x,y)) in each decomposition level j, we obtain first the inverse wavelet transform of every subimage and then we apply (3)
(3)sLLj=s[W-1[fLL(j)(x,y)]],sLHj=s[W-1[fLH(j)(x,y)]],sHLj=s[W-1[fHL(j)(x,y)]],sHHj=s[W-1[fHH(j)(x,y)]].The normalized entropy of each subimage, for a decomposition levelj, has been calculated as
(4)Ssj=1Npixelsj∑x∑ysLLj(x,y),Shj=1Npixelsj∑x∑ysLHj(x,y),Svj=1Npixelsj∑x∑ysHLj(x,y),Sdj=1Npixelsj∑x∑ysHHj(x,y),
where Npixelsj is the number of pixels at each decomposition level j. Table 1 shows the values for Shannon entropy calculated for images of Figure 5.Table 1
Normalized entropies of four decomposition levels for textures of Figure5.
Decomposition level,j
S
s
j
S
h
j
S
v
j
S
d
j
H225
j
=
1
0.00011
0.00008
0.00007
0.00004
j
=
2
0.00048
0.00042
0.00042
0.00037
j
=
3
0.00200
0.00172
0.00168
0.00179
j
=
4
0.00796
0.00750
0.00681
0.00716
H34
j
=
1
0.00011
0.00007
0.00008
0.00006
j
=
2
0.00046
0.00038
0.00039
0.00038
j
=
3
0.00185
0.00179
0.00169
0.00186
j
=
4
0.00732
0.00750
0.00662
0.00698
H241
j
=
1
0.00011
0.00007
0.00007
0.00005
j
=
2
0.00046
0.00038
0.00037
0.00036
j
=
3
0.00194
0.00159
0.00151
0.00164
j
=
4
0.00763
0.00669
0.00640
0.00643
H137
j
=
1
0.00011
0.00007
0.00006
0.00004
j
=
2
0.00047
0.00028
0.00027
0.00031
j
=
3
0.00191
0.00127
0.00132
0.00118
j
=
4
0.00726
0.00593
0.00581
0.00524
H10
j
=
1
0.00013
0.00004
0.00004
0.00002
j
=
2
0.00057
0.00035
0.00037
0.00025
j
=
3
0.00214
0.00190
0.00182
0.00140
j
=
4
0.00841
0.00711
0.00737
0.00665Shannon entropy brings us information about the amount of texture pattern that remains after every decomposition level. Considering (2), entropy provides a measurement of the histogram distribution; the higher the entropy the greater the histogram uniformity; that is, a greater amount of texture pattern is contained in the image. As the decomposition level increases, the texture pattern is being removed; that is, the information content decreases; so the histogram distribution gains uniformity. An optimal reconstruction scheme would eliminate the texture pattern, without loss of defect information. To determine this optimal decomposition level we use a ratio Rj (see (5)) between the entropy of the approximation subimage and the sum of the entropies for all detail subimages, so Rj indicates how much information about the texture pattern is contained in decomposition level j. Variations in this ratio allow detecting changes in the amount of information about the texture pattern between two consecutive decomposition levels
$$R_j = \frac{S_s^j}{S_s^j + S_h^j + S_v^j + S_d^j}, \qquad j = 1,2,\ldots. \qquad (5)$$

The goal is to find the optimal decomposition level, which provides the maximum variation between two consecutive $R_j$ values, because this indicates that, at decomposition level $j$, the texture pattern still present at level $j-1$ has been removed while keeping useful information (defects).

For this purpose we define $ADR_j$ as the difference between two consecutive $R_j$ values (see (6)). The optimal decomposition level $J^*$ is calculated as the value of $j$ for which $|ADR_j|$ takes its maximum value. This maximum points out the greatest variation of information content between two consecutive decomposition levels, which means that both levels are sufficiently separated in terms of texture pattern information content, and the decomposition process should end. For decomposition levels $j<J^*$, $ADR_j$ indicates that significant texture pattern information still remains in the approximation subimage, and the decomposition process should continue. For decomposition levels $j>J^*$, $ADR_j$ indicates that the approximation subimage is oversmoothed, and the reconstruction from such a smooth approximation subimage will cause defect loss:
$$ADR_j = \begin{cases} 0, & j = 1, \\ R_j - R_{j-1}, & j = 2,\ldots, \end{cases} \qquad J^* = \arg\max_j\left(\left|ADR_j\right|\right). \qquad (6)$$

Table 2 shows the values of the $R_j$ coefficients at every image decomposition level $j$ for the different textures shown in Figure 5, together with the $ADR_j$ values.

Table 2: $R_j$, $ADR_j$, and optimal decomposition level obtained for the texture images of Figure 5.

| Image | | $j=1$ | $j=2$ | $j=3$ | $j=4$ | $J^*$ |
|---|---|---|---|---|---|---|
| H225 | $R_j$ | 0.3785 | 0.2812 | 0.2778 | 0.2704 | 2 |
| | $ADR_j$ | 0 | 0.0973 | 0.0035 | 0.0074 | |
| H34 | $R_j$ | 0.3460 | 0.2852 | 0.2576 | 0.2575 | 2 |
| | $ADR_j$ | 0 | 0.0608 | 0.0276 | 0.0002 | |
| H241 | $R_j$ | 0.3626 | 0.2931 | 0.2899 | 0.2811 | 2 |
| | $ADR_j$ | 0 | 0.0695 | 0.0032 | 0.0088 | |
| H137 | $R_j$ | 0.388067 | 0.353254 | 0.336133 | 0.299523 | 4 |
| | $ADR_j$ | 0 | 0.034813 | 0.017121 | 0.036610 | |
| H10 | $R_j$ | 0.545018 | 0.369480 | 0.294868 | 0.284585 | 2 |
| | $ADR_j$ | 0 | 0.175538 | 0.074612 | 0.010283 | |

Once the optimal decomposition level is obtained, the process ends with the production of the reconstructed image using (7):
$$F(x,y) = W^{-1}\left[f_{LL}^{(J^*)}(x,y)\right]. \qquad (7)$$
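To make the level-selection procedure concrete, the following is a minimal Python sketch of equations (1)–(7) using NumPy and PyWavelets. It assumes a Haar mother wavelet, as used in Section 5.1; the histogram binning over the reconstructed subimages and the per-level normalisation by the approximation size are our reading of (3)–(4), not details fixed by the text.

```python
import numpy as np
import pywt  # PyWavelets

def shannon_entropy(img, bins=256):
    """Eqs. (1)-(2): Shannon entropy of the grey-level histogram of an image."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / img.size        # zero-count bins contribute nothing
    return -np.sum(p * np.log(p))

def subband_entropies(image, wavelet='haar', levels=4):
    """Eqs. (3)-(4): entropy of each subband, inverse-transformed alone and
    normalised by the pixel count of its decomposition level."""
    S, approx = [], np.asarray(image, dtype=float)
    for _ in range(levels):
        cA, (cH, cV, cD) = pywt.dwt2(approx, wavelet)
        z = np.zeros_like(cA)
        row = {}
        for key, sub in {'s': (cA, (z, z, z)), 'h': (z, (cH, z, z)),
                         'v': (z, (z, cV, z)), 'd': (z, (z, z, cD))}.items():
            rec = pywt.idwt2(sub, wavelet)    # W^-1 of the isolated subband
            row[key] = shannon_entropy(rec) / cA.size
        S.append(row)
        approx = cA                           # keep decomposing the approximation
    return S

def optimal_level(S):
    """Eqs. (5)-(6): ratios R_j, differences ADR_j, J* = argmax_j |ADR_j|."""
    R = [r['s'] / (r['s'] + r['h'] + r['v'] + r['d']) for r in S]
    ADR = [0.0] + [R[j] - R[j - 1] for j in range(1, len(R))]
    return int(np.argmax(np.abs(ADR))) + 1    # levels are numbered from 1

def reconstruct(image, J, wavelet='haar'):
    """Eq. (7): zero all detail coefficients and invert from level J."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=J)
    coeffs[1:] = [tuple(np.zeros_like(c) for c in d) for d in coeffs[1:]]
    return pywt.waverec2(coeffs, wavelet)
```

For a greyscale image `img`, `reconstruct(img, optimal_level(subband_entropies(img)))` then plays the role of $F(x,y)$ in (7).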
### 4.2. Smoothing Mask
To remove the noise running through the successive decomposition levels, we applied average-based smoothing over the image $F(x,y)$ to obtain $F'(x,y)$, as shown in (8):

$$F'(x,y) = \frac{1}{k^2}\sum_{i=0}^{k-1}\sum_{j=0}^{k-1} F\left(x-\left\lfloor \frac{k}{2}\right\rfloor + i,\; y-\left\lfloor \frac{k}{2}\right\rfloor + j\right), \qquad (8)$$
where $k$ is the size of the smoothing mask (see Figure 6).

Figure 6: Smoothing mask ($k=3$) for the wavelet coefficients.
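Equation (8) is a plain $k\times k$ box (average) filter. A one-line sketch follows, assuming an odd mask size and using SciPy's `uniform_filter` for the window sum; the border-handling mode is our choice, since (8) leaves the borders unspecified.

```python
from scipy.ndimage import uniform_filter

def smooth(F, k=3):
    """Eq. (8): k x k average filter over the reconstructed image F."""
    # 'nearest' border replication is an assumption; eq. (8) does not fix it.
    return uniform_filter(F, size=k, mode='nearest')
```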
## 5. Results
### 5.1. Algorithm Implementation
The proposed computer vision algorithm was implemented as shown in Pseudocode 1, using the C++ programming language. The mother wavelet used for decomposition was the Haar basis function with two coefficients, applied up to a fourth decomposition level. A decomposition level higher than four produced fusion between defects and background, thus reducing the probability of defect detection.

Pseudocode 1: Pseudocode to implement the developed algorithm.
Step 1. Compute the Shannon entropies $S_s^j$, $S_h^j$, $S_v^j$, and $S_d^j$.
Step 2. Compute $R_j = S_s^j/(S_s^j+S_h^j+S_v^j+S_d^j)$ for $j=1,2,3,\ldots,J$.
Step 3. Compute the optimal decomposition level $J^* = \arg\max_j\{|ADR_j|\}$, $j=1,2,3,\ldots,J$.
Step 4. Compute $F = W^{-1}[f_{LL}^{(J^*)}]$.
Step 5. Compute $F' = m_{k\times k}[F]$.
Step 6. Binarize $F'$.
### 5.2. Implementation of the Computer Vision System
The computer vision system for visual inspection of ship hull surfaces (Figure 2) has been implemented on a Pentium computer with a Meteor II/1394 card. This card is connected to the microprocessor via a PCI bus and is used as a frame-grabber. For that purpose the card has a processing node based on the TMS320C80 DSP from Texas Instruments and the Matrox NOA ASIC. In addition, the card has a FireWire (IEEE 1394) input/output bus which enables it to control a half-inch digital colour camera (15 fps, 1024 × 768 square pixels) equipped with a wide-angle lens (f = 4.2 mm).

The software development environment used to implement the system software modules was the Visual C++ programming language powered by the Matrox Imaging Library v9.0. The system also has a Siemens CP5611 card which acts as a PROFIBUS-DP interface for connection with the corresponding robotized blasting system. A Honeywell sensor is used to measure the distance to the ship by ultrasound, with a range of 200–2000 mm and an output of 4–20 mA. User access to the computer vision system is by means of an industrial PDA (Mobic T8 from Siemens) and a wireless access point. Among other functions, the software that has been developed allows the operator to (1) enter the system configuration parameters, (2) visualize the detected areas to blast for validation by the operator before blasting commences, and (3) calibrate the computer vision system.
### 5.3. Validation Environment
The proposed computer vision algorithm was assessed at the NAVANTIA shipyard in Ferrol (Spain) on a robotized system used for automatic spot blasting. This operation accounts for 70% of all cleaning work carried out at that shipyard. The robotized system (Figure 7) consists of a mechanical structure divided into two parts: primary and secondary. The primary structure holds the secondary structure (an XYZ table), which supports the cleaning head and the computer vision system. More information regarding this system can be found in [5].

Figure 7: Robotized blasting system.

With the help of this platform, 260 images of ship hull surfaces (with and without defects) were taken, similar to those shown in Figure 3. In this way a catalogue was compiled of typical surface defects as they appear before grit blasting.
### 5.4. Metrics
To conduct a quantitative analysis of the quality of the proposed segmentation method, we need to use the metrics best suited to that purpose. The performance of image segmentation methods has been assessed by authors such as Zhang [30] and Sezgin and Sankur [16], who proposed various metrics for measuring the quality of a given segmentation method using parameters such as pixel position, area, and edges. Out of these, one of the quantitative appraisal methods proposed by Sezgin was selected and examined: the Misclassification Error (ME). ME represents the percentage of background pixels that are incorrectly allocated to the object (i.e., to the foreground), or vice versa:

$$\mathrm{ME} = 1 - \frac{|B_P \cap B_T| + |O_P \cap O_T|}{|B_P| + |O_P|}, \qquad (9)$$

where $B_P$ (background pattern) and $O_P$ (object pattern) represent the background and the object of the pattern (reference) image, and $B_T$ (background test) and $O_T$ (object test) represent those of the image to be assessed. If the test image coincides with the pattern image, the classification error is zero and the performance of the segmentation is maximal. The performance of the implemented algorithms is assessed according to

$$\eta = 100\cdot(1-\mathrm{ME}). \qquad (10)$$
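Both metrics reduce to a few array operations. A minimal sketch, assuming the binarised images are given as boolean NumPy arrays with `True` marking object (defect) pixels:

```python
import numpy as np

def misclassification_error(test, pattern):
    """Eq. (9): ME between a binarised test image and the reference pattern.
    Both arguments are boolean arrays; True marks object (defect) pixels."""
    O_P, B_P = pattern, ~pattern        # object/background of the reference
    O_T, B_T = test, ~test              # object/background of the test image
    agree = np.sum(B_P & B_T) + np.sum(O_P & O_T)
    return 1.0 - agree / (B_P.sum() + O_P.sum())

def performance(test, pattern):
    """Eq. (10): segmentation performance in percent."""
    return 100.0 * (1.0 - misclassification_error(test, pattern))
```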
### 5.5. Algorithm Appraisal
The proposed visual inspection algorithm (see Pseudocode 1) was applied to the above-mentioned catalogue taken at the shipyard (some samples are shown in column (i) of Figure 8). The Shannon entropy was calculated and normalized for four wavelet decomposition levels, and the optimal level $J^*$ was calculated using (6). The images were also processed with the algorithms proposed by Han and Shi [10] and Tsai and Chiang [19]. The result was 3 sets of 260 reconstructed images in which the defects have been isolated from the texture. To check the quality of the defect detection algorithms we concluded with a binarization stage. For that purpose we selected Kapur's method [21], which belongs to the group of entropy-based methods in the classification given by Sezgin and Sankur [16] in their review of thresholding methods; this resulted in 3 sets of 260 binarized images (column (ii) of Figure 8 shows some results obtained with the proposed algorithm, column (iii) shows results obtained with the Tsai algorithm, and column (iv) shows results obtained with the Han algorithm).

Figure 8: Column (i) shows texture images of portions of hulls; column (ii) shows reconstructed images resulting from the proposed reconstruction scheme; column (iii) shows reconstructed images resulting from the Tsai algorithm; column (iv) shows reconstructed images resulting from the Han algorithm; column (v) shows defects segmented by hand, the "ground truth." Rows: (a) H225, (b) H34, (c) H241, (d) H1, (e) H120, (f) H121, (g) H99, (h) H137, (i) H48, (j) H9, (k) H10, and (l) H11.
To apply the metrics described above, human inspectors segmented each of the catalogue images manually (samples are shown in column (v) of Figure 8). Table 3 shows the performance ($\eta$) when the sample texture images of Figure 8 were segmented using the three algorithms.
Table 3: $\eta$ in defect segmentation of the texture images of Figure 8.

| | Entropy based | $J^*$ | Tsai | $J^*$ | Han | $J^*$ |
|---|---|---|---|---|---|---|
| **Defect samples** | | | | | | |
| H225 | 97.09 | 2 | 94.51 | 4 | 2.14 | 4 |
| H34 | 97.41 | 2 | 74.52 | 2 | 3.43 | 4 |
| H241 | 96.04 | 2 | 95.17 | 4 | 97.41 | 2 |
| H1 | 86.50 | 3 | 82.63 | 4 | 84.69 | 4 |
| H20 | 91.40 | 3 | 89.22 | 4 | 90.08 | 4 |
| H121 | 89.36 | 4 | 89.91 | 4 | 90.23 | 4 |
| H99 | 89.30 | 4 | 73.76 | 4 | 91.46 | 3 |
| H137 | 93.94 | 4 | 82.87 | 4 | 94.26 | 3 |
| H48 | 93.06 | 4 | 93.97 | 4 | 64.23 | 4 |
| Average on defect samples | 92.68 | 3.11 | 86.28 | 3.78 | 68.66 | 3.56 |
| **Nondefect samples** | | | | | | |
| H9 | 98.30 | 2 | 74.04 | 3 | 99.61 | 4 |
| H10 | 98.33 | 2 | 80.85 | 4 | 48.05 | 4 |
| H11 | 98.22 | 2 | 89.06 | 4 | 48.65 | 4 |
| Average on nondefect samples | 98.28 | 2.00 | 81.32 | 3.67 | 65.44 | 4.00 |
| Total average | 95.48 | 2.56 | 83.80 | 3.72 | 67.05 | 3.78 |

As can be observed from the above results, the proposed entropy-based algorithm achieved better results than the Tsai algorithm and significantly better results than the Han algorithm. In both cases the proposed algorithm obtains higher performance with a lower decomposition level.

We also analysed the behaviour of the proposed algorithm in terms of misclassification rates. A set of 120 images was processed by the proposed algorithm and also by the Han and Tsai algorithms. The results were then analysed by a skilled blasting operator, who assessed which portions of the hull surface shown would be blasted in real conditions at the repair yard. Table 4 shows the average percentage of defect points classified as Type I and Type II errors for 120 samples of the 260-image set indicated above.
Table 4: Automated inspection examined by a skilled blasting operator.

| | Entropy-based algorithm | Han algorithm | Tsai algorithm |
|---|---|---|---|
| Type I error | 6.8% | 9.2% | 11.1% |
| Type II error | 0.9% | 1.1% | 0.7% |

As can be seen, the proposed algorithm produced better results as regards false positives, that is, points marked as defective when they are not (Type I error). This is essentially because the operator tends to blast larger areas than necessary and, moreover, is less able to control the cut-off of the grit jet. On the other hand, the proposed algorithm identified a similar rate of false negatives (Type II error). This difference was not very significant and is quite acceptable in view of the clear advantage offered by the computer vision system equipped with the proposed inspection algorithm as regards Type I errors.
## 6. Conclusions
This paper has presented a computer vision algorithm based on the wavelet transform which provides a robust method for detecting defects in ship hull surfaces. To achieve this, we used an image reconstruction approach based on automatic selection of the optimal wavelet transform resolution level by means of a novel use of the Shannon entropy, calculated on the different detail subimages.

The algorithm has been incorporated into a computer vision system that controls a robotized system for blasting ship hulls, making it possible to fully automate the grit blasting operation. The results as regards reliability were very similar to those achieved with human workers, while inspection was faster (by between 8% for flat surfaces in oil tankers and 15% for shaped hulls such as frigates) and the consequences of operator fatigue were minimized.
---
*Source: 101837-2013-05-27.xml* | 2013 |
# Dynamic Adjustment Optimisation Algorithm in 3D Directional Sensor Networks Based on Spherical Sector Coverage Models
**Authors:** Xiaochao Dang; Chenguang Shao; Zhanjun Hao
**Journal:** Journal of Sensors
(2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1018434
---
## Abstract
In directional sensor networks research, target event detection is currently an active research area, with applications in underwater target monitoring, forest fire warnings, border areas, and other important activities. Previous studies have often discussed target coverage in two-dimensional sensor networks, but these studies cannot be extensively applied to three-dimensional networks. Additionally, most of the previous target coverage detection models are based on a circular or omnidirectional sensing model. More importantly, if the directional sensor network does not design a better coverage algorithm in the coverage-monitoring process, its nodes’ energy consumption will increase and the network lifetime will be significantly shortened. With the objective of addressing three-dimensional target coverage in applications, this study proposes a dynamic adjustment optimisation algorithm for three-dimensional directional sensor networks based on a spherical sector coverage model, which improves the lifetime and coverage ratio of the network. First, we redefine the directional nodes’ sensing model and use the three-dimensional Voronoi method to divide the regions where the nodes are located. Then, we introduce a correlation force between the target and the sensor node to optimise the algorithm’s coverage mechanism, so that the sensor node can accurately move to the specified position for target coverage. Finally, by verifying the feasibility and accuracy of the proposed algorithm, the simulation experiments demonstrate that the proposed algorithm can effectively improve the network coverage and node utilisation.
---
## Body
## 1. Introduction
A three-dimensional (3D) wireless sensor network (WSN) consists of several tiny, battery-powered sensors that can communicate with each other to monitor a 3D field of interest (FOI) [1] for target events. WSNs include omnidirectional sensor networks and directional sensor networks (DSNs). Research into WSN coverage is roughly classified into three branches: area coverage, barrier coverage, and target coverage. In recent years, WSN coverage has been an active research area with a wide range of practical applications: target detection [2], healthcare applications [3], target location [4], data transmission [5], etc. In these real-world applications, we can detect target events in the region of interest by deploying sensor nodes. Therefore, the use of existing methods and techniques to achieve effective event detection is the current research focus. At the same time, improving multiple objectives (e.g., reducing the network's overall energy consumption while ensuring a high coverage ratio) is an indispensable consideration.

In most previous studies, researchers have discussed and presented solutions for 2D coordinate systems under realistic conditions to reduce the difficulty, and they have made great progress [6–8]. However, modelling and studying DSN coverage are still less common in 3D systems than in 2D systems; not only does the difficulty of the research increase in 3D systems, but deployed sensor nodes often encounter complex environmental influences (e.g., weather and climate). In recent years, some researchers have established models for 3D WSNs and proposed corresponding distributed optimisation algorithms [9–11]. However, the WSN node coverage model is mainly based on the 2D omnidirectional sensing model, and a large part of the research in 3D systems is based on the omnidirectional ball sensing model. While the omnidirectional sensing model can provide better range and node utilisation for area coverage, in practice we only require modest energy and nodes with limited directional detection to achieve target coverage for a set of targets or special events. Therefore, 3D DSN coverage research is better suited to these conditions.

Of course, a directional sensor not only needs to consider its own position and sensing range (as with an omnidirectional sensor) but must also consider the angle adjustment problem. Furthermore, when nodes are deployed randomly, their coverage cannot be accurate and some omissions will occur. Therefore, in a specific environment, we need a dynamic algorithm to select the optimal number of active nodes to detect the target [12]. At the same time, we need to consider moving or rotating these active nodes within a certain period to adjust their headings and achieve the best coverage. For example, in [8], the use of unattended sensor networks for detecting targets using energy-efficient methods has been discussed. The authors analyse the trade-offs between power consumption and quality of service in WSNs in terms of detection capability and latency. In [13], the authors propose a hybrid movement strategy (HMS) to solve the problem of high energy consumption (resulting from mobility) and improve the coverage ratio of DSNs.
Although the methods proposed above can reduce the energy consumption of the network and improve the coverage ratio, the rotation angle of a 3D directional sensor node is difficult to determine; the increased dimensionality brings further complications. Therefore, we propose a network model suitable for directional sensors and a related dynamic adjustment optimisation algorithm for 3D systems. We first design a sensing model that is more suitable for 3D DSNs and allows us to quantify the rotation angle of the node. Secondly, to achieve accurate coverage, we extend the traditional 2D Voronoi division and apply it to 3D DSNs. We also use theory and experimentation to verify the algorithm to further reduce network energy consumption. Finally, we design experimental simulations and perform algorithm comparisons to further analyse our algorithm's effectiveness. Our main contributions are highlighted as follows:

(i) We are the first to propose a spherical sector sensing model for 3D DSNs that quantifies the rotation angle, in combination with a 3D Voronoi method [14] that divides the space using the sensors' positions.

(ii) We design a synergistic priority coverage mechanism to reduce the moving distance of nodes, thereby reducing excessive energy consumption while guaranteeing a high coverage ratio for the sensor network.

(iii) We optimise the traditional virtual force algorithm to suit practical conditions, and we perform a full theoretical analysis and experimental comparative analysis of the proposed algorithm to verify its validity and accuracy.

The remainder of this paper is organised as follows. In Section 2, the research progress and related work on DSNs in recent years are summarised. In Section 3, the DSN coverage model and sensing angle are described and the relevant definitions are provided. After this, we compare the differences between 2D and 3D Voronoi partitions and give the 3D Voronoi partition theory in Section 4. We then show how we have designed and improved the relevant algorithms and provide the design steps in Section 5. In Section 6, we describe the simulations and experiments we performed on the algorithm and compare it with other algorithms for analysis. Conclusions and future work are discussed in the final section.
## 2. Related Works
In recent years, research on DSNs has been carried out mainly on 2D planes. For example, in [15], the authors propose a cluster head- (CH-) based distributed target coverage algorithm to solve a Maximum Coverage with Minimum Sensors (MCMS) problem. The authors also designed distributed clustering and target coverage algorithms to reduce network energy consumption. Subsequently, in [12], they designed a target coverage algorithm for DSNs in an energy-saving manner based on [15], through the distributed clustering mechanism. The authors improved the distributed algorithm of [15] to use the CH approach and ensure that it is used appropriately to enhance DSN target coverage. In [16], the authors propose a new method (based on particle swarm optimisation) to maximise coverage for 2D regions. This algorithm allows a directional sensor node to constantly adjust its sensing direction to provide the best coverage. However, most of the above studies map 3D sensor coverage problems into 2D for discussion, so they cannot be applied directly in three dimensions. Therefore, we need to consider not only the dimensionality but also a node sensing model that matches the dimension of the actual environment.

In addition, the nodes are often distributed randomly in the monitoring area. Reducing the deployment cost, while using the limited node energy for efficient coverage, has become an active research topic. In [17], the authors point out that motility and mobility are essential for DSN nodes to minimise occlusion effects and coverage overlap. At the same time, motility is superior to mobility in terms of network cost and energy efficiency. Therefore, almost all research aims to solve coverage problems through motility.

In practice, however, there are still some coverage holes that can only be addressed through mobility. For example, the authors in [18] use the directionality of the orientation sensor, rotating it to locate periodic detection objects. They developed an event monitoring system that uses a maximum coverage deployment (MCD) heuristic iteration to deploy sensors to cover targets. However, we must not only consider the direction of the orientation sensor (i.e., the change or rotation of its sensing angle) to enable efficient deployment; we must also consider that the orientation sensor can move to fill coverage holes in the monitoring area (i.e., DSN nodes can be moved). Therefore, [13] proposes the HMS to address the high energy consumption of directional sensor movement. The authors use a cascading method to adjust the coverage of the DSNs, effectively reducing network energy consumption. In [19], the authors propose an algorithm based on learning automata to address the orientation sensor network's coverage quality requirements and to maximise the network lifetime (i.e., priority-based target coverage). The algorithm divides the DSNs into several coverage sets so that each coverage set can meet the coverage quality requirements of all targets. Thus, it effectively extends the network lifetime.

In [13, 18, 19], the authors have addressed the problem of mobile energy consumption well, but these solutions are validated on 2D planes and are not suitable for 3D environments. Therefore, the studies in [20–22] have successively proposed orientation sensor models and algorithms for 3D coordinate systems.
For example, in [21] the authors studied low-power green communication in 3D DSNs and proposed the space-time coverage optimisation scheduling (STCOS) algorithm to obtain the maximum network coverage. In [22], the authors propose a network coverage enhancement algorithm based on an artificial fish swarm algorithm to improve the coverage rate. However, the authors only optimised the angle of the sensor and did not solve the mobility problem of the directional sensor. In [23], the authors propose prescheduling-based k-coverage group scheduling (PSKGS) and self-organised k-coverage scheduling (SKS) algorithms to reduce the cost of the algorithm and ensure the effective monitoring of node quality. The experimental results show that PSKGS improves monitoring quality and that the SKS algorithm reduces the node's computation and communication costs.

In addition, the special geometric properties of the Voronoi diagram have been applied in many aspects of WSN coverage. In [24], the authors propose Voronoi-based centralised approximation (VCA) and Voronoi-based distributed approximation (VDA) algorithms for optimal coverage in DSNs. The authors experimentally verified that the two algorithms can reduce coverage overlap and achieve a higher coverage rate. In [25], the authors combine the special geometric features of the 2D Voronoi diagram with real-time response to dynamic environment changes and propose a distributed greedy algorithm that can select and adjust the intracellular sensing direction based on coverage (IDS&IDA). Clearly, research on 2D Voronoi algorithms has shown good results, but they are rarely applied in three dimensions.

Therefore, based on the typical literature [14, 25], this paper improves and extends the Voronoi method, making it suitable for 3D DSN target coverage. We propose a dynamic adjustment optimisation algorithm for 3D DSNs based on a spherical sector coverage model. This algorithm can maximise coverage and improve network lifetime by adjusting the direction and specific movements of nodes in the DSNs. In the subsequent experimental verification section, we discuss the proposed algorithm and compare it with other algorithms.
## 3. Network Coverage Model and Angle Quantification Method
### 3.1. Network Coverage Model
First, we assume that the sensing model of the sensor node is contained in a sphere centred at the node's position $o_i(x_i,y_i,z_i)$, whose sensing range $R_s$ is the maximum detection distance. Initially, it is assumed that sensor nodes are randomly scattered in an $L^3$ target area, and the set of nodes is $\{s_1,s_2,\ldots,s_n\}$. $R_c$ represents the communication radius of a node; when the Euclidean distance between two nodes $s_i$ and $s_j$ satisfies $d(s_i,s_j)<R_c$, we call them neighbour nodes [26]. In traditional 2D studies, most researchers model the sensor nodes as 2D planar fans to achieve coverage optimisation. In some related 3D research fields, the node's sensing range is abstracted into a rounded cone coverage model. However, the coverage model of a 3D directional sensor should be obtained by rotating a planar fan with radius $R_s$ and central angle $2\theta$ around its axis of symmetry, as shown in Figure 1. Therefore, we define the directional node's sensing range as a spherical sector sensing model. As shown in Figure 1, the spherical sector $O{-}A_1B_1C_1$ represents the coverage model of the directional sensor. When $2\theta=360^\circ$, its coverage matches that of an omnidirectional sensor node. Therefore, the spherical sector network model redefined in this paper is more suitable for modelling the coverage of 3D sensor nodes.

Figure 1: Spherical sector sensing model.

Initially, sensor nodes are randomly scattered in the target monitoring area, which may result in an uneven node distribution, excessive node energy consumption, and duplicate or missing coverage for some targets. In Figure 2, the grey dots indicate targets that need to be covered, and the three spherical sectors represent sensor coverage. Some of the targets in Figure 2 are not completely covered. Therefore, the sensor network may also have omission problems, resulting in lower node utilisation. Before designing a 3D DSN coverage algorithm based on the 3D Voronoi diagram partition, the following assumptions are made:
(i) The sensor nodes are isomorphic, and each node has access to its own location and those of its neighbour nodes through some technical means.

(ii) Each node has the same detection range, but its sensing angle can differ; that is, each sensor $s_i$ can select its own sensing angle $2\theta_i$.

(iii) Each node can rotate and move freely in any direction.

Figure 2: Target detection model.
### 3.2. Model Angle Quantification
In previous studies, a randomly distributed target point $p$ in space is covered by the directional node $s_i$ when the basic conditions $d(o_i,p)\le R_s$ and $|\varphi|\le\theta$ are satisfied. Most studies [27, 28] use the partitioning model shown in Figure 3 to specify angles. However, it is difficult for this model to quantify the angle $\varphi$ between the target point $p$ and the node $s_i$. In particular, it is difficult to determine the necessary rotation amount when a node must rotate to cover a target. Furthermore, the sensing model and direction angle partitioning of Figure 3 are abstract and impractical for directional sensor nodes with differing $\theta$ and varying main direction angle $\psi$.

Figure 3: Node sensing direction.

Therefore, we redefine the sensing model and propose an angle and direction division method using one octant of a sphere to unify the rotation, as shown in Figure 4. As long as the spherical sector generatrix is exactly tangent to the three edges of $O{-}ABC$ (i.e., the spherical sector contains $O{-}ABC$), coverage can be achieved by rotating the model to the coordinate system in which the target event is located, provided the condition $d(s_i,p)\le R_s$ is satisfied. The above assumptions can reduce omissions and node energy consumption. In this regard, we subsequently respecify the conditions under which a target event can be covered by the directed node.

Figure 4: Angle division of sensing model.
As shown in Figure 4, we cut the sphere of radius $r$ along its axes of symmetry to divide it into eight parts; the shaded portion in Figure 4(a) is the isolated polyhedron $O{-}ABC$. For a more intuitive understanding and for analysis and quantification, we extract the shaded part removed in Figure 4(a) and draw the perspective view shown in Figure 4(b). To quantify the angle $\theta$ in our model, we need to solve for $\angle COO'$. Therefore, we project the point $O$ onto a plane containing $O'$ that is perpendicular to the line passing through $O$ and $O'$. For a more intuitive analysis, we extract the triangle $\triangle ABC$ in Figure 4(b) and draw the plane view shown in Figure 4(c). The line segments $AO$, $BO$, and $CO$ are mutually perpendicular and congruent (i.e., $AO=BO=CO=r$), so $AB=AC=BC=\sqrt{2}\,r$. In Figure 4(c), $O'$ represents the projection of point $O$, which is located at the centre of the equilateral triangle $\triangle ABC$. Note that $CD=(\sqrt{2}/2)\,r$. We now calculate $CO'=CD/\cos 30^\circ=(\sqrt{2}/2)\,r/\cos 30^\circ=(\sqrt{6}/3)\,r$. The connecting line segments $O'G$ and $OG$ form the right triangle $\triangle GOO'$, as shown in Figures 4(b) and 4(d). In Figure 4(d), $\varphi=\angle COO'$ is exactly the direction angle we need to calculate; that is, $\varphi=\arcsin(\sqrt{6}/3)\approx 54.74^\circ$. Note that $\varphi$ does not depend on the radius $r$. Next, we draw a plane view of the spherical sector projection on the plane, as shown in Figure 4(e). Note that $2\varphi$ is not equal to the true angle at which the spherical sector $O{-}ABC$ is projected onto the plane, $2\varphi\ne\angle COG$; the calculated inner angle is $\angle COG=90^\circ$. Therefore, we obtain the minimum sensing angle $\theta$ when the condition $\theta=\arcsin(\sqrt{6}/3)$ is satisfied, as shown in Figure 4(e). At this time, the regular triangular pyramid $OABC$ is enclosed by the spherical sector $O{-}A_1B_1C_1$. Meanwhile, when the projected fan's central angle satisfies $2\theta\ge 2\arcsin(\sqrt{6}/3)$, the spherical sector sensing area contains the polyhedron $O{-}ABC$.

In summary, we first assume that a node with central angle $2\theta\ge 2\arcsin(\sqrt{6}/3)$ can meet the required coverage. We then specify that a target point $p(x,y,z)$ is covered by the sensor node $s_i(x_i,y_i,z_i)$ subject to the following conditions (a minimal code check follows the list):
(i) The Euclidean distance between points $p$ and $s_i$ must be less than or equal to the maximum sensing distance of the node; that is, $d(s_i,p)\le R_s$.

(ii) The angle $\varphi$ formed between the vector from $p$ to $s_i$ and the node's main sensing direction must satisfy $\varphi\le\arcsin(\sqrt{6}/3)\approx 54.74^\circ$.

(iii) The central angle of the directed sensing model satisfies $2\theta\ge 2\arcsin(\sqrt{6}/3)\approx 109.5^\circ$.
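The following is a minimal sketch of this coverage test, with the three conditions checked in order. We take $\varphi$ as the angle between the node-to-target vector and the main sensing direction $\vec{w}$; the function name and argument conventions are ours.

```python
import numpy as np

PHI_MAX = np.arcsin(np.sqrt(6.0) / 3.0)   # arcsin(sqrt(6)/3) ~= 54.74 degrees

def covers(s_i, w, p, Rs, theta):
    """Conditions (i)-(iii): does node s_i with main direction w cover point p?"""
    if theta < PHI_MAX:                   # (iii): 2*theta >= 2*arcsin(sqrt(6)/3)
        return False
    v = np.asarray(p, float) - np.asarray(s_i, float)
    d = np.linalg.norm(v)
    if d > Rs:                            # (i): within the sensing range
        return False
    if d == 0.0:                          # target sits at the node vertex
        return True
    w = np.asarray(w, float)
    cos_phi = np.dot(v, w) / (d * np.linalg.norm(w))
    phi = np.arccos(np.clip(cos_phi, -1.0, 1.0))
    return phi <= PHI_MAX                 # (ii): phi <= arcsin(sqrt(6)/3)
```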
### 3.3. Related Definitions
For a more intuitive follow-up analysis and discussion, we introduce the following definitions to better describe the problem.

Definition 1 (3D directed node sensing model). A 3D directed sensing model can be represented by the five-tuple $\langle s_i(x,y,z), \vec{w}, R_s, 2\theta, \psi \rangle$, where $s_i$, $\vec{w}$, $R_s$, $2\theta$ ($0\le\theta\le\pi$), and $\psi$ represent the vertex position coordinate, the main sensing direction vector, the node's sensing radius, the node's sensing angle, and the node's sensing direction, respectively.

Definition 2 (neighbour node). Each node is unique within its Voronoi unit; therefore, according to [29], two sensor nodes whose Voronoi units share a common boundary are neighbour nodes.

Definition 3 (network coverage ratio). We refer to the sensing accuracy model in [27] to determine the probability that any point $p$ in space is monitored by node $s_i$. Assuming that the sensing accuracy decays as the distance increases, the sensing accuracy $C(s_i,p)$ is

$$C(s_i,p) = \frac{1}{1+\alpha\, d(s_i,p)^{\beta}}, \qquad (1)$$

where $C(s_i,p)$ represents the sensing accuracy of sensor $s_i$ at point $p$ and $d(s_i,p)$ represents the Euclidean distance from point $p$ to $s_i$, calculated as

$$d(s_i,p) = \sqrt{(x-x_i)^2 + (y-y_i)^2 + (z-z_i)^2}. \qquad (2)$$

The constants $\alpha$ and $\beta$ reflect the device correlation coefficients for the physical characteristics of the sensor. Typically, $\beta$ lies in the range (1~4) and $\alpha$ is used as an adjustment parameter.

A target in the monitoring area can be covered simultaneously by multiple sensor nodes, and its coverage probability $C$ can be expressed as

$$C = 1 - \prod_{i=1}^{N}\left(1 - C(s_i,p)\right), \qquad (3)$$

which is equivalent to

$$C = 1 - \prod_{i=1}^{N}\left(1 - \frac{1}{1+\alpha\, d(s_i,p)^{\beta}}\right). \qquad (4)$$
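Equations (1)–(4) translate directly into code. A short sketch follows; the default values $\alpha=1$ and $\beta=2$ are illustrative only, since the paper says $\beta$ typically lies in 1~4 and leaves $\alpha$ as an adjustment parameter.

```python
import numpy as np

def sensing_accuracy(s_i, p, alpha=1.0, beta=2.0):
    """Eqs. (1)-(2): distance-decaying sensing accuracy C(s_i, p)."""
    d = np.linalg.norm(np.asarray(s_i, float) - np.asarray(p, float))
    return 1.0 / (1.0 + alpha * d ** beta)

def coverage_probability(nodes, p, alpha=1.0, beta=2.0):
    """Eqs. (3)-(4): probability that point p is covered by at least one node."""
    miss = np.prod([1.0 - sensing_accuracy(s, p, alpha, beta) for s in nodes])
    return 1.0 - miss
```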
## 4. Voronoi Partitioning Method
### 4.1. 2D Voronoi Principle
In early research on two-dimensional DSN coverage, nodes are randomly distributed in the plane, which is divided by the 2D Voronoi method. As shown in Figure 5, given a set of sensor nodes $\{s_1,s_2,\ldots,s_n\}$, the bounded plane is divided into polygonal cells $K_1,K_2,\ldots,K_n$, such that each cell $K_i$ contains exactly one of the sensor nodes $s_i$, where $s_i$ is called the generating node of $K_i$ [14, 30]. Furthermore, according to the partitioning property of the Voronoi diagram, the distance $D(s_i,T)$ from any point $T$ in cell $K_i$ to $s_i$ is shorter than the distance $D(s_j,T)$ between $T$ and any neighbour node $s_j$ of $s_i$.

Figure 5: 2D Voronoi diagram.

As shown in Figure 6, there are 70 sensor nodes in the plane and the grey area represents the coverage of each node. After division, each Voronoi unit corresponds to a single node.

Figure 6: 2D Voronoi node coverage.
### 4.2. 3D Voronoi Partition Principle
After reviewing the related 2D Voronoi research in the previous section, we extend it to divide three-dimensional volumes. The volume is divided into polyhedral Voronoi units called V-body units; each is an irregular, multifaceted, closed, convex body according to the literature [14]. Meanwhile, each unit $V_i\in\{V_1,V_2,\ldots,V_n\}$ contains a unique node $s_i$. Hence, following the properties of the 2D Voronoi diagram, the 3D Voronoi partition satisfies

$$Q(V_i) = \left\{V_i \subseteq L^3 \mid d(T,s_i) \le d(T,s_j),\; j=1,2,\ldots,n,\; \forall j\ne i\right\}. \qquad (5)$$

It follows that the number of nodes $N(s_i)$ is equal to the number of Voronoi units $N(V_i)$ after division; that is, $N(s_i)=N(V_i)$, $i=1,2,\ldots,n$. Thus, this paper first uses this important neighbouring property to divide the space and study the 3D coverage problem.
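Both the partition property (5) and the neighbour-node notion of Definition 2 are straightforward to realise with SciPy, whose `Voronoi` (backed by Qhull) accepts 3D generating points directly. A sketch, with node and target counts chosen arbitrarily:

```python
import numpy as np
from scipy.spatial import Voronoi, cKDTree

rng = np.random.default_rng(0)
nodes = rng.random((30, 3)) * 100.0        # 30 nodes in a 100^3 region
targets = rng.random((50, 3)) * 100.0      # 50 target points

# Eq. (5): a point T belongs to the V-body unit of its nearest node.
_, owner = cKDTree(nodes).query(targets)   # owner[i] = generating node of T_i

# Definition 2: nodes whose V-body units share a boundary are neighbours.
vor = Voronoi(nodes)                       # Qhull handles 3-D input directly
neighbours = {i: set() for i in range(len(nodes))}
for a, b in vor.ridge_points:
    neighbours[int(a)].add(int(b))
    neighbours[int(b)].add(int(a))
```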
## 5. VFA Analysis and 3D-DAOA
As discussed earlier, directional sensor network nodes can be separated into a unique set of nonoverlapping V-body units by the 3D Voronoi partitioning method after an initial random deployment. A target may not be detected by a given node, and each target could be located in any V-body unit. Additionally, according to the Voronoi partitioning property, we first consider nodes preferentially covering the targets in their own V-body units, for which we need to design related node rotation and movement algorithms to achieve coverage.
### 5.1. Definitions of VFA
In sensor network coverage, the virtual force algorithm (VFA) [31] enables nodes deployed in the monitoring environment to be redeployed by different virtual field forces. The concept of a virtual force comes from physics: when the distance between two atoms is too small, they are separated by the repulsion between them, and when the distance between two atoms is too large, an attractive force is generated, bringing them closer to each other [14, 32]. In this article, we redesign an improved 3D VFA to solve the following problems:

(i) Redeploying a node in a 3D Voronoi partition to accurately cover uncovered targets.

(ii) Quantifying the node's rotation angle and unifying the node's coordinate system.

(iii) Defining the virtual forces (mutual attraction and repulsion between nodes, and repulsion from obstacles) that move the directional nodes to complete the coverage.
### 5.2. Improved 3D-VFA Analysis
Through the above definition of virtual forces, we mainly address directional node mobility. During optimisation, nodes move under a total resultant forceFA, thereby achieving node balance and uniform target coverage. In the monitoring region, we assume that a sensor node is subject to a gravitational force Fa from neighbouring nodes, an interaction force Fij from nodes, and a force Fo between the node and the boundary of the target region L. The total force FA is therefore
(6)FA=∑j=1,j≠inFij+Fa+Fo.We further constrain our virtual forces to prevent the node from running out of energy prematurely due to excessive node movement. We introduce two distance thresholds:rmin represents the minimum safe distance between nodes and rb represents the distance beyond which the interaction force between the nodes is zero. According to the literature [14, 33], equation (7) defines the interaction force Fij between the nodes as
(7)Fij=+∞,0<dsi,sj≤rmin,k1mimjdsi,sja1,rmin<dsi,sj<rb,0,dsi,sj=rb,−k2mimjdsi,sja2,rb<dsi,sj≤Rc,0,dsi,sj>Rc.Here,k1, k2, a1, and a2 represent gain coefficients and mi and mjrepresent the node quality factor (typically with value of 1). When the distance between two nodes dsi,sj satisfies the condition rmin<dsi,sj<rb, the nodes are mutually exclusive.To enable the node to perform motion detection on targets that are far away, we set the targetTi as the attraction source for the node. In addition, we consider the problem of incompleteness of the node-aware signals as mentioned in [34]. Therefore, we establish the force between the sensing model’s centre of gravity and the target. In this paper, the centre of gravity of the spherical fan is at Gi and the centre of gravity of the spherical sector is
(8)Gi=382r−h,where r represents the length of the spherical sector busbar (i.e., r=Rs) and h represents the length of the point F and the vertex C′ in the plane sector, as shown in Figure 4(e), then h=FC1=r1−cosθ. Therefore, we can calculate the centre of gravity Gi for the node model (i.e., Gi=3/82r−h=3/8r1+cosθ). The gravitational pull of the target on the node’s centre of gravity can be calculated as
(9)Fa=−k3mGimTidGi,Tiae,j∈QT,0,otherwise,where k3 and ae represent the gain coefficient and dGi,Ti represents the Euclidean distance from the node’s centre of gravity Gi to target Ti. Additionally, mTi and mGi represent quality factors of target Ti and node model Gi, respectively. QT represents the force generated by the target set T in the region of action.Additionally, to avoid collisions between nodes and obstacles during movement, we must add a boundary repulsionFo—this ensures the distance between nodes is in the optimal range. According to [14], boundary repulsion is calculated as
$$F_o=\begin{cases}\dfrac{k_4 m_i m_j}{d(s_i,s_j)^{a_b}}, & 0<d(s_i,s_j)\le L,\\ 0, & d(s_i,s_j)>L,\end{cases}\tag{10}$$

where $k_4$ and $a_b$ are gain coefficients and $d(s_i,s_j)$ is the distance between node $s_i$ and the obstacle. When the distance between the node and the obstacle is within $L$, the node is repelled by the obstacle.
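To make the force model concrete, the following Python sketch evaluates equations (7), (9), and (10) and combines them into the resultant force of equation (6). It is a minimal illustration under stated assumptions: the fractional force laws as reconstructed above, unit quality factors, and placeholder gain coefficients k1, k2, k3, k4 and exponents a1, a2, ae, ab (the paper does not fix their values); it is not the authors' reference implementation.

```python
import numpy as np

# Placeholder gains and exponents for illustration; the paper leaves them unspecified.
K1, K2, K3, K4 = 1.0, 1.0, 1.0, 1.0
A1, A2, AE, AB = 2.0, 2.0, 2.0, 2.0

def f_ij(si, sj, r_min, r_b, r_c):
    """Internode force of eq. (7), returned as a vector acting on node si."""
    diff = sj - si
    d = np.linalg.norm(diff)
    u = diff / d                        # unit vector from si towards sj
    if d <= r_min:
        return -u * 1e9                 # eq. (7) gives +inf: hard repulsion away from sj
    if d < r_b:
        return -u * K1 / d**A1          # repulsion: nodes are too close
    if d <= r_c:
        return u * K2 / d**A2           # attraction: nodes are drifting apart
    return np.zeros(3)                  # beyond communication range: no force

def f_a(g_i, t_i):
    """Target attraction of eq. (9), acting on the node's centre of gravity g_i."""
    diff = t_i - g_i
    d = np.linalg.norm(diff)
    return (diff / d) * K3 / d**AE      # pull towards the target

def f_o(si, obstacle, L):
    """Boundary/obstacle repulsion of eq. (10)."""
    diff = si - obstacle
    d = np.linalg.norm(diff)
    if d <= L:
        return (diff / d) * K4 / d**AB  # push away from the obstacle
    return np.zeros(3)

def total_force(si, neighbours, g_i, target, obstacle, r_min, r_b, r_c, L):
    """Resultant force F_A of eq. (6)."""
    f = sum((f_ij(si, sj, r_min, r_b, r_c) for sj in neighbours), np.zeros(3))
    return f + f_a(g_i, target) + f_o(si, obstacle, L)

def centre_of_gravity(s_i, w, r, theta):
    """Eq. (8): G_i lies (3/8) r (1 + cos(theta)) along the main axis w."""
    return s_i + w * (3.0 / 8.0) * r * (1.0 + np.cos(theta))
```

In practice one would cap the displacement derived from $F_A$ (the paper's Max_Step) so that a node moves at most a few metres per iteration.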
### 5.3. 3D-DAOA
We design related algorithms to solve the two core issues identified for directional sensor nodes in [29]: node rotation and node mobility. We now describe a dynamic adjustment optimisation algorithm for 3D DSNs based on spherical sector coverage models: 3D-DAOA. To address the issues encountered with the original VFA approach, we designed a dynamic coverage adjustment strategy and combined it with the 3D-VFA described above. If a deployed sensor node can cover the target by rotating, rotation takes priority, and we reduce reliance on movement-based coverage. The design steps and pseudocode of the algorithm are presented below.

Step 1. Deploy the $n$ sensor nodes $s_i$ in the monitoring area $L$.

Step 2. Use the 3D Voronoi method to divide the region $L$ where the sensor nodes $s_i$ are located, so that each node lies in its own Voronoi unit $v_i$.

Step 3. For each directional sensor, set its coordinate system origin to the sensor's position and define the central angle $2\theta$ of the node's sensing model, where $2\theta\ge 2\arcsin(\sqrt{6}/3)\approx 109.5^\circ$.

Step 4. Assuming that the position information of the target point $T_j$ is known, test the conditions $d(s_i,T_j)\le R_s$ and $\varphi\le\theta$. If both are true, store the number of targets that have been covered, $N_{T_k}$, and the number of nodes that are covering targets, $N_{S_k}$, and execute Step 5; otherwise, execute Step 13.

Step 5. Evaluate $d(s_i,T_j)\le R_s$ again. If it is true, calculate the number of target points $N_{T_f}$ and proceed to Step 6; otherwise, execute Step 12.

Step 6. Calculate the set of angles $\sigma$ between each covered target $T_k$ and the main direction axis $\vec{w}$, and find the smallest angle $\sigma_{min}$ among them.

Step 7. Calculate the number $N_{T_s}$ of remaining targets $T_s$; that is, $N_{T_s}=N_{T_f}-N_{T_k}$.

Step 8. Determine whether the angle $\xi$ between $T_s$ and $\vec{w}$ satisfies the condition $\xi<\theta+\sigma_{min}$ or $\xi<\theta-\sigma_{min}$.

Step 9. If one of the above conditions is satisfied, rotate the node's main direction axis by $\theta+\sigma_{min}$ or $\theta-\sigma_{min}$ toward the target point $T_s$. Otherwise, mark the currently uncovered target $T_a$ and execute Step 10.

Step 10. Retain the remaining nodes, stop rotation, and calculate the number of nodes $N_2$.

Step 11. Introduce the resultant force $F_a$ between an idle neighbour node and $T_a$ to move the idle neighbour node $S_I$ to cover $T_a$.

Step 12. Calculate the total number of remaining nodes $N_{s_c}$ and the number of targets that are not covered, $N_{T_c}$.

Step 13. Use the resultant force $F_A$ to move the remaining nodes $S_c$ to $T_c$.

Step 14. Repeat Steps 4–13 for a set number of iterations until all nodes have moved to their optimal positions and the final coverage is complete.

In this paper, the 3D Voronoi method is first used to divide the space in which the nodes are located, allowing us to determine whether a target is located inside a Voronoi unit, though a target might not be contained in any unit. As the number of nodes increases, so does the density of the increasingly compact V-body units; therefore, with a large number of nodes and events, our method can divide the space for target detection more accurately. This paper aims to use algorithms to improve the network coverage ratio and increase the average node residual energy; our main goal is to find a better balance between node utilisation and remaining energy to extend the network lifetime. To achieve this, we design the node's coverage rotation mechanism, priority coverage mechanism, and movement mechanism. We first design the discriminant condition of the algorithm by combining the 3D Voronoi partitioning method with an optimised core adjustment mechanism. The pseudocode of 3D-DAOA is shown in Algorithm 1.

Algorithm 1: Dynamic adjustment optimisation algorithm (3D-DAOA).
1 Input1: the total number n of sensor nodes si and the sensing radius Rs of the nodes
2 Input2: Ti // the set of target points
3 // Randomly generate n nodes si in the area L of side 100 m
4 L = Polyhedron([0 0 0; 1 0 0; 1 1 0; 0 1 0; 0 0 1; 1 0 1; 1 1 1; 0 1 1] * 100)
5 si = gallery('uniformdata', [3, n], 0) * 100
6 Maxiter = 50 // set the maximum number of iterations
7 Max_Step = 0~10 // set the maximum moving step size of a node
8 θi = 2θ // set the initial angle of all directional nodes
9 Pi = Location(si) // get location information si(x, y, z) for all nodes
10 [vi, L] = Voronoi(Pi, R³) // divide the V-body units vi, vi = {v1, v2, ⋯, vn}
11 if d(si, Tj) ≤ Rs && φ ≤ θ
12   NTk = Size(Tk) && NSk = Size(Sk)
13   // calculate the number of targets that have been covered, NTk
14   // calculate the number of nodes that are covering targets, NSk
15   while i ≤ Num do
16     if d(si, Tj) ≤ Rs then
17       NTf = Size(Tf) && σmin = Min(σi)
18       // calculate the number of target points NTf and the minimum angle σmin
19       // calculate the number of target points covered by the same node, NTs
20     else
21       Select the free neighbour nodes (NTs = NTf − NTk) to move to cover Ta
22     if ξ < θ + σmin || ξ < θ − σmin then
23       Rotate the main direction axis w⃗ by θ ± σmin
24     else
25       NTs = Size(Ts)
26       // calculate the number of target points that are currently not covered, NTs
27       FA = Σ(j=1, j≠i to n) Fij + Fa + Fo // calculate the total force FA
28       Move Sc → Tc
29     end if
30   Set the number of iterations and repeat lines 12–29 until coverage is complete
31 end if
32 end while
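For readers who prefer an executable view, the sketch below mirrors the control flow of Algorithm 1 in Python: rotation-first coverage, then bounded movement of the nearest idle node toward still-uncovered targets. It is a schematic rendering under our own simplifications (the helper covers() stands in for the conditions of Section 3.2, and the simple nearest-node move abbreviates the full force computation of Section 5.2); it is not the authors' simulation code.

```python
import numpy as np
from scipy.spatial import Voronoi

def covers(s, w, t, Rs, theta):
    """Spherical sector test: d(s, t) <= Rs and angle(w, t - s) <= theta."""
    v = t - s
    d = np.linalg.norm(v)
    if d == 0.0 or d > Rs:
        return d == 0.0
    return np.arccos(np.clip(w @ v / d, -1.0, 1.0)) <= theta

def daoa(nodes, axes, targets, Rs, theta, max_iter=50, max_step=10.0):
    """Schematic 3D-DAOA loop over a fixed number of iterations."""
    vor = Voronoi(nodes)  # line 10: 3D Voronoi partition (available for cell queries)
    for _ in range(max_iter):
        # Rotation has priority (lines 16-23): aim the axis at an in-range target.
        for i in range(len(nodes)):
            for t in targets:
                v = t - nodes[i]
                d = np.linalg.norm(v)
                if 0.0 < d <= Rs and not covers(nodes[i], axes[i], t, Rs, theta):
                    axes[i] = v / d            # rotate main direction axis towards t
        # Movement only for targets no node can cover by rotating (lines 25-28).
        for t in targets:
            if not any(covers(s, w, t, Rs, theta) for s, w in zip(nodes, axes)):
                i = int(np.argmin(np.linalg.norm(nodes - t, axis=1)))
                step = t - nodes[i]
                dist = np.linalg.norm(step)
                nodes[i] = nodes[i] + step / dist * min(max_step, dist)
    return nodes, axes
```

Invoking daoa(nodes.astype(float), axes, targets, Rs=30.0, theta=np.radians(55.0)) reproduces the rotation-before-movement behaviour described in the design steps above.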
## 6. Experiment Simulation and Discussion
### 6.1. Simulation Environment and Results
In this section, we use MATLAB (R2015b) to perform simulation experiments to verify the performance of the proposed algorithm. Initially, we randomly deploy the sensor nodes into a cube with 100 m sides to test the target points of the deployment. According to [35], when the node density is low, the optimal node distance to ensure network connectivity is Rc = 2Rs; when the number of nodes is large, the optimal distance for network connectivity is Rc = 3Rs. The simulation parameters are listed in Table 1.

Table 1
Parameter settings.

| Name | Value |
| --- | --- |
| Simulation area size, L | 100 m × 100 m × 100 m |
| Total number of targets, Noi | 25 |
| Number of nodes, n | 60/100 |
| Sensing radius, Rs | 10~60 m |
| Node communication radius, Rc | Rc = 2Rs |
| Initial residual energy, E | 30 J |
| rmin | Rs × 3%~7% |
| α | 0.5 |
| β | 0.5 |
| Angle of view, θ | 10° ≤ θ ≤ 90° |

We first deploy the nodes, as shown in Figure 7(a), where the blue cones represent the directional nodes. In the first set of experiments, shown in Figure 7, 60 directional sensor nodes were randomly deployed in the 100 m × 100 m × 100 m space. During the algorithm, the 3D Voronoi partitioning method is used to divide the space into 60 different V-body units based on the number and positions of the nodes, such that each node si is located in its respective unit vi, as shown in Figure 7(b). In Figure 7(c), a red dot represents a target to be covered, and a blue cone represents a node that covers a target after the algorithm's movement adjustment. A black cone indicates a node changing its position toward a target that was not within coverage range. The simulation results show that, when the position coordinates of the 25 targets are known, the number of targets covered by nodes is first calculated under the adjustment of the algorithm. When a target is not within any node's coverage, the algorithm selects some of the nodes to move.

Figure 7
Simulation experiment diagrams: (a) initial node deployment; (b) experimental simulation process; (c) experimental simulation result.
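As a concrete illustration of this deployment and partitioning step, the short Python fragment below reproduces the setup described above, random uniform deployment of 60 nodes in the 100 m cube followed by a 3D Voronoi partition, using SciPy's Qhull bindings instead of the MATLAB calls in Algorithm 1; it is an illustrative sketch, not the code used to generate Figure 7.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(seed=0)            # fixed seed for reproducibility
n, side = 60, 100.0

nodes = rng.uniform(0.0, side, size=(n, 3))    # 60 random nodes in the cube
vor = Voronoi(nodes)                           # 3D Voronoi ("V-body") partition

# Node i owns the polyhedral cell vor.regions[vor.point_region[i]];
# an index of -1 in the vertex list marks an unbounded boundary cell.
cell = vor.regions[vor.point_region[0]]
print("vertex indices of node 0's cell:", cell)
```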
### 6.2. Algorithm Analysis and Contrast Experiment
To further verify the accuracy of the experiment, we compared 3D-DAOA with the random algorithm (RA) and the improved VFA algorithm [36]. In Experiment 1, we set the number of nodes N = 25, the node's angle of view AOV = 55°, and the number of target points T = 60 to examine the relationship between the node's detection radius and the coverage ratio. As shown in Figure 8, as the detection radius increases, the coverage ratio of all three algorithms also increases; however, the coverage of the proposed algorithm is significantly higher than that of the other two algorithms. Figure 8 also shows that the proposed algorithm is the first to reach full coverage, at a sensing radius of 60 m, because it reasonably partitions the node positions from the beginning and achieves precise coverage through rotation or movement using the priority adjustment strategy. Therefore, the proposed algorithm can reduce coverage redundancy and greatly improve the coverage ratio of the overall network.

Figure 8
Coverage ratio with increasing sensing radius.

In Experiment 2, we verified the effect of changing the node's viewing angle on the coverage ratio, as shown in Figure 9. The coverage of the three algorithms increases as the viewing angle increases; however, this increase is smaller than that caused by increasing the detection radius, because different fields of view (FOV) of the same node affect the coverage ratio differently. A larger FOV yields a larger coverage range; that is, the probability of covering a target also increases. The advantage of the proposed algorithm is that it better determines the current locations of nodes and targets, and it uses the priority coverage mechanism or idle nodes to achieve a higher coverage ratio.

Figure 9
Coverage ratio with increasing field of view.

Sensor nodes typically carry a power source with limited energy, and it is difficult to replenish this energy; therefore, energy must be used reasonably. In this experiment, the node's rotational and movement energy consumption makes up a large portion of its total energy consumption. According to [13, 37], a single directional node rotating 180° consumes 1.52 J of energy, so rotating 1° consumes roughly 0.009 J, whereas each node consumes 3.6 J per metre of movement.
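The per-action costs above strongly favour rotation over movement, which the following back-of-the-envelope Python check (our illustration, using the cited costs and the 30 J budget from Table 1) makes explicit:

```python
# Energy model from [13, 37]: a 180-degree rotation costs 1.52 J and
# movement costs 3.6 J per metre; each node starts with 30 J (Table 1).
E_ROT_PER_DEG = 1.52 / 180.0    # ~0.0084 J per degree (the paper rounds to 0.009)
E_MOVE_PER_M = 3.6
E_INIT = 30.0

def rotation_cost(degrees: float) -> float:
    return abs(degrees) * E_ROT_PER_DEG

def movement_cost(metres: float) -> float:
    return abs(metres) * E_MOVE_PER_M

# Even a full 180-degree reorientation (1.52 J) is cheaper than moving half a
# metre (1.8 J), which is why 3D-DAOA gives rotation priority over movement.
print(rotation_cost(180.0), movement_cost(0.5))   # 1.52 1.8
print(E_INIT / E_MOVE_PER_M)                      # a node can move ~8.3 m in total
```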
In Experiment 3, we assume the number of nodes N = 25 and the angle of view θ = 55°, with an initial energy of 30 J per node, to examine the relationship between the average residual energy and the coverage ratio for the three algorithms. As shown in Figures 10 and 11, comparing the two viewing angles, the average residual energy of each algorithm decreases as the angle increases, while the coverage ratio of the nodes increases with the angle. The improved VFA algorithm has the lowest average residual energy because it lacks a dynamic coverage adjustment mechanism, which leads to too many mobile nodes; it therefore has the largest average node energy consumption. 3D-DAOA reasonably reduces unnecessary energy consumption to achieve a better balance while maintaining a high coverage ratio.

Figure 10
Index values of each algorithm when θ = 45°.
Figure 11
Index values of each algorithm when θ = 55°.

We now compare the residual energy of a single node under the three algorithms together with the total coverage ratio for different viewing angles, as shown in Table 2. We conclude that the sum of the two index values is greatest for the proposed algorithm when the angle of view is 55°, because 3D-DAOA appropriately balances network coverage and energy consumption. Furthermore, it comprehensively considers a variety of factors and indicators to achieve better detection results.
Table 2
Average node residual energy and coverage ratio values for the three algorithms (RA = random algorithm; "total value" is the sum of residual energy and coverage ratio).

| Angle of view, θ | RA: residual energy (J) | RA: coverage ratio (%) | RA: total value | Improved VFA: residual energy (J) | Improved VFA: coverage ratio (%) | Improved VFA: total value | 3D-DAOA: residual energy (J) | 3D-DAOA: coverage ratio (%) | 3D-DAOA: total value |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 45° | 29.71 | 34 | 63.71 | 17.06 | 43 | 60.06 | 24.51 | 52 | 76.51 |
| 55° | 29.35 | 37 | 66.35 | 14.31 | 48 | 62.31 | 22.46 | 61 | 83.46 |
| 60° | 29.17 | 39 | 68.17 | 11.52 | 53 | 64.52 | 20.33 | 63 | 83.33 |

Combining the data in Figures 10 and 11 with Table 2, we conclude that under the random algorithm the node's residual energy barely changes while its coverage rate is the lowest, because this algorithm does not rotate or move nodes based on the targets' positions. Under the same evaluation conditions, the proposed algorithm has obvious advantages over the improved VFA algorithm: its priority coverage mechanism achieves accurate target coverage, its dynamic adjustment mechanism avoids invalid node movement, and its coverage strategy performs best when the angle of view is 55°.

In Experiments 4 and 5, we verified the relationship between the number of nodes and the coverage ratio. In Experiment 4, we set N = 25, θ = 55°, T = 60, and Rs = 30 m, as shown in Figure 12. We conclude that as the number of nodes increases, the overall coverage ratio of all three algorithms increases. Figure 12 also shows that when there are fewer nodes, the coverage ratio of all three algorithms is lower. The coverage of the random algorithm and the improved VFA algorithm is lower than that of 3D-DAOA, especially when the number of nodes exceeds 30.
Figure 12
The effect of changes in the number of nodes on the coverage ratio (when the number of targets is small).

In Experiment 5, we set T = 100 while keeping the other indicators unchanged, as shown in Figure 13. When the number of target points is large, the proposed algorithm again achieves a higher coverage ratio. Therefore, under the same conditions, the proposed algorithm is more suitable for large-scale target detection, because the adjustment mechanism of 3D-DAOA enables the nodes to cover the targets accurately.

Figure 13
The effect of changes in the number of nodes on the coverage ratio (when the number of target points is large).
## 7. Conclusions
In this paper, we studied target coverage in 3D DSNs. First, we improved the traditional 3D directional sensing model and proposed a spherical sector model that is more suitable for 3D directional sensor nodes. Next, we unified the coordinate system of the nodes and rotated them to achieve coverage using the spherical sector model. We then quantified the sensing model's viewing angle to provide an effective detection scheme for directional node coverage. We proposed a correlation algorithm and combined node rotation and mobility to achieve priority coverage effectively, enabling our algorithm to achieve a higher coverage ratio while reducing network energy consumption. Finally, we verified and compared 3D-DAOA with other algorithms to demonstrate its reliability and accuracy. In future work, we will further study the algorithm in real test environments and with mobile targets.
---

*Source: 1018434-2019-10-07.xml*
**Title:** Dynamic Adjustment Optimisation Algorithm in 3D Directional Sensor Networks Based on Spherical Sector Coverage Models
**Authors:** Xiaochao Dang; Chenguang Shao; Zhanjun Hao
**Journal:** Journal of Sensors
(2019)
**Category:** Engineering & Technology
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1018434

---
## Abstract
In directional sensor networks research, target event detection is currently an active research area, with applications in underwater target monitoring, forest fire warnings, border surveillance, and other important activities. Previous studies have often discussed target coverage in two-dimensional sensor networks, but these studies cannot be extended directly to three-dimensional networks. Additionally, most previous target coverage detection models are based on a circular or omnidirectional sensing model. More importantly, if a directional sensor network does not use a well-designed coverage algorithm in the coverage-monitoring process, its nodes' energy consumption will increase and the network lifetime will be significantly shortened. With the objective of addressing three-dimensional target coverage in applications, this study proposes a dynamic adjustment optimisation algorithm for three-dimensional directional sensor networks based on a spherical sector coverage model, which improves the lifetime and coverage ratio of the network. First, we redefine the directional nodes' sensing model and use the three-dimensional Voronoi method to divide the regions where the nodes are located. Then, we introduce a correlation force between the target and the sensor node to optimise the algorithm's coverage mechanism, so that the sensor node can accurately move to the specified position for target coverage. Finally, by verifying the feasibility and accuracy of the proposed algorithm, the simulation experiments demonstrate that the proposed algorithm can effectively improve the network coverage and node utilisation.
---
## Body
## 1. Introduction
A three-dimensional (3D) wireless sensor network (WSN) consists of several tiny, battery-powered sensors that can communicate with each other to monitor a 3D field of interest (FOI) [1] for target events. WSNs include omnidirectional sensor networks and directional sensor networks (DSNs). Research into WSN coverage is roughly classified into three branches: area coverage, barrier coverage, and target coverage. In recent years, WSN coverage has been an active research area with a wide range of practical applications: target detection [2], healthcare applications [3], target location [4], data transmission [5], etc. In these real-world applications, we can detect target events in a region of interest by deploying sensor nodes. Therefore, using existing methods and techniques to achieve effective event detection is the current research focus. At the same time, satisfying multiple objectives (e.g., reducing the network's overall energy consumption while ensuring a high coverage ratio) is an indispensable consideration.

In most previous studies, researchers have discussed and presented solutions for 2D coordinate systems under realistic conditions to reduce the difficulty, and they have made great progress [6–8]. However, modelling and studying DSN coverage is still less common in 3D systems than in 2D systems; not only does the difficulty of the research increase in 3D systems, but deployed sensor nodes often encounter complex environmental influences (e.g., weather and climate). In recent years, some researchers have established models for 3D WSNs and proposed corresponding distributed optimisation algorithms [9–11]. However, the WSN node coverage model is mainly based on the 2D omnidirectional sensing model, and much of the research in 3D systems is based on the omnidirectional ball sensing model. While the omnidirectional sensing model can provide better range and node utilisation for area coverage, in practice we only require modest energy and nodes with limited directional detection to achieve target coverage for a set of targets or special events. Therefore, 3D DSN coverage research is better suited to these conditions.

Of course, a directional sensor not only needs to consider its own position and sensing range (as with an omnidirectional sensor) but must also consider its angle. Furthermore, when nodes are randomly deployed, coverage cannot be precise and some omissions will occur. Therefore, in a specific environment, we need a dynamic algorithm to select the optimal number of active nodes to detect the target [12]. At the same time, we need to consider moving or rotating these active nodes within a certain period to adjust their headings and achieve the best coverage. For example, in [8], the use of unattended sensor networks has been discussed for detecting targets using energy-efficient methods; the authors analyse the trade-offs between power consumption and quality of service in WSNs in terms of detection capability and latency. In [13], the authors propose a hybrid movement strategy (HMS) to reduce the high energy consumption resulting from mobility and to improve the coverage ratio of DSNs.
Although the methods proposed above can reduce the energy consumption of the network and improve the coverage ratio, the rotation angle of a 3D directional sensor node is difficult to determine; the increased dimensionality brings further complications. Therefore, we propose a network model suitable for directional sensors and a related dynamic adjustment optimisation algorithm for 3D systems. We first design a sensing model that is more suitable for 3D DSNs and allows us to quantify the rotation angle of the node. Secondly, to achieve accurate coverage, we extend traditional 2D Voronoi division and apply it to 3D DSNs. We also use theory and experimentation to verify the algorithm and further reduce network energy consumption. Finally, we design experimental simulations and perform algorithm comparisons to further analyse our algorithm's effectiveness. Our main contributions are highlighted as follows:
(i)
We are the first to propose a spherical sector sensing model for 3D DSNs that quantifies the rotation angle, combined with a 3D Voronoi method [14] that divides the space using the sensors' positions
(ii)
We design a synergistic priority coverage mechanism to reduce the moving distance of nodes, thereby reducing excessive energy consumption while guaranteeing a high coverage ratio for the sensor network
(iii)
We optimise the traditional virtual force algorithm to suit practical conditions, and we perform a full theoretical analysis and an experimental comparative analysis of the algorithm proposed in this paper to verify its validity and accuracy.

The remainder of this paper is organised as follows. In Section 2, the research progress and related work on DSNs in recent years are summarised. In Section 3, the DSN coverage model and sensing angle are described and the relevant definitions are provided. After this, we compare the differences between 2D and 3D Voronoi partitions and give the 3D Voronoi partition theory in Section 4. We then show how we designed and improved the relevant algorithms and provide the design steps in Section 5. In Section 6, we describe the simulations and experiments we performed on the algorithm and compare it with other algorithms for analysis. Conclusions and future works are discussed in the final section.
## 2. Related Works
In recent years, research on DSNs has mainly been carried out on 2D planes. For example, in [15], the authors propose a cluster head- (CH-) based distributed target coverage algorithm to solve a Maximum Coverage with Minimum Sensors (MCMS) problem. The authors also designed distributed clustering and target coverage algorithms to reduce network energy consumption. Subsequently, in [12], building on [15], they designed an energy-saving target coverage algorithm for DSNs using the distributed clustering mechanism; they improved the distributed algorithm in [15] to use the CH approach appropriately and enhance DSN target coverage. In [16], the authors propose a new method (based on particle swarm optimisation) to maximise coverage of 2D regions. This algorithm allows a directional sensor node to constantly adjust its sensing direction to provide the best coverage. However, most of the above studies map 3D sensor coverage problems into 2D for discussion, so they cannot be applied directly in three dimensions. Therefore, we need to consider not only dimensionality but also a node sensing model suited to the dimension of the actual environment.

In addition, nodes are often distributed randomly in the monitoring area. Reducing the deployment cost while using the limited node energy for efficient coverage has become an active research topic. In [17], the authors point out that motility and mobility are essential for DSN nodes to minimise occlusion effects and coverage overlap; at the same time, motility is superior to mobility in terms of network cost and energy efficiency. Therefore, almost all research aims to solve coverage problems through motility.

In practice, however, there are still some coverage holes that can only be addressed through mobility. For example, the authors in [18] use the directionality of the orientation sensor, rotating it to locate periodic detection objects; they developed an event monitoring system and proposed a maximum coverage deployment (MCD) heuristic iteration to deploy sensors to cover targets. However, we must not only consider the direction of the orientation sensor (i.e., the change or rotation of its sensing angle) to enable efficient deployment; we must also allow the orientation sensor to move to fill coverage holes in the monitoring area (i.e., DSN nodes can move). Therefore, [13] proposes HMS to address the high energy consumption of directional sensor movement; the authors use a cascading method to adjust the coverage of the DSNs, effectively reducing network energy consumption. In [19], the authors propose an algorithm based on learning automata to address the orientation sensor network's coverage quality requirements and to maximise the network lifetime (i.e., priority-based target coverage). The algorithm divides the DSNs into several cover sets so that each cover set can meet the coverage quality requirements of all targets, effectively extending the network lifetime.

In [13, 18, 19], the authors address mobility energy consumption well, but their approaches are verified only on 2D planes and are not suitable for 3D environments. Therefore, the literature [20–22] has successively proposed orientation sensor models and algorithms for 3D coordinate systems.
For example, in [21], the authors studied low-power green communication in 3D DSNs and proposed the space-time coverage optimisation scheduling (STCOS) algorithm to obtain the maximum network coverage. In [22], the authors propose a network coverage enhancement algorithm based on an artificial fish swarm algorithm to improve the coverage rate; however, they only optimised the sensor angle and did not solve the mobility problem of directional sensors. In [23], the authors propose prescheduling-based k-coverage group scheduling (PSKGS) and self-organised k-coverage scheduling (SKS) algorithms to reduce algorithmic cost and ensure effective monitoring quality. Their experimental results show that PSKGS improves monitoring quality, while SKS reduces the node's computation and communication costs.

In addition, the special geometric properties of the Voronoi diagram are applied in many aspects of WSN coverage. In [24], the authors propose Voronoi-based centralised approximation (VCA) and Voronoi-based distributed approximation (VDA) algorithms for optimal coverage in DSNs, and they verify experimentally that the two algorithms reduce coverage overlap and achieve a higher coverage rate. In [25], the authors combine the special geometric features of the 2D Voronoi diagram with real-time response to dynamic environment changes and propose a distributed greedy algorithm that selects and adjusts the intracell sensing direction based on coverage (IDS&IDA). Research on 2D Voronoi algorithms has clearly shown good results, but it is rarely applied in three dimensions.

Therefore, based on the typical literature [14, 25], this paper improves and extends the Voronoi method, making it suitable for 3D DSN target coverage. We propose a dynamic adjustment optimisation algorithm for 3D DSNs based on a spherical sector coverage model. This algorithm can maximise coverage and improve network lifetime by adjusting the direction and movements of nodes in the DSNs. In the subsequent experimental verification section, we discuss the proposed algorithm and compare it with other algorithms.
## 3. Network Coverage Model and Angle Quantification Method
### 3.1. Network Coverage Model
First, we assume that the sensing model of a sensor node covers a sphere centred at the node's position $o_i(x_i,y_i,z_i)$, with its sensing range $R_s$ as the maximum detection distance. Initially, it is assumed that sensor nodes $s_i$ are randomly scattered in an $L^3$ target area, and the set of nodes is $\{s_1,s_2,\cdots,s_n\}$. $R_c$ represents the communication radius of a node; when the Euclidean distance between two nodes $s_i$ and $s_j$ satisfies $d(s_i,s_j)<R_c$, we call them neighbour nodes [26]. In traditional 2D studies, most researchers model the sensor's footprint as a planar fan (sector) to achieve coverage optimisation. In some related 3D research, the node's sensing range is abstracted into a cone-shaped covering model. However, the coverage model of a 3D directional sensor should be obtained by rotating a planar fan with radius $R_s$ and central angle $2\theta$ around its axis of symmetry, as shown in Figure 1. Therefore, we define the directional node's sensing range as a spherical sector sensing model. As shown in Figure 1, the spherical sector O—A1B1C1 represents the coverage model of the directional sensor. When $2\theta=360^\circ$, its coverage matches that of an omnidirectional sensor node. Therefore, the spherical sector network model redefined in this paper is more suitable for modelling the coverage of 3D sensor nodes.

Figure 1
Spherical sector sensing model.

Initially, sensor nodes are randomly scattered in the target monitoring area, which may result in an uneven node distribution, excessive node energy consumption, and duplicate or missing coverage for some targets. In Figure 2, the grey dots indicate targets that need to be covered, and the three spherical sectors represent sensor coverage. Some of the targets in Figure 2 are not completely covered; therefore, the sensor network may also have omission problems, resulting in lower node utilisation. Before designing a 3D DSN coverage algorithm based on the 3D Voronoi diagram partition, the following assumptions are made:
(i)
The sensor nodes are isomorphic, and each node has access to its own location and that of its neighbour nodes through some technical means
(ii)
Each node has the same detection radius, but the sensing regions can differ; that is, each sensor can select a different sensing angle $2\theta$, where each $s_i$ can select its own sensing angle $2\theta_i$
(iii)
Each node can rotate and move freely in any direction

Figure 2
Target detection model.
### 3.2. Model Angle Quantification
In previous studies, a randomly distributed target point $p$ in space is covered by the directional node $s_i$ when the basic conditions $d(o_i,p)\le R_s$ and $|\varphi|\le\theta$ are satisfied. Most studies [27, 28] use the partitioning model shown in Figure 3 to specify angles. However, it is difficult for this model to quantify the angle $\varphi$ between the target point $p$ and the node $s_i$. In particular, it is difficult to determine the necessary rotation amount when a node must rotate to cover a target. Furthermore, the sensing model and direction-angle partitioning of Figure 3 are abstract and impractical for directional sensor nodes with differing $\theta$ and varying main direction angle $\psi$.

Figure 3
Node sensing direction.

Therefore, we redefine the sensing model and propose an angle and direction division method using one octant of a sphere to unify the rotation, as shown in Figure 4. As long as the spherical sector's generatrix is exactly tangent to the three edges of O—ABC (i.e., the spherical sector contains O—ABC), coverage can be achieved by rotating the model to the coordinate system in which the target event is located, provided the condition $d(s_i,p)\le R_s$ is satisfied. The above assumptions can reduce omissions and node energy consumption. In this regard, we subsequently respecify the conditions under which the target event can be covered by the directed node.

Figure 4
Angle division of sensing model.
As shown in Figure 4, we cut the sphere of radius $r$ along its axes of symmetry to divide it into eight parts; the shaded portion in Figure 4(a) is the isolated polyhedron O—ABC. For a more intuitive understanding and for analysis and quantification, we separately extract the shaded part removed in Figure 4(a) and draw the perspective view shown in Figure 4(b). To quantify the angle $\theta$ in our model, we need to solve for $\angle COO'$. Therefore, we project the point $O$ onto the plane of $\triangle ABC$; the projection $O'$ lies on the line through $O$ perpendicular to that plane. For a more intuitive analysis, we separately extract the triangle $\triangle ABC$ in Figure 4(b) and draw the plane view shown in Figure 4(c). The line segments $AO$, $BO$, and $CO$ are mutually perpendicular and congruent (i.e., $AO=BO=CO=r$), so we determine $AB=AC=BC=\sqrt{2}\,r$. In Figure 4(c), $O'$ represents the projection of point $O$, which is located at the centre of the equilateral triangle $\triangle ABC$. Note that $CD=\frac{\sqrt{2}}{2}r$. We now calculate $CO'=CD/\cos 30^\circ=\frac{\sqrt{2}}{2}r/\cos 30^\circ=\frac{\sqrt{6}}{3}r$. The connecting line segments $O'G$ and $OG$ form the right triangle $\triangle GOO'$, as shown in Figures 4(b) and 4(d). In Figure 4(d), $\varphi=\angle COO'$ is exactly the direction angle we need to calculate; that is, $\varphi=\arcsin(\sqrt{6}/3)\approx 54.74^\circ$. Note that $\varphi$ is not related to the radius $r$. Next, we draw a plane view of the spherical sector's projection on the plane, as shown in Figure 4(e). Note that $2\varphi$ is not equal to the true angle at which the spherical sector O—ABC is projected onto the plane, $2\varphi\neq\angle COG$; the calculated inner angle is $\angle COG=90^\circ$. Therefore, we obtain the minimum sensing angle $\theta$ when the condition $\theta=\arcsin(\sqrt{6}/3)$ is satisfied, as shown in Figure 4(e). At this point, the regular triangular pyramid OABC is enclosed by the spherical sector O—A1B1C1. Meanwhile, when the projected fan's central angle satisfies $2\theta\ge 2\arcsin(\sqrt{6}/3)$, the spherical sector sensing area contains the polyhedron O—ABC.

In summary, we first assume that the node's central angle $2\theta\ge 2\arcsin(\sqrt{6}/3)$ can meet the required coverage. We then specify that a target point $p(x,y,z)$ is covered by the sensor node $s_i(x_i,y_i,z_i)$ subject to the following conditions (a short verification sketch follows the list):
(i)
The Euclidean distance between points $p$ and $s_i$ must be less than or equal to the maximum sensing distance of the node; that is, $d(s_i,p)\le R_s$
(ii)
The angle $\varphi$ formed between the vector from $s_i$ to $p$ and the node's main sensing direction must be at most $\theta$; that is, $\varphi\le\arcsin(\sqrt{6}/3)\approx 54.74^\circ$
(iii)
The central angle of the directed sensing model satisfies $2\theta\ge 2\arcsin(\sqrt{6}/3)\approx 109.5^\circ$
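These conditions translate directly into a membership test, and the geometric constant they rely on is easy to verify numerically. The Python sketch below (our illustration, with hypothetical helper names, not code from the paper) first confirms φ = arcsin(√6/3) ≈ 54.74° from the octant construction of Figure 4 and then checks conditions (i) and (ii) for a sample target point:

```python
import numpy as np

# Verify the octant geometry of Section 3.2: with AO, BO, CO mutually
# perpendicular and of length r, the centroid O' of triangle ABC satisfies
# CO' = (sqrt(6)/3) r, hence phi = arcsin(sqrt(6)/3) ~= 54.74 degrees.
r = 1.0                                    # phi is independent of r
A, B, C = np.eye(3) * r                    # three mutually perpendicular radii
O_prime = (A + B + C) / 3.0                # centroid of equilateral triangle ABC
co_prime = np.linalg.norm(C - O_prime)     # -> 0.81649... = sqrt(6)/3
theta_min = np.arcsin(co_prime / r)        # -> 0.95532 rad ~= 54.74 degrees

def covers(s_i, w, p, R_s, theta=theta_min):
    """Conditions (i)-(ii): d(s_i, p) <= R_s and angle(w, p - s_i) <= theta."""
    v = np.asarray(p, float) - np.asarray(s_i, float)
    d = np.linalg.norm(v)
    if d == 0.0:
        return True                        # target sits at the vertex
    if d > R_s:
        return False                       # condition (i) fails
    phi = np.arccos(np.clip(w @ v / d, -1.0, 1.0))
    return phi <= theta                    # condition (ii)

# Example: a node at the origin looking along +z sees a point 5 m away, 30 degrees off-axis.
w = np.array([0.0, 0.0, 1.0])
p = 5.0 * np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
print(np.degrees(theta_min), covers(np.zeros(3), w, p, R_s=10.0))  # 54.7356... True
```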
### 3.3. Related Definitions
For a more intuitive follow-up analysis and discussion, we introduce the following definitions to better describe the problem.

Definition 1 (3D-directed node sensing model).
A 3D-directed sensing model can be represented by the five-tuple $\langle s_i(x,y,z),\vec{w},R_s,2\theta,\psi\rangle$, where $s_i$, $\vec{w}$, $R_s$, $2\theta$ $(0\le\theta\le\pi)$, and $\psi$ represent the vertex position coordinate, the main sensing direction vector, the node's sensing radius, the node's sensing angle, and the node's sensing direction, respectively.
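Definition 1's five-tuple maps naturally onto a small record type; a minimal sketch (the field names are our own) might look as follows:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DirectedNode:
    """3D-directed node sensing model <s_i(x,y,z), w, R_s, 2θ, ψ> of Definition 1."""
    position: np.ndarray   # s_i(x, y, z): vertex position coordinate
    axis: np.ndarray       # w: main sensing direction (unit vector)
    radius: float          # R_s: sensing radius
    aperture: float        # 2θ: sensing (central) angle, 0 <= θ <= π
    direction: float       # ψ: sensing direction (orientation about the axis)

node = DirectedNode(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                    radius=10.0, aperture=np.radians(110.0), direction=0.0)
```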
Definition 2 (neighbour node).
Each node is unique within its Voronoi cell; therefore, following [29], we specify that two sensor nodes whose Voronoi cells share a common boundary are neighbouring nodes.

Definition 3 (network coverage ratio).
We refer to the sensing accuracy model in [27] to determine the probability that any point $p$ in space is monitored by node $s_i$. Assuming that the sensing accuracy $C$ decays as the distance increases, the sensing accuracy $C(s_i,p)$ is
$$C(s_i,p)=\frac{1}{1+\alpha\,d(s_i,p)^{\beta}},\tag{1}$$

where $C(s_i,p)$ represents the sensing accuracy of sensor $s_i$ at point $p$ and $d(s_i,p)$ represents the Euclidean distance from point $p$ to $s_i$, which can be calculated as
$$d(s_i,p)=\sqrt{(x-x_i)^2+(y-y_i)^2+(z-z_i)^2}.\tag{2}$$

The constants $\alpha$ and $\beta$ reflect the device correlation coefficients for the physical characteristics of the sensor. Typically, $\beta$ has a range of $1{\sim}4$ and $\alpha$ is used as an adjustment parameter. A target in the monitoring area can be covered simultaneously by multiple sensor nodes, and its coverage probability $C$ can be expressed as
$$C=1-\prod_{i=1}^{N}\bigl(1-C(s_i,p)\bigr),\tag{3}$$

which is equivalent to
$$C=1-\prod_{i=1}^{N}\left(1-\frac{1}{1+\alpha\,d(s_i,p)^{\beta}}\right).\tag{4}$$
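Equations (1)–(4) compose per-node sensing accuracies into the probability that a point is detected by at least one node. The small sketch below evaluates them directly (a convenience illustration of ours; the α and β values are the Table 1 settings, and the node coordinates are made up):

```python
import numpy as np

ALPHA, BETA = 0.5, 0.5   # device coefficients; Table 1 uses 0.5 for both

def accuracy(s, p):
    """Eqs. (1)-(2): sensing accuracy of node s at point p."""
    d = np.linalg.norm(np.asarray(p, float) - np.asarray(s, float))
    return 1.0 / (1.0 + ALPHA * d**BETA)

def coverage_probability(nodes, p):
    """Eqs. (3)-(4): probability that p is detected by at least one of the nodes."""
    miss = np.prod([1.0 - accuracy(s, p) for s in nodes])
    return 1.0 - miss

nodes = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
print(coverage_probability(nodes, (5.0, 5.0, 0.0)))   # exceeds any single-node accuracy
```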
## 3.1. Network Coverage Model
First, we assume that the sensing model of the sensor node covers a sphere with its midpoint at the node’s positionoixi,yi,zi and its sensing range Rs is the maximum detection distance. Initially, it is assumed that sensor nodes si are randomly scattered in an L3 target area, and the set of nodes is sis1,s2,⋯,sn. Rc represents the communication radius of the node, when the Euclidean distance between two nodes si and sj satisfies dsi,sj<Rc; we call them neighbour nodes [26]. In a traditional 2D study, most researchers transform the sensor nodes into a 2D planar fan to achieve coverage optimisation. In some related 3D research fields, the node’s sensing range is abstracted into a covering model of a rounded hammer. However, the coverage model of the 3D directional sensor should be obtained by rotating a planar fan with radius Rs and central angle 2θ around its axis of symmetry, as shown in Figure 1. Therefore, we define the directional node’s sensing range as a spherical sector sensing model. As shown in Figure 1, the spherical sector O—A1B1C1 represents the coverage model of the directional sensor. When 2θ=360°, its coverage matches that of the omnidirectional sensor node. Therefore, the spherical sector network model redefined in this paper is more suitable for modelling the coverage of 3D sensor nodes.Figure 1
Spherical sector sensing model.Initially, sensor nodes are randomly scattered in the target monitoring area, which may result in an uneven node distribution, excessive node energy consumption, and duplicate or missing coverage for some targets. In Figure2, the grey dots indicate targets that need to be covered, and the three spherical sectors represent sensor coverage. Some of the targets in Figure 2 are not completely covered. Therefore, the sensor network may also have omission problems, resulting in lower node utilisation. Before designing a 3D DSNs coverage algorithm based on the 3D Voronoi diagram partition, the following assumptions are made:
(i)
The sensor nodes are isomorphic, and each node has access to its own location and that of its neighbour nodes through some technical means
(ii)
Each node has the same detection range, but its sensing range can be different; that is, each sensor can select different sensing angles2θ, where each si can select its own sensing angle 2θi
(iii)
Each node can rotate and move freely in any directionFigure 2
Target detection model.
## 3.2. Model Angle Quantification
In previous studies, the randomly distributed target pointp in space is covered by the directional node si and the basic conditions doi,p≤Rs and ∣φ≤θ∣ need to be satisfied. Most studies [27, 28] use the partitioning model shown in Figure 3 to specify angles. However, it is difficult for this model to quantify the angle φ between the target point p and the node si. In particular, it is difficult to determine the necessary rotation amount when a node must rotate to cover a target. Furthermore, the sensing model and direction angle partitioning of Figure 3 is abstract and impractical for directional sensor nodes with differing θ and varying main direction angle ψ.Figure 3
Node sensing direction.Therefore, we redefine the sensing model and propose an angle and direction division method using one octant of a sphere to unify the rotation as shown in Figure4. As long as the spherical sector busbar is exactly tangent to the three edges of O—ABC (i.e., the spherical sector contains O—ABC), coverage can be achieved by rotating the model to the coordinate system in which the target event is located—when the condition dsi,p≤Rs is satisfied. The above assumptions can reduce omissions and node energy consumption. In this regard, we subsequently respecified the conditions under which the target event can be covered by the directed node.Figure 4
Angle division of sensing model.
As shown in Figure 4, we cut the sphere of radius $r$ along its axes of symmetry to divide it into eight parts; the shaded portion in Figure 4(a) is the isolated polyhedron O—ABC. For a more intuitive understanding and for analysis and quantification, we separately extract the shaded part removed in Figure 4(a) and draw the perspective view shown in Figure 4(b). To quantify the angle $\theta$ in our model, we need to solve for $\angle COO'$. Therefore, we project the point $O$ onto a plane containing $O'$ that is perpendicular to the line passing through $O$ and $O'$. For a more intuitive understanding and analysis, we separately extract the triangle $\triangle ABC$ in Figure 4(b) and draw the plane view shown in Figure 4(c). The line segments $AO$, $BO$, and $CO$ are mutually perpendicular and congruent (i.e., $AO = BO = CO = r$), so $AB = AC = BC = \sqrt{2}\,r$. In Figure 4(c), $O'$ represents the projection of point $O$, which is located at the centre of the equilateral triangle $\triangle ABC$. Note that $CD = \frac{\sqrt{2}}{2}r$. We now calculate $CO' = CD/\cos 30° = \frac{\sqrt{2}}{2}r/\cos 30° = \frac{\sqrt{6}}{3}r$. The connecting line segments $O'G$ and $OG$ form the right triangle $\triangle GOO'$, as shown in Figures 4(b) and 4(d). In Figure 4(d), $\varphi = \angle COO'$ is exactly the direction angle we need to calculate; that is, $\varphi = \arcsin(\sqrt{6}/3) \approx 54.74°$. Note that $\varphi$ does not depend on the radius $r$. Next, we draw a plane view of the spherical sector projected onto the plane, as shown in Figure 4(e). Note that $2\varphi$ is not equal to the true angle at which the spherical sector O—ABC is projected onto the plane, i.e., $2\varphi \ne \angle COG$; the calculated inner angle is $\angle COG = 90°$. Therefore, we obtain the minimum sensing angle $\theta$ when the condition $\theta = \arcsin(\sqrt{6}/3)$ is satisfied, as shown in Figure 4(e). At this point, the regular triangular pyramid $OABC$ is enclosed by the spherical sector O—$A_1B_1C_1$. Meanwhile, when the projected sector's central angle satisfies $2\theta \ge 2\arcsin(\sqrt{6}/3)$, the spherical sector sensing area contains the polyhedron O—ABC.In summary, we first assume that a node whose central angle satisfies $2\theta \ge 2\arcsin(\sqrt{6}/3)$ can meet the required coverage. We then specify that a target point $p(x, y, z)$ is covered by the sensor node $s_i(x_i, y_i, z_i)$ subject to the following conditions (a numeric check of these conditions is sketched after the list):
(i)
The Euclidean distance between points $p$ and $s_i$ must be less than or equal to the maximum sensing distance of the node; that is, $d(s_i, p) \le R_s$
(ii)
The angle $\varphi$ formed between the vector from $s_i$ to $p$ and the node's main sensing direction must be no greater than $\theta$; that is, $\varphi \le \arcsin(\sqrt{6}/3) \approx 54.74°$
(iii)
The central angle of the directional sensing model satisfies $2\theta \ge 2\arcsin(\sqrt{6}/3) \approx 109.5°$
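The following minimal Python check (our illustration; the node position, main direction, and target point are assumed values) verifies the minimum angle $\arcsin(\sqrt{6}/3)$ numerically and evaluates conditions (i) and (ii) for one node-target pair:

```python
import numpy as np

theta_min = np.arcsin(np.sqrt(6) / 3)       # minimum half-angle
print(np.degrees(theta_min))                # -> 54.7356..., so 2*theta >= 109.47 deg

R_s = 30.0                                  # assumed sensing radius (m)
s_i = np.array([0.0, 0.0, 0.0])             # node position (assumed)
w = np.array([0.0, 0.0, 1.0])               # unit main sensing direction (assumed)
p = np.array([5.0, 5.0, 20.0])              # target point (assumed)

v = p - s_i
d = np.linalg.norm(v)                       # condition (i): d(s_i, p) <= R_s
phi = np.arccos(np.clip(np.dot(v, w) / d, -1.0, 1.0))

covered = (d <= R_s) and (phi <= theta_min) # conditions (i) and (ii)
print(d, np.degrees(phi), covered)          # -> 21.21..., 19.47..., True
```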
### 3.3. Related Definitions
To make the subsequent analysis and discussion more intuitive, we introduce the following definitions to better describe the problem.Definition 1 (3D directional node sensing model).
A 3D directional sensing model can be represented by the five-tuple $\langle s_i(x, y, z), \vec{w}, R_s, 2\theta, \psi \rangle$, where $s_i$, $\vec{w}$, $R_s$, $2\theta$ ($0 \le \theta \le \pi$), and $\psi$ represent the vertex position coordinate, the main sensing direction vector, the node's sensing radius, the node's sensing angle, and the node's sensing direction, respectively.Definition 2 (neighbour node).
Each node is unique within its Voronoi unit; therefore, following reference [29], we specify that two sensor nodes whose Voronoi units share a common boundary are neighbouring nodes.Definition 3 (network coverage ratio).
We refer to the sensing accuracy model in [27] to determine the probability that any point $p$ in space is monitored by node $s_i$. Assuming that the sensing accuracy decays as the distance increases, the sensing accuracy $C(s_i, p)$ is
(1) $C(s_i, p) = \dfrac{1}{1 + \alpha \, d(s_i, p)^{\beta}}$,
where $C(s_i, p)$ represents the sensing accuracy of sensor $s_i$ at point $p$ and $d(s_i, p)$ represents the Euclidean distance from point $p$ to $s_i$, which can be calculated as
(2) $d(s_i, p) = \sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2}$.
Constants $\alpha$ and $\beta$ reflect the device correlation coefficient for the physical characteristics of the sensor. Typically, $\beta$ ranges over $(1, 4)$ and $\alpha$ is used as an adjustment parameter. A target in the monitoring area can be covered simultaneously by multiple sensor nodes, and its coverage probability $C$ can be expressed as
(3) $C = 1 - \prod_{i=1}^{N} \left(1 - C(s_i, p)\right)$,
which is equivalent to
(4) $C = 1 - \prod_{i=1}^{N} \left(1 - \dfrac{1}{1 + \alpha \, d(s_i, p)^{\beta}}\right)$.
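Equations (1)–(4) translate directly into code. The sketch below (our illustration; the sensor positions are assumed, and $\alpha = \beta = 0.5$ follows Table 1) computes the joint coverage probability of a point monitored by several sensors:

```python
import numpy as np

def sensing_accuracy(s, p, alpha=0.5, beta=0.5):
    """Equations (1)-(2): accuracy of sensor s at point p."""
    d = np.linalg.norm(np.asarray(p, float) - np.asarray(s, float))
    return 1.0 / (1.0 + alpha * d**beta)

def coverage_probability(sensors, p, alpha=0.5, beta=0.5):
    """Equations (3)-(4): probability that at least one sensor covers p."""
    miss = np.prod([1.0 - sensing_accuracy(s, p, alpha, beta) for s in sensors])
    return 1.0 - miss

sensors = [(10, 10, 10), (40, 20, 30), (70, 60, 50)]  # assumed positions (m)
print(coverage_probability(sensors, (20, 15, 20)))    # joint coverage of one point
```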
## 4. Voronoi Partitioning Method
### 4.1. 2D Voronoi Principle
In early research on two-dimensional DSN coverage, nodes are randomly distributed in the plane and partitioned using the 2D Voronoi method. As shown in Figure 5, given a set of sensor nodes $S = \{s_1, s_2, \dots, s_n\}$, the bounded plane is divided into polygonal cells $K = \{K_1, K_2, \dots, K_n\}$, such that each cell $K_i$ contains exactly one of the sensor nodes $s_i$, where $s_i$ is called the generating node of cell $K_i$ [14, 30]. Furthermore, according to the partitioning property of the Voronoi diagram, the distance $D(s_i, T)$ from any point $T$ in cell $K_i$ to $s_i$ is shorter than the distance $D(s_j, T)$ between the point $T$ and the neighbour nodes of $s_i$.Figure 5
2D Voronoi diagram.As shown in Figure 6, there are 70 sensor nodes in the plane and the grey area represents the coverage of each node. After division, each Voronoi unit corresponds to a single node.Figure 6
2D Voronoi node coverage.
### 4.2. 3D Voronoi Partition Principle
After reviewing the related 2D Voronoi research in the previous section, we extend it to divide three-dimensional volumes. The volume is divided into polyhedral Voronoi units called V-body units; each is an irregular, multifaceted, closed, convex body according to the literature [14]. Meanwhile, each unit $V_i \in \{V_1, V_2, \dots, V_n\}$ contains a unique node $s_i$. Hence, extending the property of the 2D Voronoi diagram, the 3D Voronoi partition satisfies
(5) $Q(V_i) = \{T \in L^3 \mid d(T, s_i) \le d(T, s_j),\ j = 1, 2, \dots, n,\ \forall j \ne i\}$.
It can be concluded from the aforementioned results that the number of nodes $N(s_i)$ is equal to the number of Voronoi units $N(V_i)$ after division; that is, $N(s_i) = N(V_i),\ i = 1, 2, \dots, n$. Thus, this paper first uses this important neighbouring property to divide and study the 3D coverage problem.
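The partitioning property (5) says that every point of the volume belongs to the V-body unit of its nearest node, so the partition can be sketched with a nearest-neighbour query (our illustration; the node and target counts mirror the later simulation settings):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 100, size=(60, 3))    # 60 nodes in a 100 m cube (assumed)
targets = rng.uniform(0, 100, size=(25, 3))  # 25 target points (assumed)

tree = cKDTree(nodes)                        # spatial index over node positions
dist, owner = tree.query(targets)            # owner[k]: index i minimising d(T_k, s_i)

# Each target lies in the V-body unit V_i of node s_i = nodes[owner[k]], and
# the number of units equals the number of nodes, N(s_i) = N(V_i).
print(owner[:5], np.round(dist[:5], 2))
```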
## 5. VFA Analysis and 3D-DAOA
As discussed earlier, directional sensor network nodes can be separated into a unique set of nonoverlapping V-body units by the 3D Voronoi partitioning method after an initial, random deployment. A target may not be detected by any given node, and each target could be located in any V-body unit. Additionally, according to the Voronoi partitioning property, each node preferentially covers the targets inside its own V-body unit, for which we need to design node rotation and movement algorithms to achieve coverage.
### 5.1. Definitions of VFA
In sensor network coverage, the virtual force algorithm (VFA) [31] enables nodes deployed in the monitoring environment to be redeployed by different virtual field forces. The concept of a virtual force comes from physics: when the distance between two atoms is too small, they are pushed apart by the repulsion between them; when the distance between two atoms is too large, an attractive force is generated, bringing them closer to each other [14, 32]. In this article, we need to redesign an improved 3D-VFA to solve the following problems:
(i)
Redeploying a node in a 3D Voronoi partition to accurately cover uncovered targets
(ii)
Quantifying the node’s rotation angle and unifying the node’s coordinate system
(iii)
Defining the virtual forces (mutual attraction and repulsion between nodes, and repulsion from obstacles) used to move the directional nodes to complete the coverage
### 5.2. Improved 3D-VFA Analysis
Through the above definition of virtual forces, we mainly address directional node mobility. During optimisation, nodes move under a total resultant force $F_A$, thereby achieving node balance and uniform target coverage. In the monitoring region, we assume that a sensor node is subject to an attractive force $F_a$ exerted by targets, an interaction force $F_{ij}$ from neighbouring nodes, and a force $F_o$ between the node and the boundary of the target region $L$. The total force $F_A$ is therefore
(6) $F_A = \sum_{j=1, j \ne i}^{n} F_{ij} + F_a + F_o$.
We further constrain our virtual forces to prevent a node from running out of energy prematurely due to excessive movement. We introduce two distance thresholds: $r_{min}$ represents the minimum safe distance between nodes, and $r_b$ represents the distance at which the interaction force between nodes is zero. According to the literature [14, 33], equation (7) defines the interaction force $F_{ij}$ between nodes as
(7) $F_{ij} = \begin{cases} +\infty, & 0 < d(s_i, s_j) \le r_{min}, \\ k_1 \dfrac{m_i m_j}{d(s_i, s_j)^{a_1}}, & r_{min} < d(s_i, s_j) < r_b, \\ 0, & d(s_i, s_j) = r_b, \\ -k_2 \dfrac{m_i m_j}{d(s_i, s_j)^{a_2}}, & r_b < d(s_i, s_j) \le R_c, \\ 0, & d(s_i, s_j) > R_c. \end{cases}$
Here, $k_1$, $k_2$, $a_1$, and $a_2$ represent gain coefficients, and $m_i$ and $m_j$ represent node quality factors (typically with value 1). When the distance between two nodes satisfies $r_{min} < d(s_i, s_j) < r_b$, the nodes repel each other. To enable a node to move toward and detect targets that are far away, we set the target $T_i$ as an attraction source for the node. In addition, we consider the problem of incomplete node-sensing signals mentioned in [34]. Therefore, we establish the force between the sensing model's centre of gravity and the target. In this paper, the centre of gravity of the spherical sector is located at $G_i$, given by
(8) $G_i = \frac{3}{8}(2r - h)$,
where $r$ represents the length of the spherical sector's generatrix (i.e., $r = R_s$) and $h$ represents the distance between the point $F$ and the vertex $C_1$ in the plane sector shown in Figure 4(e); then $h = FC_1 = r(1 - \cos\theta)$. Therefore, we can calculate the centre of gravity $G_i$ for the node model as $G_i = \frac{3}{8}(2r - h) = \frac{3}{8}r(1 + \cos\theta)$. The attractive pull of the target on the node's centre of gravity can be calculated as
(9) $F_a = \begin{cases} -k_3 \dfrac{m_{G_i} m_{T_i}}{d(G_i, T_i)^{a_e}}, & j \in Q_T, \\ 0, & \text{otherwise}, \end{cases}$
where $k_3$ and $a_e$ represent gain coefficients and $d(G_i, T_i)$ represents the Euclidean distance from the node's centre of gravity $G_i$ to target $T_i$. Additionally, $m_{T_i}$ and $m_{G_i}$ represent the quality factors of target $T_i$ and node model $G_i$, respectively. $Q_T$ represents the region of action in which the target set $T$ generates a force. Additionally, to avoid collisions between nodes and obstacles during movement, we must add a boundary repulsion $F_o$, which ensures that the distance between nodes stays in the optimal range. According to [14], the boundary repulsion is calculated as
(10) $F_o = \begin{cases} k_4 \dfrac{m_i m_j}{d(s_i, s_j)^{a_b}}, & 0 < d(s_i, s_j) \le L, \\ 0, & d(s_i, s_j) > L, \end{cases}$
where $k_4$ and $a_b$ are gain coefficients and $d(s_i, s_j)$ here denotes the distance between node $s_i$ and the obstacle. When the distance between the node and the obstacle is within $L$, the node is repelled by the obstacle.
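A hedged sketch of the virtual-force model of equations (6), (7), and (10) follows (our illustration; we read the flattened formulas as Coulomb-style fractions $k\,m_i m_j / d^{a}$, which is an assumption, and all gains and thresholds are assumed values):

```python
import numpy as np

K1, K2, A1, A2 = 1.0, 1.0, 2.0, 2.0     # gain coefficients (assumed)
R_MIN, R_B, R_C = 1.0, 20.0, 60.0       # distance thresholds (assumed)

def f_ij(d, m_i=1.0, m_j=1.0):
    """Equation (7): scalar node-node force; > 0 repels, < 0 attracts."""
    if 0 < d <= R_MIN:
        return np.inf                    # too close: hard repulsion
    if R_MIN < d < R_B:
        return K1 * m_i * m_j / d**A1    # repulsive zone
    if d == R_B or d > R_C:
        return 0.0                       # balanced distance or out of range
    return -K2 * m_i * m_j / d**A2       # attractive zone, r_b < d <= R_c

def node_node_force(node, neighbours):
    """Node-node part of equation (6); F_a and F_o would be added similarly."""
    F = np.zeros(3)
    for nb in neighbours:
        d = np.linalg.norm(node - nb)
        u = (node - nb) / d              # unit vector pointing away from nb
        F += f_ij(d) * u                 # positive f_ij pushes nodes apart
    return F

node = np.array([50.0, 50.0, 50.0])
neighbours = [np.array([55.0, 50.0, 50.0]), np.array([50.0, 90.0, 50.0])]
print(node_node_force(node, neighbours))
```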
### 5.3. 3D-DAOA
We design related algorithms to solve the two core issues encountered with directional sensor nodes identified in [29]: node rotation and node mobility. We now describe a dynamic adjustment optimisation algorithm for 3D DSNs based on the spherical sector coverage model: 3D-DAOA. To address the issues encountered with the original VFA approach, we designed a dynamic coverage adjustment strategy and combined it with the 3D-VFA described above. If a deployed sensor node can cover the target by rotating, rotation takes priority, and we reduce the use of the node's mobility-based coverage method. We present the design steps and pseudocode of the algorithm below.Step 1.
Deploy $n$ sensor nodes $s_i$ in the monitoring area $L$.Step 2.
Use the 3D Voronoi method to divide the region $L$ in which the sensor nodes $s_i$ are located, so that each node lies in its own Voronoi unit $v_i$.Step 3.
For each directional sensor, set its coordinate system origin to the sensor's position and define the central angle $2\theta$ of the node's sensing model, where $2\theta \ge 2\arcsin(\sqrt{6}/3) \approx 109.5°$.Step 4.
Assuming that the position information of the target point $T_j$ is known, test the conditions $d(s_i, T_j) \le R_s$ and $\varphi \le \theta$. If both are true, store the number of targets that have been covered, $N_{T_k}$, and the number of nodes that are covering targets, $N_{S_k}$, and execute Step 5; otherwise, execute Step 13.Step 5.
Evaluate $d(s_i, T_j) \le R_s$ again. If it is true, calculate the number of target points $N_{T_f}$ and proceed to Step 7; otherwise, execute Step 12.Step 6.
Calculate the set of angles $\sigma$ between each covered target $T_k$ and the main direction axis $\vec{w}$, and find the smallest angle $\sigma_{min}$ among them.Step 7.
Calculate the number $N_{T_s}$ of remaining targets $T_s$; that is, $N_{T_s} = N_{T_f} - N_{T_k}$.Step 8.
Determine whether the angle $\xi$ between $T_s$ and $\vec{w}$ satisfies the condition $\xi < \theta + \sigma_{min}$ or $\xi < \theta - \sigma_{min}$.Step 9.
If one of the above conditions is satisfied, rotate the main direction axis of the node by $\theta + \sigma_{min}$ or $\theta - \sigma_{min}$ toward the target point $T_s$. Otherwise, mark the target that is not currently covered as $T_a$ and execute Step 10.Step 10.
Retain the remaining nodes, stop rotation, and calculate the number of nodes $N_2$.Step 11.
Introduce the resultant force $F_a$ between the idle neighbour node $S_I$ and $T_a$ to move the idle neighbour node $S_I$ to cover $T_a$.Step 12.
Calculate the total number of remaining nodes $N_{S_c}$ and the number of targets that are not covered, $N_{T_c}$.Step 13.
Use the resultant force $F_A$ to move the remaining nodes $S_c$ to $T_c$.Step 14.
Repeat Steps 4–13 for a set number of iterations until all nodes have moved to their optimal positions and the final coverage is complete. (A compact Python sketch of this rotate-first, move-second loop follows the pseudocode listing below.) In this paper, the 3D Voronoi method is first used to divide the space in which the nodes are located, allowing us to determine whether a target is located inside a Voronoi unit (though a target might not be contained in any unit). As the number of nodes increases, so does the density of the increasingly compact V-body units; therefore, with a large number of nodes and events, our method can divide the space for target detection more accurately. This paper aims to use the algorithm to improve the network coverage ratio and increase the average node residual energy. Our main goal is to find a better balance between node utilisation and remaining energy to extend the network lifetime. To achieve this, we design the node's coverage rotation mechanism, priority coverage mechanism, and movement mechanism. We first design the discriminant condition of the algorithm by combining the 3D Voronoi partitioning method with an optimised core adjustment mechanism. The pseudocode of 3D-DAOA is shown in Algorithm 1.Algorithm 1: Dynamic adjustment optimisation algorithm (3D-DAOA).
1 Input1: the total number $n$ of sensor nodes $s_i$ and the sensing radius $R_s$ of the nodes
2 Input2: $T_i$ // the set of targets in the area
3 // Randomly generate $n$ nodes $s_i$ in the area $L$ of size 100 m³
4 L = Polyhedron([0 0 0; 1 0 0; 1 1 0; 0 1 0; 0 0 1; 1 0 1; 1 1 1; 0 1 1] * 100)
5 s_i = gallery('uniformdata', [3, n], 0) * 100
6 Maxiter = 50 // set the maximum number of iterations
7 Max_Step = 0~10 // set the maximum moving step size of a node
8 θ_i = 2θ // set the initial angle of all directional nodes
9 P_i = Location(s_i) // get the location information s_i(x, y, z) of all nodes
10 [v_i, L] = Voronoi(P_i, R³) // divide the V-body units v_i ∈ {v_1, v_2, ⋯, v_n}
11 if d(s_i, T_j) ≤ R_s && φ ≤ θ
12 T_k = Size(N_Tk) && S_k = Size(N_Sk)
13 // calculate the number N_Tk of targets that have been covered
14 // calculate the number N_Sk of nodes that are covering targets
15 while i ≤ Num do
16 if d(s_i, T_j) ≤ R_s then
17 N_Tf = Size(T_f) && σ_i = Size(θ_min)
18 // calculate the number N_Tf of target points and the minimum angle σ_min
19 // calculate the number N_Ts of target points covered by the same node
20 else
21 select the free neighbour nodes N_Ts (N_Ts = N_Tf − N_Tk) to move to cover T_a
22 if ξ < θ + σ_min || ξ < θ − σ_min then
23 rotate the main direction axis w⃗ by θ ± σ_min
24 else
25 N_Ts = Size(T_s)
26 // calculate the number N_Ts of target points that are currently not covered
27 F_A = Σ_{j=1, j≠i}^{n} (F_a + F_ij + F_o) // calculate the total force F_A
28 Move(S_c → T_c)
29 end if
30 set the number of iterations and repeat lines 12-29 until coverage is complete
31 end if
32 end while
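For readers who prefer runnable code, the following condensed Python sketch mirrors the rotate-first, move-second control flow of Algorithm 1 (our drastically simplified illustration, not the authors' implementation: it omits the σ_min bookkeeping and the full virtual-force terms, moving a node a fixed step toward an out-of-range target instead):

```python
import numpy as np

def covered(node, axis, target, R_s, theta):
    """Coverage test: condition (i) distance and condition (ii) half-angle."""
    v = target - node
    d = np.linalg.norm(v)
    if d == 0 or d > R_s:
        return False
    phi = np.arccos(np.clip(np.dot(v, axis) / d, -1.0, 1.0))
    return phi <= theta

def dynamic_adjustment(nodes, axes, targets, R_s, theta, max_iter=50, step=1.0):
    """Rotate a node to cover a target when it is in range (cheap);
    move the nearest node toward the target only as a fallback."""
    for _ in range(max_iter):
        uncovered = [t for t in targets
                     if not any(covered(n, a, t, R_s, theta)
                                for n, a in zip(nodes, axes))]
        if not uncovered:
            break                                        # full coverage reached
        for t in uncovered:
            dists = [np.linalg.norm(n - t) for n in nodes]
            i = int(np.argmin(dists))
            if dists[i] <= R_s:
                axes[i] = (t - nodes[i]) / dists[i]      # rotation first
            else:
                nodes[i] = nodes[i] + step * (t - nodes[i]) / dists[i]  # then move
    return nodes, axes

rng = np.random.default_rng(1)
nodes = list(rng.uniform(0, 100, (10, 3)))               # assumed deployment
axes = [np.array([0.0, 0.0, 1.0]) for _ in nodes]
targets = list(rng.uniform(0, 100, (5, 3)))
dynamic_adjustment(nodes, axes, targets, R_s=40.0,
                   theta=np.arcsin(np.sqrt(6) / 3))
```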
## 6. Experiment Simulation and Discussion
### 6.1. Simulation Environment and Results
In this section, we use MATLAB (2015b) to perform simulation experiments to verify the performance of the proposed algorithm. Initially, we randomly deploy the sensor nodes into a 100 m³ cube to test the target points of the deployment. According to [35], when the node density is low, the optimal node distance to ensure network connectivity is $R_c = 2R_s$; when the number of nodes is large, the optimal distance for network connectivity is $R_c = 3R_s$. The simulation parameters are listed in Table 1.Table 1
Parameter settings.
| Name | Value |
|---|---|
| Simulation area size, $L$ | 100 m³ |
| Total number of targets, $N_{oi}$ | 25 |
| Number of nodes, $n$ | 60/100 |
| Sensing radius, $R_s$ | 10~60 m |
| Node communication radius, $R_c$ | $R_c = 2R_s$ |
| Initial residual energy, $E$ | 30 J |
| $r_{min}$ | $R_s \times 3\%{\sim}7\%$ |
| $\alpha$ | 0.5 |
| $\beta$ | 0.5 |
| Angle of view, $\theta$ | $10° \le \theta \le 90°$ |

We first deploy the nodes, as shown in Figure 7(a), where the blue cones represent the directional nodes. In the first set of experiments, shown in Figure 7, 60 directional sensor nodes were randomly deployed in a 100 m³ space. During the algorithm, the 3D Voronoi partitioning method is used to divide the space into 60 different V-body units using the number and positions of the nodes, such that each node $s_i$ is located in its respective unit $v_i$, as shown in Figure 7(b). In Figure 7(c), a red dot represents a target to be covered and a blue cone represents a node covering a target after the algorithm's movement adjustment. A black cone indicates a node changing its position toward a target that was not within coverage range. The simulation results show that when the position coordinates of the 25 targets are known, the number of targets covered by nodes is first calculated under the adjustment of the algorithm. When a target is not within any node's coverage, the algorithm selects some of the nodes to move.Figure 7
Simulation experiment diagrams: (a) initial node deployment; (b) experimental simulation process; (c) experimental simulation result.
### 6.2. Algorithm Analysis and Contrast Experiment
To further verify the accuracy of the experiment, we compared 3D-DAOA with the random algorithm (RA) and the improved VFA algorithm [36]. In Experiment 1, we set the number of nodes $N = 25$, the node's angle of view $AOV = 55°$, and the number of target points $T = 60$ to verify the relationship between the node's detection radius and the coverage ratio. As shown in Figure 8, as the detection radius increases, the coverage ratio of all three algorithms also increases. However, the coverage of the proposed algorithm is significantly higher than that of the other two algorithms. It can also be seen from Figure 8 that the proposed algorithm is the first to reach full coverage, at a sensing radius of 60 m, because it reasonably partitions the node positions from the beginning and achieves precise coverage through rotation or movement via the priority adjustment strategy. Therefore, the proposed algorithm can reduce coverage redundancy and greatly improve the coverage ratio of the overall network.Figure 8
Coverage ratio with increasing sensing radius.In Experiment 2, we verified the effect of changing the node's viewing angle on the coverage ratio, as shown in Figure 9, where we see that the coverage of the three algorithms increases as the viewing angle increases; however, this increase is smaller than that caused by increasing the detection radius, because different fields of view (FOV) of the same node affect the coverage ratio differently. A larger FOV yields a larger coverage range; that is, the probability of covering a target also increases. The advantage of the proposed algorithm is that it can better determine the current locations of nodes and targets, and it uses the priority coverage mechanism or idle nodes to achieve a higher coverage ratio.Figure 9
Coverage ratio with increasing field of view.Sensor nodes typically carry a power source with limited energy, and it is difficult to replenish this energy. Therefore, we need to use energy reasonably. In this experiment, the node's rotational and movement energy consumption make up a large portion of its total energy consumption. According to [13, 37], a single directional node rotating 180° consumes 1.52 J of energy, meaning a rotation of 1 degree consumes roughly 0.009 J, whereas each node consumes 3.6 J per 1 m of movement (a small energy-accounting sketch is given at the end of this subsection). In Experiment 3, we assume the number of nodes $N = 25$, the angle of view $\theta = 55°$, and an initial energy of 30 J per node, to verify the relationship between the average residual energy and the coverage ratio of the three algorithms. As shown in Figures 10 and 11, when the viewing angles of the nodes are the same, the average residual energy of each algorithm decreases as the angle increases, while the coverage ratio of the nodes increases with the angle. The improved VFA algorithm has the lowest average residual energy, because it does not dynamically adjust the coverage mechanism, which leads to too many mobile nodes. Therefore, the VFA algorithm has the largest average node energy consumption. 3D-DAOA reasonably reduces unnecessary energy consumption to achieve a better balance while ensuring a high coverage ratio.Figure 10
Index values of each algorithm when $\theta = 45°$.Figure 11
Index values of each algorithm when $\theta = 55°$.We now compare the residual energy of a single node in the three algorithms with the total coverage ratio when the angle takes different values, as shown in Table 2. From this, we conclude that the sum of the two index values for the proposed algorithm is greatest when the angle of view is 55°, because 3D-DAOA can appropriately balance network coverage and energy consumption. Furthermore, it comprehensively considers a variety of factors and indicators to achieve better detection results.Table 2
Average node residual energy and coverage ratio values for the three algorithms.
| Angle of view, θ | Algorithm | Residual energy | Coverage ratio (%) | Total value |
|---|---|---|---|---|
| 45° | Random algorithm | 29.71 | 34 | 63.71 |
| 45° | Improved VFA | 17.06 | 43 | 60.06 |
| 45° | 3D-DAOA | 24.51 | 52 | 76.51 |
| 55° | Random algorithm | 29.35 | 37 | 66.35 |
| 55° | Improved VFA | 14.31 | 48 | 62.31 |
| 55° | 3D-DAOA | 22.46 | 61 | 83.46 |
| 60° | Random algorithm | 29.17 | 39 | 68.17 |
| 60° | Improved VFA | 11.52 | 53 | 64.52 |
| 60° | 3D-DAOA | 20.33 | 63 | 83.33 |

Combining the data in Figures 10 and 11 with Table 2, we conclude that the nodes' residual energy under the random algorithm barely changes while its coverage rate is the lowest, because this algorithm does not rotate or move nodes based on target positions. Under the same evaluation conditions, the proposed algorithm has obvious advantages over the improved VFA algorithm: its priority coverage mechanism achieves accurate target coverage, its dynamic adjustment mechanism avoids invalid node movement, and its coverage strategy performs best when the angle of view is 55°. In Experiments 4 and 5, we verified the relationship between the number of nodes and the coverage ratio. In Experiment 4, we set $N = 25$, $\theta = 55°$, $T = 60$, and $R_s = 30\,\mathrm{m}$, as shown in Figure 12. We conclude that as the number of nodes increases, the overall coverage ratio of the three algorithms increases. Figure 12 also shows that when there are fewer nodes, the coverage ratio of all three algorithms is lower. The coverage of the random algorithm and the improved VFA algorithm is lower than that of 3D-DAOA, especially when the number of nodes exceeds 30.Figure 12
The effect of the number of nodes on the coverage ratio (when the number of targets is small).In Experiment 5, we set $T = 100$ while keeping the other parameters unchanged, as shown in Figure 13. When the number of target points is large, the proposed algorithm again has a higher coverage ratio. Therefore, under the same conditions, the proposed algorithm is more suitable for large-scale target detection, because the adjustment mechanism of 3D-DAOA enables nodes to cover targets accurately.Figure 13
The effect of the number of nodes on the coverage ratio (when the number of target points is large).
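As a small energy-accounting check for Experiment 3 (our sketch; the per-degree and per-metre costs come from the text above, while the rotation and movement amounts are assumed), the asymmetry between rotation and movement cost is what makes the rotate-first strategy of 3D-DAOA pay off:

```python
ROT_COST_PER_DEG = 1.52 / 180   # J per degree of rotation (~0.0084 J)
MOVE_COST_PER_M = 3.6           # J per metre of movement
E0 = 30.0                       # initial node energy (J), per Table 1

def residual_energy(rot_deg, move_m, e0=E0):
    """Energy left after rotating rot_deg degrees and moving move_m metres."""
    return e0 - rot_deg * ROT_COST_PER_DEG - move_m * MOVE_COST_PER_M

print(residual_energy(rot_deg=180, move_m=0))  # rotate a half-turn: ~28.48 J left
print(residual_energy(rot_deg=0, move_m=5))    # move 5 m: 12.0 J left
```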
## 7. Conclusions
In this paper, we studied target coverage in 3D DSNs. First, we improved the traditional 3D directional sensing model and proposed a spherical sector model that is more suitable for 3D directional sensor nodes. Next, we unified the coordinate system of the nodes and rotated them to achieve coverage using the spherical sector model. We then quantified the sensing model's viewing angle to provide an effective detection scheme for directional node coverage. We proposed a corresponding algorithm that combines node rotation and mobility to achieve priority coverage effectively, enabling our algorithm to reach a higher coverage ratio while reducing network energy consumption. Finally, we verified and compared 3D-DAOA with other algorithms to prove its reliability and accuracy. In future work, we will further study the algorithm's behaviour in real test environments and with mobile targets.
---
*Source: 1018434-2019-10-07.xml*
# An Overview of Molecular Mechanism, Clinicopathological Factors, and Treatment in NUT Carcinoma
**Authors:** Qian W. Huang; Li J. He; Shuang Zheng; Tao Liu; Bei N. Peng
**Journal:** BioMed Research International
(2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1018439
---
## Abstract
NUT carcinoma (NC) is a rare, poorly differentiated, highly aggressive, and fatal neoplasm. NC is characterized by chromosomal rearrangement involving the NUTM1 gene but lacks specific clinical and histomorphological features. It is more common in midline anatomic sites, such as the head and neck, mediastinum, and other midline organs. NC may occur at any age, but mainly in children and young adults; in addition, males and females are equally affected. Most clinicians lack a clear understanding of the disease, and NC diagnostic reagents are still not widely used; therefore, misdiagnosis often occurs in the clinic. Because of the highly aggressive nature of the disease and its insensitivity to nonspecific chemotherapy or radiotherapy, many patients die before NC is confirmed. In fact, the true incidence of NC is much higher than current statistics suggest. In recent years, targeted therapy for NC has also made some progress. This article aims to summarize the molecular mechanisms, clinicopathological characteristics, and treatment of NC.
---
## Body
## 1. Progress of NUT Carcinoma
NUTM1 (NUT midline carcinoma family member 1, aka NUT), a gene on chromosome 15, is normally expressed only in mature spermatogonia and has no known function [1]. NUT carcinoma (NC), a rare and poorly differentiated tumor, is characterized by chromosomal rearrangement involving the NUT gene, without any clinical or histomorphological features that distinguish it in clinical diagnosis [2]. In 1991, NC was first described in two cases characterized by the t(15;19) translocation [3, 4]. In 2003, scholars found that the t(15;19)(q13;p13.1) translocation caused the formation of a BRD4-NUT fusion oncogene [5]. In most of the early cases, NC arose from midline anatomic structures, such as the head, neck, and mediastinum [6, 7]. In 2004, NC was defined as midline carcinoma with NUT rearrangement, also called NUT midline carcinoma, caused by the NUT gene on chromosome 15 fusing to the BRD4 gene on chromosome 19 or to other partner genes, leading to the formation of the BRD4-NUT fusion oncogene or a NUT-variant fusion oncogene [8, 9]. However, more and more studies have found that NC arises not only in midline structures but also in the lung [10], pancreas [11], kidney [12], bladder [8], endometrium [8], salivary gland [13], bone [14], ovary [15], and other organs or soft tissues. Therefore, the WHO classification of tumors removed the word “midline” from the name of this type of tumor and redefined it as NUT carcinoma in 2015 [15].
## 2. Genetic Abnormality of NUT Carcinoma
Somatic cytogenetic abnormality is the basis of NC. Cytogenetic analysis shows that the oncogene of NC arises from the rearrangement of the NUTM1 gene with a set of partner genes, mainly the paralogous genes encoding bromodomain and extraterminal domain proteins (BET proteins), including BRD2, BRD3, BRD4, and BRDT [16–18]. In two-thirds of cases, the NUT gene is fused to BRD4, resulting in the BRD4-NUT fusion gene [19]. BRD3 [20] and NSD3 [21] are also relatively common fusion partners of NUT. Recently, accumulating studies have identified novel fusion partners, including ZNF532 [22], ZNF592 [23], MXD4 [24], BCORL1 [25], MXD1 [15, 25], CIC [26], MGA [27], and other, still unknown genes.
## 3. Pathogenic Mechanism
NC is a highly invasive tumor driven by NUT fusion oncoproteins. The normal NUT molecule, a member of the nuclear protein in testis family, has two acidic domains (AD), one of which binds the histone acetyltransferase (HAT) p300, resulting in histone acetylation [28]. The most common NUT fusion partners are members of the BET family, a special family of transcription/chromatin regulators including BRD2, BRD3, BRD4, and BRDT, each of whose protein molecules contains two bromodomains and an extraterminal (ET) domain [29]. BRD2, BRD3, and BRD4 are widely expressed across organs, while BRDT is limited to the testis [30]. As a key member of the BET family, BRD4 plays an important role in regulating transcription, cell growth, the cell cycle, and chromatin structure, and its dysregulation is associated with many tumors [31–36]. The BRD4 bromodomains can specifically recognize and bind acetylated lysine residues of histones and other proteins, and the ET domain can bind a series of chromatin-modifying proteins as a protein-protein interaction module [17]. The BRD4-NUT fusion oncoprotein retains the bromodomains and ET domain of BRD4 and nearly the complete coding sequence of NUT. In vitro cell studies showed that knockdown of the BRD3/4-NUT gene by siRNA in NC cell lines induced rapid squamous differentiation and arrested growth, suggesting that the BRD3/4-NUT fusion protein blocks differentiation and promotes proliferation of carcinoma cells [20]. Therefore, the mechanism of the BRD-NUT oncoprotein is to restrict cell differentiation and promote uncontrolled cell growth. The interaction of acetylated lysine residues with bromodomains is pivotal for the carcinogenic function of the BRD4-NUT fusion protein [37]. BRD4-NUT protein is contained in huge nuclear foci produced by combining BRD4-NUT with acetylated chromatin through acetylated lysine residues on histones [28]. Some scholars analyzed the nuclear foci of BRD4-NUT in NC cell lines, and the results showed that BRD4-NUT was highly enriched in adjacent regions of acetylated chromatin. In NC cell lines, the BRD4 bromodomains can bind histone acetylated lysine residues, which promotes the binding of BRD4-NUT to chromatin and produces foci of BRD4-NUT and acetylated chromatin. The NUT component of BRD4-NUT complexes can recruit p300, leading to high levels of local histone acetylation and producing further BRD4-NUT complexes in a feed-forward mechanism. Finally, this causes the formation of huge regions containing acetylated chromatin, BRD4-NUT, and EP300; these regions are termed “megadomains”, as whole topologically associating domains (TADs) can be filled with acetylated chromatin and BRD4-NUT oncoprotein [23, 38]. The resultant megadomains cover the regulatory regions of MYC and p63, both of which have been proved necessary for the growth of NC cell lines. After knockdown of MYC or p63 in NC cell lines, cell growth stopped, especially in the case of MYC knockdown, which also led to cell differentiation [39]. This indicated that MYC and p63 are key target genes of BRD4-NUT. Thus, BRD4-NUT might directly misregulate these two key genes, driving the occurrence of NC. The pathogenic mechanism of the NSD3-NUT [21] and ZNF532-NUT [22] fusion proteins is similar to that of BRD4-NUT. In addition, a recent study showed that BRD4 is hyperphosphorylated in NC and that CDK9 is the potential kinase mediating BRD4 hyperphosphorylation.
When BRD4 hyperphosphorylation was blocked with chemical and molecular inhibitors, the expression of BRD4 downstream oncogenes was inhibited and cell transformation was abrogated [38]. This suggests that BRD4 hyperphosphorylation is associated with its ability to drive the expression of downstream oncogenes and cellular transformation in NC.
## 4. Clinicopathological Features and Diagnosis of NUT Carcinoma
At present, the cellular origin of NC is still unclear. According to previous reports, NC might be derived from malignant epithelial tumors, while rare reports suggest it might originate from mesenchymal cells [27, 40]. Little is certain about the etiology of NC, which was found not to be associated with Epstein–Barr virus (EBV) or human papillomavirus (HPV) infection [41], and which also differs from some squamous cell carcinomas closely related to environmental factors. Although the number of diagnosed cases of NC has been increasing in recent years, its actual incidence remains unknown. NC lacks specific clinical manifestations and histomorphological features. It is usually found in midline anatomic sites, such as the head, neck, or thorax, but NC has also been diagnosed in other tissues or organs. It can occur at any age, ranging from newborns to 78 years, but mainly in children and young adults. In addition, males and females are equally affected [42, 43]. NC is a fatal disease with extremely poor prognosis, and most patients die within a year of diagnosis. In 2012, a retrospective study [18] of 63 NC patients revealed that the median age of the patients was 16 years (range 0–78 years). About 56% of all patients had tumors occurring in the thorax and 21% in the head and neck. The median overall survival (OS) of patients with NC was 6.7 months, and the one-year OS was 30%. A recent large cohort study (n = 119) [44] reported that the median age of NC patients was 23 years (range 0–68 years). The majority of tumors arose in the lung (35.3%), head and neck (35%), and mediastinum (26%). The median OS was only 5 months, and the one-year OS was 24.99%. Both studies revealed equal incidence in males and females. An earlier study showed that the average survival time of NC patients with NUT-variant fusions was almost fourfold that of patients with BRD4-NUT [8], and it was also reported that patients with NUT-variant fusions had no recurrence within about 34 months after surgery [45]. However, the two large cohort studies mentioned above both failed to show a significant difference in OS among translocation types (BRD4-NUT, BRD3-NUT, and NUT-variant) [18, 44]. Therefore, further cohort studies are required to investigate whether the prognosis of NUT-variant NC patients is better than that of BRD3/4-NUT patients, which may help identify molecular subtypes with unique prognostic features. The histopathology of NC is not usable for diagnosis due to the lack of typical morphological characteristics. The more common description is a poorly differentiated or undifferentiated carcinoma with abrupt and focal squamous differentiation, containing medium-sized round or oval cells with a high nuclear-to-cytoplasm ratio, variably prominent nucleoli, and pale cytoplasm. A thin rim and foci of abrupt keratinization are often present [46, 47]. NC has also been reported to display different appearances, including high-grade spindle cell neoplasms [48], small round blue cell sarcoma [24], and high-grade neuroepithelial neoplasm with PNET [27]. Because the histopathological features of NC overlap with those of other poorly differentiated or undifferentiated tumors, or appear similar to several commonly seen pathologies, misdiagnosis often occurs.
It is important to remind clinicians to reconsider some diagnosed cases of squamous cell carcinoma, germ cell tumors, neuroendocrine carcinoma, and small round blue cell sarcoma for NC differential diagnosis [46, 49]. NC was initially diagnosed using fluorescence in situ hybridization (FISH) and reverse-transcriptase polymerase chain reaction (RT-PCR), which directly detect the NUT gene rearrangement. In 2009, a specific monoclonal antibody against NUT (C52B1) was developed for the diagnosis of NC, with a specificity of 100% and a sensitivity of 87% [50]. If immunohistochemical (IHC) nuclear staining is observed in more than 50% of cells, the diagnosis of NC can be confirmed [51]. Although C52B1 can be used directly for the diagnosis of NC, mutation subtypes such as BRD4-NUT, BRD3-NUT, or NUT-variant cannot be identified. Thus, FISH, RT-PCR, and next-generation sequencing (NGS) are still necessary to determine the gene fused to NUT. With the development of targeted therapy, it is important to identify the specific genetic subtype of NC.
## 5. Therapy Strategies
There is no constantly effective treatment strategy for NC to date. It was reported that radiotherapy and surgical resection could prolong progression-free survival (PFS) and OS for NC patients, but chemotherapy had nothing to do with improved outcome [18]. Another study showed that chemotherapy and radiotherapy were associated with the higher survival rate, but not applicable to surgical resection [44]. A 10-year-old boy [14] with NC involving the iliac bone was initially diagnosed as Ewing sarcoma and received the SSG-IX protocol and local radiotherapy. This patient has remained in complete remission for 13 years. Similar therapy regimens have been used in NC patients before, but the outcome was not satisfactory. Therefore, the effect of surgery, chemotherapy, or radiotherapy on the prognosis of NC patients is still not clear because relevant data are obviously lacking. Although some NC patients have showed response to chemotherapy or radiotherapy, in most cases, the time of remission was short, and then the patients relapsed and died soon.Targeted therapy has become focus on the clinical research of NC therapy. Histone deacetylase inhibitors (HDACi) and BET inhibitors (BETi) are target drugs against NC which were firstly found. HDACi was proved to significantly inhibit tumor cell growth and to induce differentiation in NC cell lines and murine xenograft models of NC [52]. Based on research findings, a 10-year-old boy [45] with NC was treated with a single agent of HDACi vorinostat and showed significant response after five weeks of therapy. Due to severe (grade 3) nausea and emesis, this patient stopped to receive the treatment of vorinostat, and then the tumor grew rapidly. He died with an OS of 11 months. BETi is an acetyl-histone mimetic compound, which can bind to bromodomains and competitively inhibit the tether of BRD3/4 to acetylated chromatin, and directly target BRD3/4-NUT fusion protein. In 2010, a study [53] found that BETi JQ1 could induce differentiation and inhibit growth of NC cells in vivo and in vitro. Because of significant preclinical response of BETi, phase I/II clinical trials for the safety and efficacy of different BETis in NC patients are currently under way. The clinical efficacy of BETi in 4 patients with NC has been reported in 2016 [54]. Two of them showed a rapid response with tumor regression, and one maintained disease stabilization. The OS of 4 cases was 19, 18, 7, and 5 months, respectively. Compared with previously described NC patients with 6.7 or 5 months of median OS, it proved that HDACi and BETi could significantly prolong the survival of NC patients. Interestingly, Stirnweiss and his colleagues [55] found that the BETi was more sensitive in BRD4-NUT (ex11:ex2) variant NC cell lines than in BRD4-NUT (ex15:ex2) variant or non-NC cell lines. The BETi was also effective in the BRD3-NUT fusion cell line. The result suggested that different breakpoints or fusion subtypes in NC tumors might have different responses to BETi. This indicated that BETi had the possibility of ineffective treatment and reminded the researchers of the necessity to identify fusion gene for the decision of specific NC therapy with maximized effectiveness. However, the efficacy of HDACi and BETi is limited by drug toxicity such as the unwanted effect on normal cells, and BETi is also limited by the acquisition of resistance [56, 57]. 
These limitations have, to a large extent, made effective and precise treatment of NC more difficult. Recently, a novel dual HDAC/PI3K inhibitor (CUDC-907) was shown to produce the strongest effects on NC cells in vitro compared with HDACi or BETi [58, 59]. CUDC-907 downregulates MYC expression and inhibits the growth of MYC-driven malignant cells by targeting upstream regulators of MYC, such as BRD4-NUT and phosphoinositide 3-kinases (PI3K) [60]. Thus, CUDC-907 may be a promising targeted drug for NC therapy. In addition to HDACi, BETi, and dual HDAC/PI3K inhibitors, CDK9 inhibitors [61] and mTOR inhibitors [62] identified in drug screens have proved active against NC in vitro, markedly inhibiting the proliferation of NC cells.
## 6. Conclusion
NC is a rare and highly lethal carcinoma that lacks distinctive clinicopathological features. Because IHC, FISH, RT-PCR, and NGS are still not widely used for the diagnosis of NC and clinicians lack familiarity with the disease, NC is easily misdiagnosed. Early recognition of NC is crucial for selecting and establishing optimal treatment regimens. Substantial progress has been made in the development of NC therapy, especially targeted therapies, which show promise. Today, it is clear that NC is not confined to midline structures and can occur in any tissue or organ, at any age. The challenge now is not only to raise clinicians' awareness of NC but also to clarify the criteria for when to consider it. The diagnosis of a poorly differentiated or undifferentiated carcinoma should prompt clinicians to consider the possibility of NC, and small round cell sarcoma, neuroendocrine carcinoma, germ cell tumors, and Ewing sarcoma/PNET should also be taken into account when initiating the NC differential diagnosis. The previously hidden and now increasingly recognized occurrence of NC should drive clinicians and patients toward earlier detection, timely symptomatic treatment, and more advanced targeted anti-NC therapy.
---
*Source: 1018439-2019-11-11.xml*
# The Influence of MHC and Immunoglobulins A and E on Host Resistance to Gastrointestinal Nematodes in Sheep
**Authors:** C. Y. Lee; K. A. Munyard; K. Gregg; J. D. Wetherall; M. J. Stear; D. M. Groth
**Journal:** Journal of Parasitology Research
(2011)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2011/101848
---
## Abstract
Gastrointestinal nematode parasites in farmed animals are of particular importance due to their effects on production. In Australia, it is estimated that the direct and indirect effects of parasite infestation cost the animal production industries hundreds of millions of dollars each year. The main factors considered by immunologists when studying gastrointestinal nematode infections are the effects the host's response has on the parasite, which immunological components are responsible for these effects, the genetic factors involved in controlling immunological responses, and the interactions among these, which form an interconnecting multilevel relationship. In this paper, we describe the roles of immunoglobulins, in particular IgA and IgE, and the major histocompatibility complex in resistance to gastrointestinal parasites in sheep. We also draw evidence from other animal models to support the involvement of these immune components. Finally, we examine how IgA and IgE exert their influence and how methods may be developed to manage susceptible animals.
---
## Body
## 1. Introduction
Gastrointestinal worm infestation is one of the major causes of reduced productivity in domestic sheep in tropical and temperate regions of the world. In common with other parasitic infections, there is a complex interaction between the host’s innate and adaptive defence mechanisms and consequent adaptations by the parasite. An understanding of these interactions is essential for the development of sustainable strategies to minimise the impact of the parasite burden on the host. Analysis of the problem is made more difficult by the diversity of nematode species and strains that commonly infect sheep and the apparently variable manner in which sheep respond to these organisms.

Inherited factors play an important role in determining susceptibility to nematode infections. For example, over the past two decades, the Rylington Merino Project has selected sheep for resistance to nematodes on the basis of annual worm egg counts [1, 2]. Relative to a control flock, the selected flock now has sufficient inherited resistance to nematodes that anthelminthic chemicals are not required during the lambing season. Selective breeding has been successful in other research flocks [1, 3, 4] and on many commercial farms. Resistant animals can be identified by measuring faecal egg counts (FECs) over the first year of life. Selection for nematode resistance is widely practised in Australia and New Zealand but less common in the rest of the world.

In Australia and New Zealand, the correlations between FEC and growth rate have been weak [5–7]. In contrast, in Europe, the correlations are strong [8–10] but have been shown to change over time. The differences may reflect the breeds of sheep in the different regions, that is, Australian Merino, New Zealand Romney, Scottish Blackface, and Polish long wool sheep. Alternatively, the differences may be a consequence of the nematode community: in the two European FEC studies, egg counts were predominantly Teladorsagia circumcincta, but in the Australian and New Zealand studies, Haemonchus contortus or Trichostrongylus colubriformis made a much greater contribution to egg counts. The differences between Europe and Australasia could also reflect different husbandry conditions; European sheep generally reach sale weights at an earlier age. IgA and IgE responses have been associated with reduced egg counts, but IgE responses develop more slowly and are associated with pathology [11].

Many studies have implicated variation within the major histocompatibility complex (MHC) as a determinant of host resistance and/or sensitivity to gastrointestinal parasitism in several species [12]. In addition, mucosal humoral responses to parasites have been implicated in mechanisms that restrict parasite growth and mediate the expulsion of worms [13]. In this paper, the roles of the MHC and immunoglobulin synthesis, especially IgA and IgE, are discussed with particular emphasis on nematode infections in sheep.
## 2. Role of Adaptive Immunity in Gastrointestinal Parasitic Infestation
Parasitic gastroenteritis is caused by nematodes that include species from the genera Trichostrongylus, Teladorsagia, Haemonchus, Nematodirus, and Cooperia [14]. Infections usually arise from ingestion of parasite larvae or eggs from pasture, and it is well established that the presence of parasite antigens in the host’s gastrointestinal system triggers innate immune responses, in addition to humoral and cell-mediated adaptive responses, with recruitment of T cells along the gastrointestinal mucosa [15, 16]. During an initial infection, dendritic cells take up and process parasite molecules. The dendritic cells then migrate to the draining lymph nodes and activate T cells, although additional interactions between antigen-presenting cells and T cells may occur close to the site of uptake. In the small intestine, soluble antigens (metabolic or excretory-secretory components) are absorbed by specialised microfold cells in the follicle-associated epithelium overlying the Peyer’s patches, through either phagocytosis or pinocytosis [17]. Antigens are transported from the intestinal lumen to the subepithelial dome, where the antigen-presenting cells interact with T cells.

The importance of T lymphocytes, which regulate the host adaptive response against gastrointestinal parasites, has been demonstrated in several laboratory animal models, including Trichinella spiralis, Heligmosomoides bakeri, and Strongyloides stercoralis [12, 37, 38], and also in sheep infected with Haemonchus contortus [39]. However, it is also clear that adaptive immune responses to nematode parasites do not completely prevent subsequent infection, at least in most animals within a flock.

The three major manifestations of resistance to nematodes are reduced numbers of adult nematodes, decreased size of adult nematodes, and increased numbers of inhibited larvae, compared to susceptible contemporaries. However, not all resistant animals manifest all three primary indicators, and the three indicators do not develop at the same rate [40, 41]. Large worms tend to lay more eggs [42] and are generally more pathogenic [11]. Reduced egg counts, increased expulsion of parasites, altered growth rates in resistant hosts, and increased numbers of eosinophils, mast cells, plasma cells, and lymphocytes, as well as increased concentrations of antibody, are common secondary indicators in most nematode infections of sheep.

Much of the current knowledge concerning the mammalian immune response to parasites comes from studies on laboratory animals, particularly rodents. Experimental infections in rodents have provided valuable information for the analysis of immunological and genetic mechanisms that determine resistance to gastrointestinal nematode parasites [32, 43]. The demonstration that genetic factors influence resistance and susceptibility in mice allows the identification of genetic markers or genes that confer resistance [43]. Although the genes controlling resistance in different species are unlikely to be identical, many of the pathways are likely to be similar.
## 3. The Role of IgA in Nematode Resistance
In several host-parasite systems, parasite-specific IgA has been associated with resistance [44–48]. However, careful experimental design and interpretation are needed because IgA responses to nematode infection are correlated with IgE production, together with infiltration of eosinophils and mast cells and the subsequent degranulation of mast cells [49]. These mutual correlations could be a consequence of cytokines from Th2 cells, which recruit the relevant cells. Therefore, it is possible that increased IgA activity is merely a marker of an increased mucosal immune response. IgA is not complement fixing and has recently been implicated in anti-inflammatory mechanisms [50]. Evidence for an active role is discussed below.

In mice, the humoral immune response has been reported to exert a direct effector role against gastrointestinal nematode parasites. Immunity against murine Trichuris muris has been achieved through monoclonal IgA antibody infusion, which resulted in the expulsion of the parasites from the gastrointestinal tract [51]. The immune mechanism was thought to involve antibody binding directly to parasite excretion/secretion antigens [51].

Smith et al. [52] were the first to report a relationship between the IgA response and reduced worm length following infection with T. circumcincta. They examined the length of all nematodes, including larval stages, to identify inhibited larvae, and found an increase in lymphatic IgA and IgA-positive cells in the gastric lymph. Pooling data across age classes produced an extremely strong correlation between the increased IgA response and increased numbers of inhibited larvae. A large study in naturally infected sheep supported this finding by showing that lambs with higher peripheral IgA activity against fourth-stage larvae had a higher proportion of inhibited larvae [53].

More recent data have cast doubt on the role of IgA in nematode inhibition [54]. Sheep were trickle-infected and then challenged with 50,000 T. circumcincta. Parasite development ceased approximately five days after challenge, preceding the peak of IgA activity in the gastric lymph on day 9. The IgA response was apparently too slow to play a direct role in the inhibition of larval development. However, more research is necessary before firm conclusions can be drawn. The relationship between IgA levels in the gastric lymph and IgA levels at the site of infection in the abomasal mucosa is unknown. In addition, there is density-dependent inhibition of larval development [55]. The mechanism of density-dependent inhibition may differ from that of immune-mediated inhibition, and the inhibition observed in this experiment may not have been immune mediated.

In contrast to the uncertain relationship between IgA level and numbers of inhibited larvae, the parasite-specific IgA response is consistently correlated with a reduction in adult worm length in infected animals. In Scottish Blackface sheep matched for age, sex, breed, farm of origin, and parasite exposure history, Stear et al. [49] observed considerable variation in the number of IgA-positive plasma cells and the activity of parasite-specific IgA in the abomasal mucosa. There was a negative correlation between IgA and worm length, which was stronger for mucosal IgA than for serum IgA. The correlations were also stronger against fourth-stage larvae (L4) than against third-stage larvae (L3). Recently, Henderson and Stear [56] reported a correlation of 0.66 between mucosal IgA and plasma IgA levels.
The negative correlation observed between parasite-specific IgA levels and worm length was likely to have been a direct effect of IgA on the parasite, rather than a change in the quantity of antibody produced in response to changes in worm number [49]. Similar correlations have been observed in Santa Ines, Suffolk, and Ile de France lambs infected by H. contortus, Scottish Blackface lambs infected by H. contortus, and Churra lambs infected with T. circumcincta [57–59]. In addition, Scottish Blackface lambs naturally infected with T. circumcincta have shown a similar relationship [53, 60].

Stear et al. [49] estimated that approximately 38% of the variation in worm length could be accounted for by mucosal IgA activity directed against L4 worms, a value considerably less than the over 90% estimated by Smith et al. [52]. However, the high value reported by Smith et al. may have been an artefact created by pooling data from sheep of different ages. The proportion of variation in worm length due to L4-specific IgA activity has been independently estimated as ~38% in Churra sheep [59], with similar estimates reported by Sinski et al. [61], Strain and Stear [57], Strain et al. [60], Stear et al. [53], Amarante et al. [58], and Henderson and Stear [56].

In addition to the effects of IgA, two other factors influence the size of adult nematodes: IgA specificity and worm density dependence. Variance analysis in sheep deliberately infected with T. circumcincta [53] indicated that these three components accounted for most of the variation in adult female worm length. This conclusion is consistent with the hypothesis that, in this host-parasite system, IgA is the major host mechanism influencing parasite growth and fecundity. In Strongyloides ratti, the density-dependent response is abolished in immunosuppressed rats [62], which suggests that density dependence is mediated through the immune system in at least some host-parasite systems.

There are several methods by which IgA could influence nematode growth. Parasitic nematodes release a variety of proteases that partially predigest proteins and may also break down antibodies and other mediators of host resistance. Antibodies against these enzymes or other molecules could inhibit enzyme activity and feeding by the parasite [63–67]. This appears to be a mechanism underlying the success of vaccination with H-Gal-GP (a galactose-containing glycoprotein complex purified from intestinal membranes of adult H. contortus worms) [68, 69]. Alternatively, IgA could interact with eosinophils to control nematode growth and fecundity (see below).

There does not appear to be a consistent association between IgA activity and the number of adult T. circumcincta [49]. There is also no consistent association with the number of H. contortus [70–72]. The absence of a relationship suggests that IgA activity does not determine worm numbers.

Hertzberg et al. [73] trickle-infected White Alpine lambs with Ostertagia leptospicularis and showed a gradual increase in serum IgA levels during infection. As expected from other species, IgA has a short half-life, and IgA activity declined rapidly after anthelminthic treatment. When the lambs were subsequently challenged with 100,000 infective L3 parasites, the serum IgA level rose rapidly but decreased earlier than either IgG1 or IgG2.
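These "percentage of variation accounted for" estimates are simply squared correlation coefficients. A minimal sketch (the correlation values below are chosen only to reproduce the quoted figures and are not reported as such in the cited papers):

```python
# Variance explained by a single predictor in simple regression is r**2.
# An IgA-worm length correlation of about -0.62 corresponds to the ~38%
# figure quoted above (back-solved, illustrative value only).
r_iga_length = -0.62
print(f"variance explained: {r_iga_length**2:.2f}")  # 0.38

# The 0.66 mucosal-vs-plasma IgA correlation [56] likewise implies that
# plasma IgA captures ~44% of the variance in mucosal IgA.
r_mucosal_plasma = 0.66
print(f"shared variance: {r_mucosal_plasma**2:.2f}")  # 0.44
```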
## 4. IgA and Eosinophilia
Variation in the numbers of mast cells, globule leucocytes, eosinophils, and IgA plasma cells has been observed in sheep infected with nematodes [49, 58]. Globule leucocytes are derived from subepithelial mast cells [74, 75]. Stear et al. [49] found that sheep with more mast cells had higher abomasal concentrations of globule leucocytes, eosinophils, and IgA plasma cells, and more larval antigen-specific IgA antibody. Henderson and Stear [56] measured IgA levels and eosinophil numbers in Scottish Blackface lambs over a period of 60 days after challenge and observed that both variables had similar response kinetics: IgA and eosinophil activity peaked at 8–10 days after infection and declined subsequently. Stear et al. [49] measured eosinophil numbers at the end of the experiment, at necropsy, while Henderson and Stear [56] measured mucosal eosinophilia over a 60-day period. A similar study using Caribbean hair sheep and wool sheep [19] found that the hair breed had higher serum levels of IgA and IgE in uninfected sheep, and that there were significant differences in IgA, IgE, and tissue eosinophil levels between the two breeds, which were negatively correlated with worm counts. IgA levels and eosinophil numbers accounted for 38% and 40% of the variation in worm length, respectively. When the two variables were analysed together, the combination accounted for 53% of worm length variation. Therefore, IgA and eosinophilia appear to have a combined or synergistic effect on worm length [56]. Eosinophils have been shown to express receptors for IgA [76, 77], which can be activated by binding of parasite antigen/IgA complexes to the cell-surface IgA receptors [78]. Therefore, IgA could help target eosinophils to nematodes. Interestingly, eosinophils in mice lack receptors for IgA [76], and this could explain the relative ineffectiveness of eosinophils in some murine models [79, 80].
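The fact that the combined figure (53%) is well below the sum of the individual figures (38% + 40% = 78%) is what one expects when the two predictors are themselves correlated. A minimal sketch using the standard two-predictor R² formula; the inter-predictor correlation of 0.47 is back-solved to reproduce the published numbers, not a reported value:

```python
import math

def combined_r2(r1: float, r2: float, r12: float) -> float:
    """R^2 of a standardized two-predictor regression, given each
    predictor's correlation with the outcome (r1, r2) and the
    correlation between the predictors (r12)."""
    return (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)

r_iga = -math.sqrt(0.38)  # reproduces the 38% figure (illustrative)
r_eos = -math.sqrt(0.40)  # reproduces the 40% figure (illustrative)
print(f"{combined_r2(r_iga, r_eos, 0.47):.2f}")  # ~0.53, not 0.78
```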
## 5. The Role of IgE in Nematode Resistance
Increased numbers of mast cells are a hallmark of many nematode infections, and mast cells have been implicated in the control of worm numbers in some but not all infections. For example, mast cells appear crucial for the control of Trichinella spiralis but not of Trichuris muris or Nippostrongylus brasiliensis [81]. Sheep that are resistant to T. circumcincta have increased numbers of mast cells or globule leucocytes compared with more susceptible contemporaries [49]. Similarly, mast cells are important for resistance to H. contortus [82, 83].

As the binding of parasite molecules by cell-surface IgE is the major trigger for mast cell degranulation, IgE is implicated by default in resistance to nematode infection. An association between high plasma IgE activity against a high-molecular-weight allergen and low egg counts was reported in 20 lambs selected from a group of 72 naturally infected crossbred sheep [84]. A study using lymphatic cannulation to allow continuous assessment of immune cells migrating from the intestinal mucosa and mesenteric lymph nodes showed differential changes in the expression of IL-5 in the afferent intestinal lymph of two lines of sheep selected for susceptibility or resistance to T. colubriformis [85]. Furthermore, in a parallel study by the same group, the resistant line had higher IgE in lymph than the susceptible line [86]. Naturally infected Texel lambs with high IgE activity against recombinant tropomyosin from T. circumcincta also had lower egg counts than lambs with lower IgE responses [87]. An independent study from New Zealand likewise showed an association between increased IgE activity against an aspartyl protease inhibitor from T. colubriformis and reduced egg counts [88].
## 6. Genetic Factors in Gastrointestinal Parasite Immunity
Quantitative genetic analysis in sheep and cattle has clearly shown that resistance to nematode infection is under genetic control [2, 89–93]. The heritability of a single egg count varies among populations but is usually between 0.2 and 0.4 in animals that have been previously exposed to infection [94]. This is similar to the heritability of milk production in dairy cattle or growth rate in beef cattle and indicates the feasibility of selective breeding [95]. Quantitative trait loci (QTLs) for resistance to the intestinal nematode Heligmosomoides polygyrus were located on mouse chromosomes 1, 2, 8, 13, 17, and 19 by Iraqi et al. [32]. Interestingly, one chromosomal region identified by these researchers was the MHC, located on mouse chromosome 17. Their observations were confirmed independently by Behnke et al. [33], who found associations between eight immunological traits (FEC at weeks 2, 4, and 6; mucosal mast cell protease 1; granuloma score; IgG1 against L5; and IgG1 and IgE against L4) and QTLs on chromosomes 1 and 17 associated with resistance to H. polygyrus infection. More specifically, MHC genes, most notably the class II and TNF regions, were significantly associated with gastrointestinal parasite infection.

Davies et al. [29] provided evidence of QTLs located on sheep chromosomes 2, 3, 14, and 20 conferring resistance to infection with T. circumcincta in Scottish Blackface sheep. Analysis of chromosome 20 showed that the MHC region had a statistically significant association with gastrointestinal nematode resistance. QTLs associated with specific IgA activity against nematode parasites were also located on chromosomes 3 and 20. Alleles of DRB1 in the MHC class II region have been associated with nematode resistance in several different breeds of sheep [23–25, 96] and cattle [90, 97, 98]. In contrast, Beh et al. [99] found no significant linkage of the MHC to resistance to Trichostrongylus colubriformis in sheep. Unfortunately, their study used only a single marker to represent the MHC region and chromosome 20 in the whole-genome linkage analysis. Beh et al. [99] also applied an additional two markers in a single-point ANOVA and again found no linkage to the MHC region. In another linkage study, no significant QTL for resistance to parasitic nematode infection in sheep was found on chromosome 20 [100]. In that study, only four markers were used to represent chromosome 20, of which only two mapped to the MHC region [100]. Recently, a more extensive whole-genome QTL analysis for resistance to H. contortus showed, in one family, weak linkage between egg counts and the Ovar-DYA region in the MHC class IIb region [101], consistent with a previous report associating this region with resistance to T. circumcincta [26].
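The practical meaning of a 0.2–0.4 heritability can be read off the breeder's equation, R = h² × S (expected response equals heritability times the selection differential). A minimal sketch; the selection differential below is an assumed value for illustration, and in practice FEC is usually transformed before analysis:

```python
# Breeder's equation: expected one-generation response to selection.
def response_to_selection(h2: float, selection_differential: float) -> float:
    return h2 * selection_differential

# Suppose selected parents average 300 eggs/g below the flock mean
# (assumed figure). Response across the quoted heritability range:
for h2 in (0.2, 0.3, 0.4):
    print(h2, response_to_selection(h2, -300.0))  # -60, -90, -120 eggs/g
```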
## 7. The Influence of the MHC on Antibody Production
The role of the MHC in controlling IgA concentrations is supported by several human studies, especially of IgA deficiency and common variable immunodeficiency (CVID). One of the first studies to identify an association between IgA deficiency and the MHC region was by Wilton et al. [102], who linked MHC class III genes to IgA deficiency. An increased frequency of certain HLA haplotypes was observed in deficient patients [102, 103]. A number of studies have since focused on the HLA-A1-B8-DR3 haplotype to locate the IgA deficiency locus [104, 105]. An investigation of the HLA-DR3-extended haplotype showed that in the Sardinian population, where the prevalence of IgA deficiency is lower, the HLA-DR3-B18 haplotype is more common than the HLA-DR3-B8 haplotype, suggesting that the IgA deficiency susceptibility gene is located in the more common Northern European DR3-B8 haplotypes [106]. The investigation of features common to the different haplotypes was used to narrow the region associated with IgA deficiency, and thus far several studies have placed the susceptibility locus between the class III region [103, 105, 107, 108] and the class II region [109–111].

Polymorphisms in MSH5 have also been shown to be associated with CVID and IgA deficiency in a mouse model and through statistical analysis of human populations [112]. This gene, located within the MHC class III region, is involved in DNA mismatch repair as well as in resolving Holliday junctions that form between homologous DNA strands during meiosis [113, 114]. However, Guikema et al. [115] observed a large variety of splice variants of MSH5 mRNA (all of which are unlikely to be stable) and suggested that MSH5 is nonfunctional and therefore probably does not participate in Ig class switching. Recently, it has been shown that haplotypes of MSH5 are associated with IgA deficiency [116, 117] but are unlikely to be the causative mutations [117].
## 8. Mechanisms Underlying the MHC Association with Nematode Resistance
Genetic variation in the mouse MHC has long been associated with resistance to nematode infection [118] and with the specificity of antibody responses [119]. It has been reported that the helminth Nippostrongylus brasiliensis may suppress MHC class II molecule expression as an evasion mechanism [120]. Likewise, in sheep, the parasite T. colubriformis appears capable of downregulating several immune genes, particularly DRB1 and DRA, in afferent lymph migratory cells [121]. In a mouse model of Strongyloides venezuelensis infection, class II −/− animals were more susceptible to infection (based on increased FEC and delayed elimination of worms) than wild-type and class I −/− mice [31]. In addition, parasite-specific IgM, IgA, and IgG were significantly reduced in class II −/− mice. This study concluded that class II MHC expression was essential to induce a Th2 response against S. venezuelensis infection, whereas class I expression was not [31]. Interestingly, and somewhat contradicting the findings discussed above [121], mouse strains that lack I-E, a homologue of DRB1, in their MHC class II region are more resistant [122].

In a comparative study using bovine cDNA microarray analysis of duodenum tissue from an outbred population of resistant and susceptible lambs (which had been subjected to two natural challenges with a range of gastrointestinal parasites), increased expression was observed in a range of genes [18]. Upregulated genes included DQB1, DRA, and DQA1 from the MHC class II region [18]. This observation highlights key differences between resistant and susceptible animals in the early immune response to gastrointestinal nematodes. In a separate microarray study, differences were observed in the gene expression profiles of hair and wool sheep that had been infected with H. contortus [19]. Elevated expression of the MHC class II DM β-chain precursor gene was observed in lymph node tissue of the wool breed; however, no significant change in the expression of this or any other MHC-related gene was observed in abomasal tissue [19]. In another study, using transcriptional profiling of duodenum tissue samples from resistant and susceptible sheep [20], upregulation of the MHC class II genes Ovar DQA1, Ovar DQB1, and Ovar DRA was observed in resistant animals. Subsequent RT-PCR analysis of Ovar DQA1 showed an average 8.4-fold greater expression in resistant animals than in susceptible animals. Further analysis using GO terms highlighted the significant association between genes highly expressed in resistant animals and terms such as MHC class II activity and exogenous antigen processing and presentation [20]. Furthermore, the frequency of Ovar DQA1 haplotypes differed between animals from the resistant and susceptible selection lines, with an increase in Ovar DQA1*Null in susceptible animals from both Perendale and Romney sheep lines. In Perendale sheep, the frequencies of the Ovar DQA1*0101 and DQA1*0402 alleles were increased in resistant animals, and Ovar DQA1*0103 was increased in the susceptible line. However, these observations seemingly contradict earlier findings by the same group, in which neither increased expression of MHC class II genes nor any association with antigen presentation or processing was observed [123]. Interestingly, a significant increase in the expression of an MHC class I gene (an HLA-A orthologue) in resistant animals was also observed, indicating possible crosstalk between the different responses.
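Fold-change figures like the 8.4-fold Ovar DQA1 result are conventionally derived from qPCR cycle thresholds; whether [20] used exactly the 2^−ΔΔCt (Livak) method is an assumption here, and the Ct values below are invented to reproduce the reported magnitude:

```python
# Relative expression by the 2^(-ddCt) method. All Ct values are
# hypothetical and chosen only to yield ~8.4-fold, as quoted above.
def fold_change(ct_gene_res, ct_ref_res, ct_gene_sus, ct_ref_sus):
    d_ct_res = ct_gene_res - ct_ref_res  # target vs reference, resistant
    d_ct_sus = ct_gene_sus - ct_ref_sus  # target vs reference, susceptible
    return 2 ** -(d_ct_res - d_ct_sus)

print(f"{fold_change(22.0, 18.0, 25.07, 18.0):.1f}")  # ~8.4-fold
```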
Recently, Forrest and colleagues [21], in contrast, found no evidence of an across-breed effect of the Ovar-DQA1*Null allele on total faecal egg counts. However, Ovar-DQA1*Null did have a significant effect when the analysis was performed within breeds [21].

In a statistical examination of the relationship between MHC polymorphism and parasitological traits in Scottish Blackface sheep, the resistance-associated allele G2 at the DRB1 locus was significantly associated with decreased egg counts and decreased numbers of adult T. circumcincta [96]. However, no apparent correlation was observed with adult female parasite length. Hence, the mechanism by which the MHC influences egg counts may operate through the control of worm number rather than through nematode fecundity. There are several possible mechanisms; for example, specific class II molecules may direct responses to specific peptides, and these responses may play a direct role in protection.

Another possibility is that the observed associations in livestock are a consequence of heterozygote advantage [96]. Heterozygote advantage has complex effects on the power of statistical analyses to detect specific allele effects [27]. As the frequency of an allele increases in a population, an increasing proportion of sheep will be homozygous, so the average effect of the allele will decline. An allele that is very rare in a population will be present in too few animals to show a significant effect; conversely, when an allele is very common, its average effect is small, making its contribution to reduced egg counts difficult to detect. Consequently, only alleles within a narrow frequency range will show detectable effects on parasite resistance (a numerical sketch of this detection window follows Table 1). Interestingly, the allele most strongly associated with resistance in Scottish Blackface sheep fell within this narrow window, and the most common allele was associated with the most susceptible animals, as predicted by heterozygote advantage. There was also more direct evidence: heterozygous sheep had lower egg counts following natural T. circumcincta infection [96].

Heterozygote advantage is a particularly appealing mechanism for explaining the IgE response to parasites. The specificity of IgE responses is relatively unimportant for mast cell degranulation if the target molecule is soluble and large enough to promote cross-linking of IgE receptors. Therefore, a heterozygote advantage that leads to increased IgE concentrations is better supported than a model of determinant selection (i.e., a direct role of the allele in determining levels of IgE).

Charbonnel and Pemberton [124] examined both MHC and neutral loci in free-living Soay sheep infected by T. circumcincta on St Kilda (Scotland). Over eight years, lower levels of temporal genetic differentiation were observed at MHC loci than at neutral loci, consistent with balancing selection acting at the MHC [124]. These observations confirmed earlier work by Paterson [125] but have not been supported by subsequent research [126]. Significant studies showing positive associations between genes within the MHC and gastrointestinal parasites are summarised in Table 1.
Table 1: Summary of studies that have implicated the MHC in resistance to gastrointestinal parasites.

| Species | Parasite species | Method | MHC association | Reference |
| --- | --- | --- | --- | --- |
| Sheep (Ovis aries) | mixed | Microarray | DQB1, DRA, DQA1 | [18] |
| Sheep (Ovis aries) | H. contortus | Microarray | DMB | [19] |
| Sheep (Ovis aries) | mixed | Microarray | DQA1*Null, DQB1, DRA | [20] |
| Sheep (Ovis aries) | mixed | PCR analysis | DQA1*Null | [21] |
| Sheep (Ovis aries) | mixed | PCR/sequencing | DQA1*0101, DQA1*0402 | [20] |
| Sheep (Ovis aries) | mixed | PCR/sequencing | DRB1 | [22, 23] |
| Sheep (Ovis aries) | mixed | PCR | DRB microsatellite | [23] |
| Sheep (Ovis aries) | Teladorsagia circumcincta | PCR/sequencing | DRB1 | [24–27] |
| Sheep (Ovis aries) | H. contortus | PCR/sequencing | DRB1, OMHC1 | [28] |
| Sheep (Ovis aries) | Teladorsagia circumcincta | Linkage | Class IIb region | [29] |
| Sheep (Ovis canadensis) | n/a | Population analysis, PCR/sequencing | DRB1 | [30] |
| Mouse (Mus musculus) | S. venezuelensis | Knockout | Class II | [31] |
| Mouse (Mus musculus) | H. polygyrus | Linkage | Class II region | [32, 33] |
| Striped mouse (Rhabdomys pumilio) | mixed | PCR/sequencing | DRB | [34] |
| Yellow-necked mouse (Apodemus flavicollis) | mixed | PCR/sequencing | DRB | [35] |
| Gray mouse lemur (Microcebus murinus) | mixed | PCR/sequencing | DRB | [36] |
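As noted above, here is a minimal numerical sketch of the heterozygote-advantage detection window. Under pure heterozygote advantage, the average effect of an allele substitution is α = d(1 − 2p) (Falconer's formula with no additive difference between the homozygotes), so allele-based association tests see little signal both for rare alleles (too few carriers) and for alleles near 0.5 frequency; the values are illustrative only:

```python
# Genotype values under pure heterozygote advantage: AA = 0, Aa = d, aa = 0.
# Average effect of substituting allele A: alpha = d * (1 - 2p), which an
# allele-based association test must detect. d is in arbitrary units.
d = 1.0
for p in (0.05, 0.20, 0.35, 0.50, 0.80):
    print(f"p = {p:.2f}  alpha = {d * (1 - 2 * p):+.2f}")
```

At p = 0.5 the average effect vanishes even though the heterozygote benefit is real, while at very low p too few carriers exist for significance, which is exactly the narrow window described in the text.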
## 9. Conclusions
There is no single mechanism of nematode resistance in sheep. Resistance to gastrointestinal nematodes involves the control of worm growth as well as worm numbers. The negative correlation between parasite-specific IgA levels and worm length has been well established by many research groups in different breeds of sheep infected with different gastrointestinal parasites. The control of worm numbers involves mast cells in some but not all host-nematode systems. There is a genetic component to nematode resistance, and the MHC is one of its most important elements. QTL analyses have shown a link between the MHC region and FEC in mouse models as well as in sheep and cattle. The influence of the class II region on parasite resistance has been shown in experimental models as well as by microarray analysis.

Despite the large number of studies confirming these relationships, other studies have reported contradictory results. However, correlation studies may generate a complex heterogeneity of results because of the large variety of gastrointestinal nematode parasites and differences in environmental conditions, nutritional status of animals, and geographical locations. Another complication is that the relationships among gene expression from the MHC region, IgA activity, and their effects on parasites are often considered individually rather than as interconnecting multilevel interactions.
---
*Source: 101848-2011-04-12.xml* | 101848-2011-04-12_101848-2011-04-12.md | 32,649 | The Influence of MHC and Immunoglobulins A and E on Host Resistance to Gastrointestinal Nematodes in Sheep | C. Y. Lee; K. A. Munyard; K. Gregg; J. D. Wetherall; M. J. Stear; D. M. Groth | Journal of Parasitology Research
(2011) | Biological Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2011/101848 | 101848-2011-04-12.xml | ---
## Abstract
Gastrointestinal nematode parasites in farmed animals are of particular importance due to their effects on production. In Australia, it is estimated that the direct and indirect effects of parasite infestation cost the animal production industries hundreds of millions of dollars each year. The main factors considered by immunologists when studying gastrointestinal nematode infections are the effects the host's response has on the parasite, which immunological components are responsible for these effects, genetic factors involved in controlling immunological responses, and the interactions between these forming an interconnecting multilevel relationship. In this paper, we describe the roles of immunoglobulins, in particular IgA and IgE, and the major histocompatibility complex in resistance to gastrointestinal parasites in sheep. We also draw evidence from other animal models to support the involvement of these immune components. Finally, we examine how IgA and IgE exert their influence and how methods may be developed to manage susceptible animals.
---
## Body
## 1. Introduction
Gastrointestinal worm infestation is one of the major causes of reduced productivity in domestic sheep in tropical and temperate regions of the world. In common with other parasitic infections, there is a complex interaction between the host’s innate and adaptive defence mechanisms and consequent adaptations by the parasite. An understanding of these interactions is essential for the development of sustainable strategies to minimise the impact of the parasite burden on the host. Analysis of the problem is made more difficult by the diversity of nematode species and strains that commonly infect sheep and the apparently variable manner in which sheep respond to these organisms.Inherited factors play an important role in determining susceptibility to nematode infections. For example, over the past two decades, the Rylington Merino Project has selected sheep for resistance to nematodes on the basis of annual worm egg counts [1, 2]. Relative to a control flock, the selected flock now has sufficient inherited resistance to nematodes that anthelminthic chemicals are not required during the lambing season. Selective breeding has been successful in other research flocks [1, 3, 4] and many commercial farms. Resistant animals can be identified by measuring faecal egg counts (FECs) over the first year of life. Selection for nematode resistance is widely practised in Australia and New Zealand but less common in the rest of the world.In Australia and New Zealand, the correlations between FEC and growth rate have been weak [5–7]. In contrast, in Europe, the correlations are strong [8–10] but have been shown to change over time. The differences may reflect the breed of sheep in the different regions, that is, Australian Merino, New Zealand Romney, Scottish Blackface, and Polish long wool sheep. Alternatively, the differences may be a consequence of the nematode community. In the two European FEC studies, egg counts were predominantly Teladorsagia circumcinctabut in the Australian and New Zealand studies, Haemonchus contortusor Trichostrongylus colubriformis made a much greater contribution to egg counts. Alternatively, the differences between Europe and Australasia could reflect the different husbandry conditions; European sheep generally reach sale weights at an earlier age. IgA and IgE responses have been associated with reduced egg counts, but IgE responses have been shown to develop more slowly and are associated with pathology [11].Many studies have implicated variation within the major histocompatibility complex (MHC) as a determinant of host resistance and/or sensitivity to gastrointestinal parasitism in several species [12]. In addition, mucosal humoral responses to parasites have been implicated in mechanisms that restrict parasite growth and mediate the expulsion of worms [13]. In this paper, the roles of the MHC and immunoglobulin synthesis, especially IgA and IgE, are discussed with particular emphasis on nematode infections in sheep.
## 2. Role of Adaptive Immunity in Gastrointestinal Parasitic Infestation
Parasitic gastroenteritis is caused by nematodes that include species from the generaTrichostrongylus, Teladorsagia, Haemonchus, Nematodirusi,andCooperia [14]. Infections usually arise from ingestion of parasite larvae or eggs from pasture, and it is well established that the presence of parasite antigens in the host’s gastrointestinal system triggers innate immune responses, in addition to humoral and cell-mediated adaptive responses, with recruitment of T cells along the gastrointestinal mucosa [15, 16]. During an initial infection, dendritic cells take up and process parasite molecules. The dendritic cells then migrate to the draining lymph nodes and activate T cells, although additional interactions between antigen presenting cells and T cells may occur close to the site of uptake. In the small intestine, soluble antigens (metabolic or excretory-secretory components) are absorbed by specialised microfold cells in the follicle-associated epithelium overlying the Peyer’s patches either through phagocytosis or pinocytosis [17]. Antigens are transported from the intestinal lumen to the subepithelial dome, where the antigen-presenting cells interact with T cells.The importance of T lymphocytes, which regulate the host adaptive response against gastrointestinal parasites, has been demonstrated in several laboratory animal models, includingTrichinella spiralis, Heligmosomoides bakeri, and Strongyloides stercoralis [12, 37, 38] and also in sheep infected with Haemonchus contortus [39]. However, it is also clear that adaptive immune responses to nematode parasites do not completely prevent subsequent infection, at least in most animals within a flock.The three major manifestations of resistance to nematodes are reduced numbers of adult nematodes, decreased size of adult nematodes, and increased numbers of inhibited larvae, compared to susceptible contemporaries. However, not all resistant animals manifest all the three primary indicators, and the three indicators do not develop at the same rate [40, 41]. Large worms tend to lay more eggs [42] and are generally more pathogenic [11]. Reduced egg counts, increased expulsion of parasites, altered growth rates in resistant hosts, increased numbers of eosinophils, mast cells, plasma cells, and lymphocytes as well as increased concentrations of antibody are common secondary indicators in most nematode infections of sheep.Much of the current knowledge concerning the mammalian immune response to parasites comes from studies on laboratory animals, particularly rodents. Experimental infections in rodents have provided valuable information for the analysis of immunological and genetic mechanisms that determine resistance to gastrointestinal nematode parasites [32, 43]. The demonstration that genetic factors influence resistance and susceptibility in mice allows the identification of genetic markers or genes that confer resistance [43]. Although the genes controlling resistance in different species are unlikely to be identical, many of the pathways are likely to be similar.
## 3. The Role of IgA in Nematode Resistance
In several host-parasite systems, parasite-specific IgA has been associated with resistance [44–48]. However, careful experimental design and interpretation are needed because IgA responses to nematode infection are correlated with IgE production, together with infiltration of eosinophils and mast cells and the subsequent degranulation of mast cells [49]. The mutual correlations could be a consequence of cytokines from Th2 cells, which recruit the relevant cells. Therefore, it is possible that increased IgA activity is merely a marker of an increased mucosal immune response. IgA does not fix complement and has recently been implicated in anti-inflammatory mechanisms [50]. Evidence for an active role is discussed below.

In mice, the humoral immune response has been reported to exert a direct effector role against gastrointestinal nematode parasites. Immunity against murine Trichuris muris has been achieved through monoclonal IgA antibody infusion, which resulted in the expulsion of the parasites from the gastrointestinal tract [51]. The immune mechanism was thought to involve antibody binding directly to parasite excretion/secretion antigens [51].

Smith et al. [52] were the first to report a relationship between the IgA response and reduced worm length following infection with T. circumcincta. They examined the length of all nematodes, including larval stages, to identify inhibited larvae. They found an increase in lymphatic IgA and IgA-positive cells in the gastric lymph. Pooling data across age classes produced an extremely strong correlation between the increased IgA response and increased numbers of inhibited larvae. A large study in naturally infected sheep supported this finding by showing that lambs with higher peripheral IgA activity against fourth-stage larvae showed inhibition of a higher proportion of larvae [53].

More recent data have cast doubt on the role of IgA in nematode inhibition [54]. Sheep were trickle-infected and then challenged with 50,000 T. circumcincta. Parasite development ceased approximately five days after challenge and preceded the peak of IgA activity in the gastric lymph on day 9. The IgA response was apparently too slow to play a direct role in the inhibition of larval development. However, more research is necessary before firm conclusions can be drawn. The relationship between IgA levels in the gastric lymph and IgA levels at the site of infection in the abomasal mucosa is unknown. In addition, there is density-dependent inhibition of larval development [55]. The mechanism of density-dependent inhibition may differ from that of immune-mediated inhibition, and the inhibition observed in this experiment may not have been immune mediated.

In contrast to the uncertain relationship between IgA level and numbers of inhibited larvae, the parasite-specific IgA response is consistently correlated with a reduction in adult worm length in infected animals. In Scottish Blackface sheep matched for age, sex, breed, farm of origin, and parasite exposure history, Stear et al. [49] observed considerable variation in the number of IgA-positive plasma cells and the activity of parasite-specific IgA in the abomasal mucosa. There was a negative correlation between IgA and worm length, which was stronger for mucosal IgA than for serum IgA. The correlations were also stronger against fourth-stage larvae (L4) than against third-stage larvae (L3). Recently, Henderson and Stear [56] reported a correlation of 0.66 between mucosal and plasma IgA levels.
The negative correlation observed between parasite-specific IgA levels and worm length was likely to have been a direct effect of IgA on the parasite, rather than a change in the quantity of antibody produced in response to changes in worm number [49]. Similar correlations have been observed in Santa Ines, Suffolk, and Ile de France lambs infected by H. contortus, Scottish Blackface lambs infected by H. contortus, and Churra lambs infected with T. circumcincta [57–59]. In addition, Scottish Blackface lambs that were naturally infected with T. circumcincta have shown a similar relationship [53, 60].

Stear et al. [49] estimated that approximately 38% of the variation in nematode worm length could be accounted for by mucosal IgA activity directed against L4 worms, a value considerably less than the over 90% estimated by Smith et al. [52]. However, the high value reported by Smith et al. may have been an artefact created by pooling data from sheep of different ages. The proportion of variation in worm length attributable to L4 parasite-specific IgA activity has been independently estimated as ~38% in Churra sheep [59], with similar estimates reported by Sinski et al. [61], Strain and Stear [57], Strain et al. [60], Stear et al. [53], Amarante et al. [58], and Henderson and Stear [56].

In addition to the effects of IgA, two other factors influence the size of adult nematodes: IgA specificity and worm density dependence. Variance analysis in sheep deliberately infected with T. circumcincta [53] indicated that these three components accounted for most of the variation in adult female worm length. This conclusion is consistent with the hypothesis that, in this host-parasite system, IgA is the major host mechanism influencing parasite growth and fecundity. In Strongyloides ratti, the density-dependent response is abolished in immunosuppressed rats [62], which suggests that density dependence is mediated through the immune system in at least some host-parasite systems.

There are several mechanisms by which IgA could influence nematode growth. Parasitic nematodes release a variety of proteases that partially predigest proteins and may also break down antibodies and other mediators of host resistance. Antibodies against these enzymes or other molecules could inhibit enzyme activity and feeding by the parasite [63–67]. This appears to be a mechanism underlying the success of vaccination with H-Gal-GP (a galactose-containing glycoprotein complex purified from intestinal membranes of adult H. contortus worms) [68, 69]. Alternatively, IgA could interact with eosinophils to control nematode growth and fecundity (see below).

There does not appear to be a consistent association between IgA activity and the number of adult T. circumcincta [49]. There is also no consistent association with the number of H. contortus [70–72]. The absence of a relationship suggests that IgA activity does not determine worm numbers.

Hertzberg et al. [73] trickle-infected White Alpine lambs with Ostertagia leptospicularis and showed that there was a gradual increase in serum IgA levels during infection. As expected from other species, IgA has a short half-life, and IgA activity declined rapidly after anthelminthic treatment. When the lambs were subsequently challenged with 100,000 infective L3 parasites, the serum IgA level rose rapidly but declined earlier than either IgG1 or IgG2.
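As a side note on the statistics, "percentage of variation accounted for" is the squared correlation coefficient: a correlation of about −0.62 between IgA activity and worm length corresponds to roughly 38% of the variance. A minimal sketch with simulated, hypothetical data (not the published measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical standardized IgA activity with a negative effect on worm
# length, scaled so the true correlation is about -0.62 (r^2 ~ 0.38).
iga = rng.normal(size=n)
worm_length = 10.0 - 0.62 * iga + np.sqrt(1 - 0.62**2) * rng.normal(size=n)

r = np.corrcoef(iga, worm_length)[0, 1]
print(f"correlation r = {r:.2f}")               # approximately -0.62
print(f"variance explained r^2 = {r**2:.2f}")   # approximately 0.38
```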
## 4. IgA and Eosinophilia
Variation in the numbers of mast cells, globule leucocytes, eosinophils, and IgA plasma cells has been observed in sheep infected with nematodes [49, 58]. Globule leucocytes are derived from subepithelial mast cells [74, 75]. Stear et al. [49] found that sheep with more mast cells had higher abomasal concentrations of globule leucocytes, eosinophils, and IgA plasma cells, and more larval antigen-specific IgA antibody. Henderson and Stear [56] measured IgA levels and eosinophil numbers in Scottish Blackface lambs over a period of 60 days after challenge and observed that both variables had similar response kinetics: IgA and eosinophil activity peaked at 8–10 days after infection and declined subsequently. Stear et al. [49] measured eosinophil numbers at the end of the experiment, at necropsy, whereas Henderson and Stear [56] measured mucosal eosinophilia over a 60-day period. A similar study using Caribbean hair sheep and wool sheep [19] found that the hair breed had higher serum levels of IgA and IgE in uninfected sheep and that there were significant differences in IgA, IgE, and tissue eosinophil levels between the two breeds, which were negatively correlated with worm counts. IgA levels accounted for 38% of the variation in worm length and eosinophil numbers for 40%. In analyses that considered the two variables together, the combination accounted for 53% of worm length variation. Therefore, it appears that IgA and eosinophilia have a combined or synergistic effect on worm length [56]. Eosinophils have been shown to express receptors for IgA [76, 77] and can be activated by binding of parasite antigen/IgA complexes to these cell-surface receptors [78]. Therefore, IgA could help target eosinophils to nematodes. Interestingly, eosinophils in mice lack receptors for IgA [76], which could explain the relative ineffectiveness of eosinophils in some murine models [79, 80].
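The reported pattern — roughly 38% of worm length variation explained by IgA, 40% by eosinophils, but only 53% by both together — is what multiple regression with correlated predictors produces: because IgA activity and eosinophil counts are themselves correlated, the joint R² falls well short of the sum of the individual values. A sketch under assumed effect sizes (all numbers hypothetical, chosen only to mimic the reported pattern):

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(1)
n = 500

# Two correlated standardized predictors (hypothetical IgA and eosinophil
# scores); coefficients chosen so each alone explains ~39% of the variance
# in worm length and both together ~53%.
iga = rng.normal(size=n)
eos = 0.47 * iga + 0.88 * rng.normal(size=n)
length = -0.43 * iga - 0.43 * eos + 0.69 * rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print(f"IgA alone:         R^2 = {r_squared(iga, length):.2f}")
print(f"eosinophils alone: R^2 = {r_squared(eos, length):.2f}")
print(f"both together:     R^2 = {r_squared(np.column_stack([iga, eos]), length):.2f}")
```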
## 5. The Role of IgE in Nematode Resistance
Increased numbers of mast cells are a hallmark of many nematode infections, and mast cells have been implicated in the control of worm numbers in some but not all infections. For example, mast cells appear crucial for the control of Trichinella spiralis but not of Trichuris muris or Nippostrongylus brasiliensis [81]. Sheep that are resistant to T. circumcincta have increased numbers of mast cells or globule leucocytes compared to more susceptible contemporaries [49]. Similarly, mast cells are important for resistance to H. contortus [82, 83].

As binding of parasite molecules by cell-surface IgE is the major trigger for mast cell degranulation, IgE is implicated by default in resistance to nematode infection. An association between high plasma IgE activity against a high-molecular-weight allergen and low egg counts was reported in 20 lambs selected from a group of 72 naturally infected crossbred sheep [84]. A study using lymphatic cannulation to allow continuous assessment of the immune cells migrating from the intestinal mucosa and mesenteric lymph nodes showed differential changes in the expression of IL-5 in the afferent intestinal lymph in two lines of sheep selected for susceptibility or resistance to T. colubriformis [85]. Furthermore, in a parallel study by the same group, the resistant line had higher IgE in lymph than the susceptible line [86]. Naturally infected Texel lambs with high IgE activity against recombinant tropomyosin from T. circumcincta also had lower egg counts than lambs with lower IgE responses [87]. An independent study from New Zealand also showed an association between increased IgE activity against an aspartyl protease inhibitor from T. colubriformis and reduced egg counts [88].
## 6. Genetic Factors in Gastrointestinal Parasite Immunity
Quantitative genetic analysis in sheep and cattle has clearly shown that resistance to nematode infection is under genetic control [2, 89–93]. The heritability of a single egg count varies among populations but is usually between 0.2 and 0.4 in animals that have been previously exposed to infection [94]. This is similar to the heritability of milk production in dairy cattle or growth rate in beef cattle and indicates the feasibility of selective breeding [95]. Quantitative trait loci (QTLs) for resistance to the intestinal nematode Heligmosomoides polygyrus were located on mouse chromosomes 1, 2, 8, 13, 17, and 19 by Iraqi et al. [32]. Interestingly, one chromosomal region identified by these researchers was the MHC, located on mouse chromosome 17. Their observations were confirmed independently by Behnke et al. [33], who found associations between eight immunological traits (FEC at weeks 2, 4, and 6; mucosal mast cell protease 1; granuloma score; IgG1 against L5; and IgG1 and IgE against L4) and QTLs on chromosomes 1 and 17 associated with resistance to H. polygyrus infection. More specifically, MHC genes, most notably those in the class II and TNF regions, were significantly associated with gastrointestinal parasite infection.

Davies et al. [29] provided evidence of QTLs located on sheep chromosomes 2, 3, 14, and 20 conferring resistance to infection with T. circumcincta in Scottish Blackface sheep. Analysis of chromosome 20 showed that the MHC region had a statistically significant association with gastrointestinal nematode resistance. QTLs associated with specific IgA activity against nematode parasites were also located on chromosomes 3 and 20. Alleles of DRB1 in the MHC class II region have been associated with nematode resistance in several different breeds of sheep [23–25, 96] and in cattle [90, 97, 98]. In contrast, Beh et al. [99] found no significant linkage of the MHC with resistance to Trichostrongylus colubriformis in sheep. Unfortunately, their study used only a single marker to represent the MHC region and chromosome 20 in the whole-genome linkage analysis. Beh et al. [99] also applied an additional two markers in a single-point ANOVA and confirmed the absence of linkage to the MHC region. In another linkage study, no significant QTL for resistance to parasitic nematode infection in sheep was found on chromosome 20 [100]. In this study, only four markers were used to represent chromosome 20, of which only two mapped to the MHC region [100]. Recently, a more extensive whole-genome QTL analysis for resistance to H. contortus showed, in one family, weak linkage between egg counts and the Ovar-DYA region in the MHC class IIb region [101], consistent with a previous report that associated this region with resistance to T. circumcincta [26].
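A heritability of 0.2–0.4 for faecal egg count means that 20–40% of the phenotypic variance is attributable to additive genetic effects. One classical way to estimate heritability is the regression of offspring phenotype on mid-parent phenotype, whose slope estimates h² directly. A minimal simulation sketch (the heritability and family structure below are assumptions for illustration, not data from the studies cited):

```python
import numpy as np

rng = np.random.default_rng(2)
h2_true = 0.3     # assumed heritability of (log-transformed) FEC
n_fam = 1000      # hypothetical sire-dam-offspring trios

# Standardized phenotypes P = A + E with Var(A) = h2, Var(E) = 1 - h2.
a_sire = rng.normal(scale=np.sqrt(h2_true), size=n_fam)
a_dam = rng.normal(scale=np.sqrt(h2_true), size=n_fam)
p_sire = a_sire + rng.normal(scale=np.sqrt(1 - h2_true), size=n_fam)
p_dam = a_dam + rng.normal(scale=np.sqrt(1 - h2_true), size=n_fam)

# Offspring breeding value = mid-parent breeding value + Mendelian sampling.
a_off = 0.5 * (a_sire + a_dam) + rng.normal(scale=np.sqrt(h2_true / 2), size=n_fam)
p_off = a_off + rng.normal(scale=np.sqrt(1 - h2_true), size=n_fam)

# Slope of offspring on mid-parent phenotype estimates h^2.
midparent = 0.5 * (p_sire + p_dam)
slope = np.cov(midparent, p_off)[0, 1] / np.var(midparent, ddof=1)
print(f"estimated h^2 = {slope:.2f}")   # close to 0.3
```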
## 7. The Influence of the MHC on Antibody Production
The role of the MHC in controlling IgA concentrations is supported by several human studies, especially on IgA deficiency and common variable immunodeficiency (CVID). One of the first studies to identify an association between IgA deficiency and the MHC region was by Wilton et al. [102], who found an association between MHC class III genes and IgA deficiency. An increased frequency of certain HLA haplotypes was observed in deficient patients [102, 103]. A number of studies have since focused on the HLA-A1-B8-DR3 haplotype to locate the IgA deficiency locus [104, 105]. An investigation of the HLA-DR3-extended haplotype showed that in the Sardinian population, where the prevalence of IgA deficiency is lower, the HLA-DR3-B18 haplotype is more common than the HLA-DR3-B8 haplotype, suggesting that the IgA deficiency susceptibility gene is located in the more common Northern European DR3-B8 haplotypes [106]. The investigation of features common to the different haplotypes was used to delimit the region associated with IgA deficiency, and thus far several different studies have placed the susceptibility locus between the class III region [103, 105, 107, 108] and the class II region [109–111].

Polymorphisms in MSH5 have also been shown to be associated with CVID and IgA deficiency in a mouse model and through statistical analysis of human populations [112]. This gene, located within the MHC class III region, is involved in DNA mismatch repair as well as in resolving Holliday junctions that form between homologous DNA strands during meiosis [113, 114]. However, Guikema et al. [115] observed a large variety of splice variants of MSH5 mRNA (all of which are unlikely to be stable) and suggested that MSH5 is nonfunctional and therefore probably does not participate in Ig class switching. Recently, it has been shown that haplotypes of MSH5 are associated with IgA deficiency [116, 117] but are unlikely to be the causative mutations [117].
## 8. Mechanisms Underlying the MHC Association with Nematode Resistance
Genetic variation in the mouse MHC has long been associated with resistance to nematode infection [118] and with the specificity of antibody responses [119]. It has been reported that the helminth Nippostrongylus brasiliensis may be able to suppress MHC class II molecule expression as an evasive mechanism [120]. Likewise, in sheep, the parasite T. colubriformis appears capable of downregulating several immune genes, particularly DRB1 and DRA, in afferent lymph migratory cells [121]. In a mouse model infected with Strongyloides venezuelensis, class II −/− animals were more susceptible to infection (based on FEC and worm elimination) than wild-type and class I −/− mice [31]. In addition, parasite-specific IgM, IgA, and IgG were significantly reduced in class II −/− mice. The study concluded that class II MHC expression was essential to induce a Th2 response against S. venezuelensis infection, whereas class I expression was not [31]. Interestingly, and somewhat contradicting the findings discussed above [121], mouse strains that lack I-E, a homologue of DRB1, in their MHC class II region are more resistant [122].

In a comparative study using bovine cDNA microarray analysis of duodenal tissue from an outbred population of resistant and susceptible lambs (which had been subjected to two natural challenges with a range of gastrointestinal parasites), increased expression was observed in a range of genes [18]. Upregulated genes included DQB1, DRA, and DQA1 from the MHC class II region [18]. This observation highlights key differences between resistant and susceptible animals in the early immune response to gastrointestinal nematodes. In a separate microarray study, differences were observed in the gene expression profiles of hair and wool sheep that had been infected with H. contortus [19]. Elevated expression of the MHC class II DM β-chain precursor gene was observed in lymph node tissue of the wool breed. However, no significant change in the expression of this or any other MHC-related gene was observed in abomasal tissue [19]. In another study, using transcriptional profiling of duodenal tissue samples from resistant and susceptible sheep [20], upregulation of the MHC class II genes Ovar DQA1, Ovar DQB1, and Ovar DRA was observed in resistant animals. Subsequent RT-PCR analysis of Ovar DQA1 showed an average 8.4-fold greater expression in resistant animals than in susceptible animals. Further analysis using GO terms highlighted the significant association of genes highly expressed in resistant animals with terms such as MHC class II activity and exogenous antigen processing and presentation [20]. Furthermore, the frequency of Ovar DQA1 haplotypes differed between animals from the resistant and susceptible selection lines, with an increase in Ovar DQA1*Null in susceptible animals from both Perendale and Romney sheep lines. In Perendale sheep, the frequencies of the Ovar DQA1*0101 and DQA1*0402 alleles were increased in resistant animals, and Ovar DQA1*0103 was increased in the susceptible line. However, these observations seemingly contradict earlier findings by the same group, in which no increase was observed in the expression of MHC class II genes, nor was any association found with antigen presentation or processing [123]. Interestingly, a significant increase in the expression of an MHC class I gene (an HLA-A orthologue) in resistant animals was also observed, indicating possible crosstalk between the different responses.
In contrast, Forrest and colleagues [21] recently demonstrated no evidence of an across-breed effect of the Ovar-DQA1*Null allele on total faecal egg counts. However, Ovar-DQA1*Null did appear to have a significant effect when the analysis was performed within breeds [21].

In a statistical examination of the relationship between MHC polymorphism and parasitological traits in Scottish Blackface sheep, the resistant allele G2 at the DRB1 locus was significantly associated with decreased egg counts and decreased numbers of adult T. circumcincta [96]. However, no apparent correlation was observed with adult female parasite length. Hence, the mechanism by which the MHC influences egg counts may operate through the control of worm number rather than through control of nematode fecundity. There are several possible mechanisms, but one is that specific class II molecules direct responses to specific peptides, and these responses may play a direct role in protection.

Another possibility is that the observed associations in livestock are a consequence of heterozygote advantage [96]. Heterozygote advantage has complex effects on the power of statistical analyses to detect specific allele effects [27]. As the frequency of an allele increases in a population, an increasing proportion of homozygous sheep will be present, and thus the average effect of the specific allele will decline. An allele that is very rare in a population will be present in too few animals to show a significant effect, while an allele that is very common will have a small average effect, making its contribution to reduced egg counts difficult to detect. Consequently, only alleles within a narrow frequency range will show detectable effects on parasite resistance; this point is illustrated by the simulation sketch following Table 1. Interestingly, the allele most strongly associated with resistance in Scottish Blackface sheep fell within this narrow detection window, and the most common allele was associated with the most susceptible animals, as predicted by heterozygote advantage. There was also more direct evidence: heterozygous sheep had lower egg counts following natural T. circumcincta infection [96].

Heterozygote advantage is a particularly appealing mechanism for explaining the IgE response to parasites. The specificity of IgE responses is relatively unimportant for mast cell degranulation if the target molecule is soluble and large enough to promote cross-linking of IgE receptors. Therefore, a heterozygote advantage that leads to increased IgE concentrations is better supported than a model of determinant selection (i.e., a direct role of the allele in determining levels of IgE).

Charbonnel and Pemberton [124] examined both MHC and neutral loci in free-living Soay sheep infected by T. circumcincta on St Kilda (Scotland). Over eight years, lower levels of temporal genetic differentiation were observed at MHC loci than at neutral loci, consistent with balancing selection acting at the MHC loci [124]. These observations confirmed earlier work by Paterson [125] but have not been supported by subsequent research [126]. Studies showing positive associations between genes within the MHC and gastrointestinal parasites are summarised in Table 1.
Table 1: Summary of studies that have implicated the MHC in resistance to gastrointestinal parasites.
| Species | Parasite species | Method | MHC association | Reference |
| --- | --- | --- | --- | --- |
| Sheep (Ovis aries) | mixed | Microarray | DQB1, DRA, DQA1 | [18] |
| Sheep (Ovis aries) | H. contortus | Microarray | DMB | [19] |
| Sheep (Ovis aries) | mixed | Microarray | DQA1*Null, DQB1, DRA | [20] |
| Sheep (Ovis aries) | mixed | PCR analysis | DQA1*Null | [21] |
| Sheep (Ovis aries) | mixed | PCR/sequencing | DQA1*0101, DQA1*0402 | [20] |
| Sheep (Ovis aries) | mixed | PCR/sequencing | DRB1 | [22, 23] |
| Sheep (Ovis aries) | mixed | PCR | DRB microsatellite | [23] |
| Sheep (Ovis aries) | Teladorsagia circumcincta | PCR/sequencing | DRB1 | [24–27] |
| Sheep (Ovis aries) | H. contortus | PCR/sequencing | DRB1, OMHC1 | [28] |
| Sheep (Ovis aries) | Teladorsagia circumcincta | Linkage | Class IIb region | [29] |
| Sheep (Ovis canadensis) | n/a | Population analysis, PCR/sequencing | DRB1 | [30] |
| Mouse (Mus musculus) | S. venezuelensis | Knockout | Class II | [31] |
| Mouse (Mus musculus) | H. polygyrus | Linkage | Class II region | [32, 33] |
| Striped mouse (Rhabdomys pumilio) | mixed | PCR/sequencing | DRB | [34] |
| Yellow-necked mouse (Apodemus flavicollis) | mixed | PCR/sequencing | DRB | [35] |
| Gray mouse lemur (Microcebus murinus) | mixed | PCR/sequencing | DRB | [36] |
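As anticipated above, the narrow-frequency-window argument for heterozygote advantage can be illustrated with a small simulation: when only heterozygotes benefit, the apparent effect of carrying an allele is diluted by homozygous carriers at high frequencies, while rare alleles leave too few carriers for a significant test. All effect sizes and sample sizes below are assumptions chosen for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def power_at_freq(p, n_sheep=500, het_benefit=-0.5, n_sims=400, alpha=0.05):
    """Power to detect a carrier effect on standardized (log) egg count
    when only heterozygotes benefit (pure heterozygote advantage)."""
    hits = 0
    for _ in range(n_sims):
        # 0/1/2 copies of the allele, Hardy-Weinberg proportions.
        geno = rng.binomial(2, p, size=n_sheep)
        fec = rng.normal(size=n_sheep) + np.where(geno == 1, het_benefit, 0.0)
        carrier = geno > 0
        if carrier.sum() < 2 or (~carrier).sum() < 2:
            continue  # too few animals in one group to test
        t, pval = stats.ttest_ind(fec[carrier], fec[~carrier])
        hits += pval < alpha
    return hits / n_sims

for p in (0.02, 0.1, 0.3, 0.6, 0.9):
    print(f"allele frequency {p:.2f}: power ~ {power_at_freq(p):.2f}")
# Power peaks at intermediate frequencies: rare alleles give too few
# carriers, and common alleles dilute the effect with homozygous carriers.
```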
## 9. Conclusions
There is no single mechanism of nematode resistance in sheep. Resistance to gastrointestinal nematodes involves the control of worm growth as well as worm numbers. The negative correlation between parasite-specific IgA levels and worm length has been well established by many research groups in different breeds of sheep infected by different gastrointestinal parasites. The control of worm numbers involves mast cells in some but not all host-nematode systems. There is a genetic component to nematode resistance, and the MHC is one of the most important components of genetic resistance. QTL analyses have shown a link between the MHC region and FEC in mouse models, as well as in sheep and cattle. The influence of the class II region on parasite resistance has been shown in experimental models as well as by microarray analysis.

Despite the large number of studies confirming these relationships, other studies have reported contradictory results. However, correlation studies may generate a complex heterogeneity of results because of the large variety of gastrointestinal nematode parasites and differences in environmental conditions, nutritional status of animals, and geographical locations. Another complication is that the relationships between gene expression from the MHC region, IgA activity, and their effects on parasites are often considered individually rather than as interconnected, multilevel interactions.
---
*Source: 101848-2011-04-12.xml* | 2011 |
# A Case of Stage I Vulvar Squamous Cell Carcinoma with Early Relapse and Rapid Disease Progression
**Authors:** Marta Peri; Antonino Grassadonia; Laura Iezzi; Patrizia Vici; Michele De Tursi; Clara Natoli; Nicola Tinari; Marinella Zilli
**Journal:** Case Reports in Oncological Medicine
(2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1018492
---
## Abstract
Squamous cell carcinoma (SCC) is the most common subtype of vulvar cancer. Locoregional surgery is often curative when the tumor is diagnosed at an early stage. However, the disease can unexpectedly evolve with a dismal prognosis even after an early diagnosis. We report a case of a woman who experienced a rapid, chemorefractory tumor progression after surgery for stage IB vulvar SCC.
---
## Body
## 1. Introduction
Vulvar cancer is the fourth most common gynecologic cancer after endometrial, ovarian, and cervical cancer, accounting for about 5% of all female genital tract malignancies [1].

The most common histological type of vulvar cancer is squamous cell carcinoma (SCC), which accounts for about 90% of cases. Two different types of SCC have been described: a keratinizing form and a warty/basaloid form [2]. The former occurs predominantly in postmenopausal women with a background of lichen sclerosus or lichen planus evolving into differentiated vulvar intraepithelial neoplasia (d-VIN) and is associated with a poorer prognosis [3, 4]. The latter is more common in younger patients and is related to high-risk human papillomavirus (HPV) infection evolving into high-grade squamous intraepithelial lesion (HSIL or VIN3) [3, 5].

Radical local excision with inguinofemoral lymph node dissection currently represents the standard of treatment for women with vulvar cancer [6]. Since surgery is associated with high morbidity, noninvasive methods are commonly used to evaluate the extent of disease in the preoperative setting. In particular, magnetic resonance imaging (MRI) is helpful for the assessment of local tumor extension and inguinal lymph node involvement [7].

The dissemination pattern of vulvar carcinoma is mostly lymphogenic, and the inguinofemoral lymph nodes are the primary site of regional spread [8]. Distant spread usually occurs late in the course of the disease. In the absence of distant metastases, the most important prognostic factor is the pathologic status of the inguinal nodes, while the size of the primary tumor is less important in defining prognosis [9].

Herein, we describe a case of vulvar carcinoma diagnosed at FIGO stage IB that displayed very aggressive behavior. The patient experienced early local recurrence and rapid metastatic disease progression, causing death just a few months after relapse.
## 2. Case Presentation
A 70-year-old woman presented at the gynecology unit of our hospital in May 2017, complaining of a painful vulvar lesion. She had no significant medical history. Physical examination revealed an exophytic, ulcerated vulvar mass, approximately 4 cm in diameter, localized on the right labium majus less than 2 cm from the midline, without palpable inguinal lymph nodes bilaterally. An incisional biopsy was performed, and histology revealed an invasive, poorly differentiated vulvar SCC. A total-body CT scan performed to stage the disease was negative for distant metastases.

The patient underwent right hemivulvectomy in June 2017 in order to obtain wide tumor-free pathological margins. Concomitant inguinal lymph node dissection was not performed because the patient refused it (risk of lymphedema). Histopathologic findings confirmed a poorly differentiated vulvar SCC arising on a background of lichen sclerosus. The invasive SCC lesion measured 4.5 cm, with a depth of invasion of 2.7 mm and no lymphovascular invasion. All surgical margins were tumor-free (more than 1 cm).

She was referred to our oncology unit in July 2017. We requested disease restaging by abdominal and pelvic MRI scan and chest CT scan. The imaging studies showed no evidence of distant metastases. Therefore, we suggested locoregional lymph node dissection in order to define the pathologic stage of the tumor and to plan postoperative adjuvant radiotherapy to the groin in case of lymph node involvement.

In August 2017, a bilateral inguinofemoral lymph node dissection was performed; all twelve nodes were negative for metastatic spread on conventional hematoxylin-eosin staining. The tumor was staged as FIGO stage IB, and the patient was placed under strict follow-up.

However, just one month later (September 2017), the patient developed a local recurrence with a 3 cm nodule in the right vulvar area and a 0.8 cm lesion in the clitoris. A wide local excision was performed, and histopathologic examination revealed a poorly differentiated vulvar SCC in both lesions. A restaging CT scan of the chest, abdomen, and pelvis showed multiple bilateral pulmonary metastases and multiple inguinal and pelvic lymph node involvement.

Because of the recurrence, systemic chemotherapy was started with carboplatin (AUC5, day 1 every 3 weeks) and paclitaxel (80 mg/m2, days 1 and 8 every 3 weeks). After 3 cycles, a total-body CT scan showed progression of metastatic disease in the lungs, lymph nodes, and liver. Moreover, painful erythematous nodules appeared on the skin of the right groin (Figure 1(a)) and right thigh (Figure 1(b)).

Figure 1: Cutaneous tumor progression during the first-line chemotherapy. Erythematous nodules on the skin of the right groin (a) and thigh (b).
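The "AUC5" in the carboplatin regimen refers to dosing by target area under the concentration-time curve rather than by body surface area; the dose in milligrams is conventionally obtained from the Calvert formula, dose = target AUC × (GFR + 25). A small illustrative calculation (the GFR value below is hypothetical, not taken from the case):

```python
def carboplatin_dose_mg(target_auc, gfr_ml_min):
    """Calvert formula: dose (mg) = target AUC (mg/mL*min) x (GFR + 25)."""
    return target_auc * (gfr_ml_min + 25)

# Hypothetical example: target AUC 5 with a measured GFR of 70 mL/min.
print(carboplatin_dose_mg(5, 70))  # 475 mg
```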
Because of disease progression, second-line chemotherapy with capecitabine (1000 mg/m2 bid, days 1–14 every 21 days) was started in December 2017. After 3 cycles of treatment, the patient presented with ulceration and fistulization of the groin lesion (Figure 2) and new skin nodules on the right thigh associated with lower-extremity lymphedema. She complained of perineal pain, and analgesic therapy was prescribed. Moreover, palliative radiotherapy to the inguinal metastases (30 Gy in 10 fractions) was performed.

Figure 2: Further cutaneous progression during the second-line chemotherapy. Ulceration and fistulization of the right groin nodule.

A reevaluation CT scan (February 2018) revealed further progression of the disease, with multiple liver metastases, multiple excavated lesions in the lungs (Figure 3(a)), and matted metastatic iliac/inguinal lymph nodes (Figure 3(b)).

Figure 3: Further systemic tumor progression during the second-line chemotherapy. CT scan showing multiple lesions in the lungs (a), some excavated (arrow), and matted metastatic lymph nodes in the iliac/inguinal region (b) (arrows).

The patient died one month later, in March 2018, of respiratory failure.
## 3. Discussion
We have described a case of vulvar SCC that, despite an early-stage presentation at diagnosis, rapidly evolved into metastatic, chemoresistant disease, leading to the patient's death within a few months.

Vulvar SCC is a rare disease occurring mainly in postmenopausal women. The prognosis for patients with early-stage disease is generally good, but the cancer can spread from its original site to locoregional nodes and/or distant organs by lymphatic embolization or hematogenous diffusion, respectively. Lymphatic spread represents an early event in the course of the disease and involves ipsilateral inguinal, femoral, and pelvic lymph nodes, usually in a sequential manner. Lymph node status remains the single most important prognostic factor. The 5-year overall survival is more than 90% for patients without nodal involvement, falling to less than 60% in the case of nodal involvement [9].

Our patient presented with a negative pathological nodal status at diagnosis, but unfortunately, her disease-free survival was very short, with local relapse occurring just one month after surgery. It is evident that prognostic factors other than stage may influence the natural history of the disease. For example, tumor characteristics such as high grade or lichen sclerosus-related etiopathogenesis, as in the case described herein, could play a pivotal role in determining a poor prognosis [4, 10].

Hematogenous spread to distant sites, including the lung, liver, and bone, usually occurs late in the natural history of the disease. Cutaneous metastases from vulvar carcinoma have rarely been described, but when documented, they are associated with short survival [11–15]. The median time to death from the diagnosis of skin metastasis has been estimated to be around 6 months [4, 15].

In our case, distant metastases developed shortly after diagnosis, in an early infiltrating phase of the disease (depth of invasion only 2.7 mm). Skin metastases also manifested early in the metastatic phase, during the first-line systemic treatment. Consistent with the literature, the appearance of cutaneous metastases was a predictor of short survival for our patient: she died after only 4 months.

The rapid tumor progression was accompanied by a chemoresistant phenotype with a dismal prognosis. In fact, the disease progressed at all secondary sites during the first-line platinum-based chemotherapy and progressed further during the subsequent second-line treatment with capecitabine.

To date, the therapeutic management of metastatic vulvar SCC is heterogeneous, and data are insufficient to recommend a preferred chemotherapeutic regimen in the palliative setting. Usually, regimens with proven efficacy in advanced cervical or anal cancers are chosen for the treatment of metastatic vulvar SCC [6].

Advances in the molecular biology of vulvar SCC may provide insight into the future management of this tumor [16]. At the moment, despite the two distinct etiologies, there is no specific recommended chemotherapeutic regimen for HPV-associated or HPV-independent tumors.
## 4. Conclusion
Vulvar SCC can unexpectedly evolve with a dismal prognosis even if diagnosed at an early stage. The disease can be rapidly progressive and refractory to chemotherapy. Novel therapeutic approaches are required.
---
*Source: 1018492-2019-07-03.xml* | 2019 |
# Evaluation of a Bladder Cancer Cluster in a Population of Criminal Investigators with the Bureau of Alcohol, Tobacco, Firearms and Explosives—Part 1: The Cancer Incidence
**Authors:** Susan R. Davis; Xuguang Tao; Edward J. Bernacki; Amy S. Alfriend
**Journal:** Journal of Environmental and Public Health
(2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/101850
---
## Abstract
This study investigated a bladder cancer cluster in a cohort of employees, predominantly criminal investigators, participating in a medical surveillance program with the United States Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) between 1995 and 2007. Standardized incidence ratios (SIRs) were used to compare cancer incidence in the ATF population and the US reference population. Seven cases of bladder cancer (five verified by pathology report at the time of analysis) were identified among a total employee population of 3,768 individuals. All cases were white males and criminal investigators. Six of the seven cases were in the 30 to 49 age range at the time of diagnosis. The SIRs for white male criminal investigators undergoing examinations were 7.63 (95% confidence interval = 3.70–15.75) for reported cases and 5.45 (2.33–12.76) for verified cases. White male criminal investigators in the ATF population are at a statistically significant increased risk for bladder cancer.
---
## Body
## 1. Introduction
The Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), a law enforcement agency within the United States Department of Justice, has been investigating the cause and origin of fires and explosions since the 1970s. ATF typically collaborates with other federal, state, and local authorities to provide this service through several venues. At the local level, criminal investigators with ATF, especially those holding special designations as certified fire investigators (CFIs), work side-by-side with their state and local counterparts to assist with the investigation of post-fire and post-blast scenes. These individuals may also serve on city-based federal arson task forces, in partnership with other federal, state, and local investigators, to address acute arson problems. At the national level, ATF maintains a well-trained national response team (NRT) which provides a comprehensive response to assist these other jurisdictions with onsite investigation of major arson and bombing incidents. The NRT is a multispecialty team comprised of criminal investigators with post-fire and post-blast investigation expertise, explosives enforcement specialists, forensic chemists, fire protection engineers, and other technical experts. ATF established the NRT program in 1978 and subsequently instituted a program for certification of criminal investigators as fire investigators (CFIs) in 1986.

In 1995, ATF commenced a voluntary medical surveillance program for members of the NRT to monitor the health of participants working in the potentially hazardous environment of the post-fire/post-blast scene. The agency extended the voluntary program to all CFIs in 1997. By 2000, three white male participants in the program had reported diagnoses of bladder cancer between the years 1994 and 1999. All three individuals were nonsmokers and younger than 50 years of age when diagnosed. This information raised concern that a bladder cancer cluster was occurring among scene investigators. In response, in 2002 ATF mandated participation in the medical surveillance program for all NRT members, all explosives enforcement specialists, and all criminal investigators, including those not involved in post-fire/post-blast investigations. To facilitate future epidemiologic evaluation of the significance of the apparent cancer cluster, ATF also committed to entering medical surveillance data into an electronic database. By 2006, four additional white male program participants had reported diagnoses of bladder cancer between 2001 and 2005. Combined, the seven reported cases ranged in age from 32 to 53 in the year of diagnosis. From a job title perspective, all cases were among criminal investigators, who comprised 96% of the employees participating in the surveillance program. In addition, six of the seven cases reported working post-fire and post-blast scenes while employed with ATF. None of the seven cases reported work histories involving investigation of such scenes prior to their employment with ATF.

The presence of this cancer cluster generated two questions. First, was this group of employees, predominantly criminal investigators, experiencing a greater than expected incidence of bladder cancer from a demographic perspective? Second, was post-fire/post-blast scene investigation associated with increased risk for bladder cancer? Part 1 of this study addresses the question of cancer incidence, and Part 2 addresses the question of cancer risk associated with investigation of these scenes.
Analyses for both parts use data collected through the medical surveillance program.

In the United States, bladder cancer is expected to be the sixth most commonly occurring cancer, excluding basal and squamous cell skin cancers, in 2012 [1]. As reported by the American Cancer Society, the estimated number of new bladder cancer cases in 2012 is 73,510, which equates to approximately 4.5% of all new cases of cancer [1]. The demographic characteristics associated with the greatest risk for bladder cancer are male gender, white race, and increasing age [2]. According to the surveillance, epidemiology, and end results (SEER) age-adjusted incidence data for the period 2005–2009, bladder cancer occurs four times more often in men than in women [3]. In this data set, it ranks as the fourth most common cancer among men, but only the twelfth most common cancer among women [3]. The incidence among whites is 1.78 times greater than the incidence among blacks, while the incidence among Hispanics is 0.90 times the incidence among blacks [2, 3]. Bladder cancer incidence is lowest among Asian/Pacific Islanders and American Indian/Alaska Natives [3]. The risk of onset increases with age over the age of 40 and is greatest during the ninth decade of life [3]. About 90% of cases occur in individuals over the age of 55, with the average age at diagnosis being 73 [2]. In the ATF cancer cluster, all the cases of bladder cancer were of the gender and race with the greatest incidence of bladder cancer, but the ages of the cases at the time of diagnosis were relatively young for bladder cancer, as all cases occurred in individuals less than 55 years of age.

Recognized nondemographic risk factors include cigarette smoking, bladder birth defects, chronic bladder inflammation, genetic predisposition, use of herbal remedies containing aristolochic acid, drinking water containing arsenic and chlorination by-products, prior history of bladder cancer, chemotherapy and radiation therapy, and specific industries and occupations with exposures to known or suspected bladder carcinogens [2, 4–13]. Many studies have explored the association of fluid intake and type of fluid intake with risk for bladder cancer, but findings for both total fluids and specific fluids have been inconsistent, likely due to the influence of confounding variables such as the presence of bladder carcinogens in tap water [14–24]. Two studies have demonstrated that increased frequency of urination is associated with reduced risk for bladder cancer [18, 21].

Smoking, the number one risk factor for bladder cancer, is estimated to cause approximately 50% of bladder cancer cases in men and 30% of cases in women [6]. Second to smoking, occupational exposures to carcinogens may account for as few as 10% of cases in men and 5% of cases in women, or for as many as 20–25% of all bladder cancers [6, 11]. Established at-risk industries include the manufacture of products such as synthetic dyes and paints, cables, textiles, leather goods, and aluminum, as well as the petrochemical, coal tar, and rubber industries [5–7, 12, 25–30]. A number of specific occupations have also been identified as being associated with increased risk of bladder cancer. These include, but are not limited to, cooks and kitchen workers, electricians, hairdressers, leather workers, machinists, petroleum workers, rubber workers, coal miners, truckers, and vehicle mechanics, as summarized by Schulte et al.
[12] in 1987, as well as coke oven workers, roofers, dry cleaners, chimney sweeps, and painters, as addressed by others in more recent literature [5–7, 25, 31–34]. Exposure to both cigarette smoke and occupationally related bladder carcinogens may work synergistically to further increase the risk for bladder cancer [2, 7].

As an occupation, the broad job category of law enforcement is generally not recognized as being at increased risk for bladder cancer. In 1987, Schulte et al. [12] did include protection guards on their summary list of occupations associated with risk for bladder cancer, but acknowledged that the potential etiologic agent was unknown. As the rate of occurrence of bladder cancer is almost five times greater than the associated death rate in the US, with 80% of cases surviving for five or more years after diagnosis [1], cancer incidence is a more sensitive measure of occupational risk for bladder cancer than related mortality, and it is important to differentiate between epidemiologic studies examining cancer incidence and those examining cancer mortality. Many epidemiologic studies on the association of bladder cancer incidence and occupation have not found police officers, guards, and related categories, such as protective services and government worker inspection/investigation occupations, to be associated with increased risk for bladder cancer [11, 35–46]. One exception was a study by Howe et al. [47], which showed guards and watchmen to have a statistically significant age-adjusted relative risk for bladder cancer of 4.0. In Reulen et al.'s [32] 2009 meta-analysis of the association between bladder cancer incidence and occupation, summary relative risks for bladder cancer were obtained for protective service occupations from the findings of 23 studies and for police officers and guards from the findings of 14 studies. These summary relative risks were only marginally elevated and not statistically significant, at 1.07 (95% confidence interval (CI) 0.96–1.19) for protective services and 1.10 (95% CI 0.95–1.29) for police officers and guards. Three mortality studies of police officer cohorts also did not find statistically significant increases in mortality from bladder cancer overall [48–50]. One of these studies did demonstrate that policemen who were professional drivers had significantly increased bladder cancer-related mortality [48], and another showed a higher than expected mortality rate for police officers with 10–19 years of service [50]. As some, but not all, epidemiologic studies have demonstrated an increased risk for bladder cancer in jobs with exposure to diesel or traffic fumes [6, 40, 43, 51, 52], and one meta-analysis has shown increased cancer risk for several types of vehicle drivers [32], the use of broad law enforcement job categories in studies of cancer risk and occupation assumes a degree of uniformity in exposures to potential bladder carcinogens and in risk for bladder cancer, which may not always be the case.

Part 1 of this study investigates the statistical significance of the incidence of bladder cancer in the ATF employee population comprised of criminal investigators and members of the NRT.
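The standardized incidence ratios reported in the abstract are ratios of observed to expected case counts, with confidence limits conventionally derived from the Poisson distribution of the observed count. A sketch of one common exact method (the expected count below is back-calculated from the published SIR for illustration and is an assumption, not a figure from the paper; this method will not necessarily reproduce the published interval, which may rest on a different approximation):

```python
from scipy.stats import chi2

def sir_with_ci(observed, expected, alpha=0.05):
    """Standardized incidence ratio with an exact Poisson CI
    (chi-square formulation of the Poisson limits)."""
    sir = observed / expected
    lo = chi2.ppf(alpha / 2, 2 * observed) / 2 / expected if observed > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2 / expected
    return sir, lo, hi

# Seven reported cases; expected count chosen so that SIR ~ 7.63,
# the value reported in the abstract for white male criminal investigators.
sir, lo, hi = sir_with_ci(observed=7, expected=7 / 7.63)
print(f"SIR = {sir:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```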
## 2. Methods
### 2.1. Study Time Frame and Study Subjects
The time interval of this study is 1993, the year preceding the diagnosis of the first bladder cancer case, through 2007. For this period, a full roster cohort was constructed from the annual staffing rosters provided by ATF for each calendar year from 1993 through 2007. The annual staffing rosters included all criminal investigators, explosives enforcement specialists, forensic chemists, fire protection engineers, and a small number of other specialists typically affiliated with NRT work, regardless of individual membership on the NRT or eligibility for participation in the medical surveillance program, who were currently working for ATF in each respective calendar year. ATF provided the demographic study parameters of gender, race, and age, as well as the job series and titles, for all members of the full roster cohort. Members of the full roster cohort are dispersed throughout the United States and its territories Puerto Rico and Guam and may move around within this geographic area during their employment with ATF.

With the advent of the medical surveillance program in 1995, a subset of the full roster cohort, comprised of employees participating in the program, was created. As explained in the introduction, the program was initially voluntary and offered to members of the NRT, but became mandatory in 2002 for all criminal investigators, explosives enforcement specialists, and members of the NRT. Set up in partnership with Federal Occupational Health (FOH), United States Department of Health and Human Services, the program consisted of an annual evaluation which included a medical history and tobacco use questionnaire, a physical examination, and laboratory and ancillary tests. Collection of detailed work history information, including job series and titles, began in 2003 with the institution of a work history questionnaire. The use of FOH’s electronic database, the Occupational Health Information Management System (OHIMS), to facilitate future epidemiologic evaluation of the bladder cancer cluster began in 2002. Data for key study variables from pre-2002 exams were retrospectively retrieved and also entered into OHIMS.

For the cohort of employees participating in the medical surveillance program from 1995 to 2007, pertinent data collected with each annual evaluation included the demographic variables gender, race, and age, as well as cancer history and tobacco use history. Data on job series and titles were also provided by members of the cohort participating in the program between 2003 and 2007. The demographic data and the job series and titles data provided by ATF for the full roster cohort were subsequently cross-referenced with the same data provided by employees participating in the medical surveillance program. Any data inconsistencies between the two sources were resolved by medical record review or phone contact with employees.
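The cross-referencing step lends itself to a simple automated consistency check. A minimal sketch (Python, using pandas) is given below; the identifiers, column names, and rows are invented for illustration and are not the study's actual data structures.

```python
import pandas as pd

# Hypothetical extracts from the two sources; values are illustrative only.
roster = pd.DataFrame({
    "employee_id": ["A001", "A002"],
    "race": ["White", "White"],
    "job_series": ["1811", "1811"],
})
surveillance = pd.DataFrame({
    "employee_id": ["A001", "A002"],
    "race": ["White", "Black"],   # deliberate mismatch, to be flagged
    "job_series": ["1811", "1811"],
})

# Join the two sources on the employee identifier and flag disagreements.
merged = roster.merge(surveillance, on="employee_id", suffixes=("_atf", "_exam"))
mismatches = merged[(merged["race_atf"] != merged["race_exam"])
                    | (merged["job_series_atf"] != merged["job_series_exam"])]

# In the study, flagged records were resolved manually (medical record
# review or phone contact with the employee).
print(mismatches["employee_id"].tolist())
```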
### 2.2. Identification and Verification of Bladder Cancer Cases
As stated in the introduction, all bladder cancer cases were initially identified by employees self-reporting the diagnosis and year of diagnosis at the time of the annual medical surveillance evaluation. The self-reported cases were subsequently contacted by the occupational medicine physician overseeing the medical surveillance program, who informed them of the bladder cancer cluster and of the plans to evaluate the significance of the cluster through epidemiologic analysis, and who requested their assistance with the voluntary provision of a pathology report for verification of the case diagnosis. Requests for pathology reports were accompanied by medical release forms for cases to complete.
### 2.3. Study Design
This is a cohort study in which standardized incidence ratios (SIRs) were calculated for four defined populations of ATF employees to compare the observed bladder cancer case numbers in each ATF population with the expected case numbers based on the incidence rate in the US general population, appropriately adjusted for age and stratified by sex and race, as relevant to each of the four ATF populations. The US general population was chosen as the best reference population for analysis due to the nationwide dispersion of the ATF employees under study. As all the bladder cancer cases occurred among white male criminal investigators, the populations chosen for analysis included (1) the full roster cohort comprising all males and females, (2) all white males in the full roster cohort, (3) all white males in the full roster cohort with examinations (cancer and tobacco use histories), and (4) all white males in the full roster cohort with both examinations and work histories who were also criminal investigators. Determination of the expected cancer incidence rate was based on data from the Surveillance, Epidemiology, and End Results (SEER) program for the US population for the period 1993–2007. The SIR estimates the relative risk of bladder cancer incidence in the ATF population compared to the US population, adjusted for age and stratified by race and gender.

Computations for each population included determination of the person-year distribution during the study period, which served as the denominator for the respective SIR analysis. One person-year was counted for each year an individual was a member of the cohort. The person-years were arranged by five-year age increments and three five-year time intervals: 1993–1997, 1998–2002, and 2003–2007.
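To make the person-year accrual concrete, a minimal Python sketch under stated assumptions follows: each calendar year of cohort membership contributes one person-year, binned by five-year age group and five-year calendar period. The roster records shown are invented for illustration; the published tables report fractional person-years, so the actual computation presumably used exact employment dates rather than whole calendar years.

```python
from collections import defaultdict

# Hypothetical roster records: (employee_id, birth_year, calendar years employed).
# These values are invented for illustration; they are not actual ATF data.
roster = [
    ("A001", 1960, range(1993, 2008)),  # employed throughout the study window
    ("A002", 1971, range(1999, 2005)),
]

PERIODS = [(1993, 1997), (1998, 2002), (2003, 2007)]

def age_group(age):
    """Label for the five-year age increment containing `age`, e.g. '30-34'."""
    lo = (age // 5) * 5
    return f"{lo}-{lo + 4}"

def period_label(year):
    """Label for the five-year calendar interval containing `year`, if any."""
    for lo, hi in PERIODS:
        if lo <= year <= hi:
            return f"{lo}-{hi}"
    return None

# One person-year is counted for each calendar year of cohort membership,
# tallied by five-year age group and five-year calendar period.
person_years = defaultdict(float)
for _employee_id, birth_year, years in roster:
    for year in years:
        period = period_label(year)
        if period is not None:
            person_years[(age_group(year - birth_year), period)] += 1.0

for key in sorted(person_years):
    print(key, person_years[key])
```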
## 3. Results
### 3.1. Study Subjects
Table 1 shows the distribution of individuals by gender and race for the full roster cohort, for the subset of employees with surveillance examinations, for the subset of employees with surveillance examinations and work histories, and for the subset of criminal investigators with surveillance exams and work histories. The percent distribution of individuals by gender and race is comparable across these four populations.

Table 1. Distribution of self-reported bladder cancers and employees in key ATF study populations by gender and race (1993–2007).

| Study population | White male cases | White male employees (%) | Nonwhite male cases | Nonwhite male employees (%) | White female cases | White female employees (%) | Nonwhite female cases | Nonwhite female employees (%) | Total cases | Total employees (%) | Percent of full roster |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Full roster | 7 | 2723 (72.3%) | 0 | 570 (15.1%) | 0 | 358 (9.5%) | 0 | 117 (3.1%) | 7 | 3768 (100%) | 100.00 |
| Examinations | 7 | 1885 (69.5%) | 0 | 467 (17.2%) | 0 | 271 (10.0%) | 0 | 89 (3.3%) | 7 | 2712 (100%) | 71.97 |
| Examinations and work histories | 7 | 1771 (69.5%) | 0 | 441 (17.3%) | 0 | 253 (9.9%) | 0 | 84 (3.3%) | 7 | 2549 (100%) | 67.65 |
| Criminal investigators with examinations and work histories | 7 | 1715 (69.2%) | 0 | 436 (17.6%) | 0 | 244 (9.8%) | 0 | 83 (3.4%) | 7 | 2478 (100%) | 65.76 |

The full roster cohort comprised 3,768 individuals (Table 1). Criminal investigators, with job series 1811, accounted for 96% of all employees in the full roster cohort and 96% of all white males in the full roster cohort. Within the full roster cohort, only 18.2% of members were nonwhite and only 12.6% were female.

Of the full roster cohort, 2,712 (72%) members participated in the medical surveillance program between 1995 and 2007 and had data for one or more examinations in the program’s electronic database (Table 1). The percentage of full roster cohort members with examinations was no greater than 72% due in part to the medical surveillance program being voluntary and open only to members of the NRT and to CFIs prior to 2002. Since 2002, when the program became mandatory for all criminal investigators, explosives enforcement specialists, and members of the NRT, the percentage of currently employed cohort members who obtained annual examinations ranged from a low of 50% in 2003 to a high of 67% in 2006. Despite this variance in the annual rate of participation in the mandatory program, 2,697 of 3,136 (86%) full roster cohort members who were employed for one or more years between 2002 and 2007 obtained at least one medical exam.

Job history data, collected from employees between 2003 and 2007 at the time of the annual exam, were available for 2,549 (68%) members of the full roster cohort. Of these individuals, 2,478 (97%) were criminal investigators (Table 1).
### 3.2. Characteristics of Bladder Cancer Cases
During the study period, seven individuals reported bladder cancer diagnoses. At the time of the analysis, five of the seven cases had provided medical documentation (pathology reports) verifying the diagnosis of bladder cancer and the year of diagnosis. Another case provided verifying documentation following the completion of the analysis.

As affirmed by review of the pathology reports of the urinary bladder biopsies of the five cases verified at the time of the analysis, four of the cases were low grade papillary transitional cell carcinomas and one was transitional cell carcinoma in situ. The sixth case, verified after the analysis, was also a low grade papillary transitional cell carcinoma.

The first case was diagnosed in 1994 and the most recent case was diagnosed in 2005. As already known from the medical surveillance program, all bladder cancers occurred in white males and in criminal investigators. Table 1 includes the distribution of reported bladder cancer cases by gender and race for each defined employee population. White males comprised 72% (2723/3768) of the full roster cohort and 69–70% of each of the three subset populations, including criminal investigators with medical surveillance examinations and work histories (69% (1715/2478)). The cases ranged from 32 to 53 years of age in the year of diagnosis, with three individuals being in their 30s, three in their 40s, and one in his early 50s. Table 2 shows the distribution of the self-reported bladder cancer cases at the time of diagnosis in the same five-year age increments and three five-year time intervals as used to establish the person-year distributions for each SIR analysis. Two cases were diagnosed in 1993–1997, four in 1998–2002, and one in 2003–2007.

Table 2. Distribution of self-reported bladder cancer cases and of person-years observed for white males in the full roster cohort by calendar period and age group.

| Age group | 1993–1997 cases | 1993–1997 person-years | 1998–2002 cases | 1998–2002 person-years | 2003–2007 cases | 2003–2007 person-years | Total cases | Total person-years |
|---|---|---|---|---|---|---|---|---|
| 0–4 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
| 5–9 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
| 10–14 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
| 15–19 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
| 20–24 | 0 | 25.1 | 0 | 81.6 | 0 | 71.3 | 0 | 178.0 |
| 25–29 | 0 | 642.9 | 0 | 611.6 | 0 | 749.7 | 0 | 2004.1 |
| 30–34 | 2 | 1730.8 | 0 | 1593.4 | 0 | 1670.6 | 2 | 4994.8 |
| 35–39 | 0 | 1321.5 | 0 | 2136.4 | 1 | 2125.1 | 1 | 5583.0 |
| 40–44 | 0 | 773.1 | 2 | 1329.7 | 0 | 2111.9 | 2 | 4214.6 |
| 45–49 | 0 | 1574.9 | 1 | 763.0 | 0 | 1291.9 | 1 | 3629.8 |
| 50–54 | 0 | 1046.2 | 1 | 1083.5 | 0 | 571.5 | 1 | 2701.1 |
| 55–59 | 0 | 261.6 | 0 | 383.1 | 0 | 264.5 | 0 | 909.2 |
| 60–64 | 0 | 12.4 | 0 | 113.1 | 0 | 95.9 | 0 | 221.4 |
| 65–69 | 0 | 3.9 | 0 | 5.7 | 0 | 43.3 | 0 | 52.9 |
| 70–74 | 0 | 1.8 | 0 | 2.7 | 0 | 3.0 | 0 | 7.5 |
| 75–79 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
| 80–84 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
| 85–89 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
| Total | 2 | 7394.0 | 4 | 8103.8 | 1 | 8998.7 | 7 | 24496.5 |

*The one remaining unverified cancer case occurred in the 1998–2002 time frame.
### 3.3. Distribution of Person-Years within the Study Populations Undergoing Incidence Analysis
Table 3 summarizes the total person-years calculated for each of the four study populations undergoing incidence analysis. The number of total person-years ranged from a high of 34,818 for the full roster cohort to a low of 17,976 for the population of white male criminal investigators with exams and work histories. The pattern of distribution of person-years was similar for all four study populations and is illustrated in Table 2 for the population of white males in the full roster cohort. The vast majority of employees in each study population fell between 25 and 54 years of age in each of the five-year time intervals. This distribution pattern reflects ATF’s practice of hiring criminal investigators with prior work experience and the mandated retirement age of 57 years for federal criminal investigators.

Table 3. Standardized incidence ratios (SIRs) of self-reported and verified urinary bladder cancer cases for the period 1993–2007.

| Study population | Employees (n) | Person-years | Expected cases* | Observed cases | SIR | 95% CI |
|---|---|---|---|---|---|---|
| Entire roster cohort | 3,768 | 34,818.01 | 2.91 | 7 | 2.41 | 1.17–4.96 |
| Entire roster cohort** | 3,768 | 34,818.01 | 2.91 | 5 | 1.72 | 0.73–4.02 |
| Roster white males | 2,723 | 24,496.47 | 2.39 | 7 | 2.93 | 1.42–6.07 |
| Roster white males** | 2,723 | 24,496.47 | 2.39 | 5 | 2.09 | 0.90–1.91 |
| Exam white males | 1,885 | 19,648.25 | 1.15 | 7 | 6.08 | 2.94–12.54 |
| Exam white males** | 1,885 | 19,648.25 | 1.15 | 5 | 4.34 | 1.85–10.16 |
| Job 1811 white males | 1,715 | 17,976.42 | 0.92 | 7 | 7.63 | 3.70–15.75 |
| Job 1811 white males** | 1,715 | 17,976.42 | 0.92 | 5 | 5.45 | 2.33–12.76 |

*Expected number of cases calculated using US incidence rates from SEER for the same period.
**SIRs determined for the five verified cases with pathology reports.
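The paper does not state which interval method underlies the CIs in Table 3. As an illustrative sketch only, the Python snippet below computes each SIR as observed/expected and attaches an exact Poisson (Garwood) 95% interval via chi-square quantiles; because both the interval method and the unrounded expected counts are assumptions here, the resulting limits approximate but do not exactly reproduce the published CIs.

```python
from scipy.stats import chi2

# (expected cases, observed cases) per Table 3; the expected counts are the
# rounded values printed there, which limits agreement with the published CIs.
populations = {
    "Entire roster cohort": (2.91, 7),
    "Roster white males": (2.39, 7),
    "Exam white males": (1.15, 7),
    "Job 1811 white males": (0.92, 7),
}

def sir_with_garwood_ci(observed, expected, alpha=0.05):
    """SIR = observed/expected with an exact Poisson (Garwood) CI."""
    lower = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected) if observed else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
    return observed / expected, lower, upper

for name, (expected, observed) in populations.items():
    sir, lo, hi = sir_with_garwood_ci(observed, expected)
    print(f"{name}: SIR {sir:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# Rerunning with observed = 5 covers the verified-case rows of Table 3
# in the same way.
```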
### 3.4. Bladder Cancer Incidence Ratios
Table 3 presents the standardized incidence ratios (SIRs) calculated for each of the four defined study populations: the full roster cohort, all white males in the full roster cohort, all white males with examinations (cancer and tobacco use histories), and all white males with examinations and work histories who were also criminal investigators. To assess the effect of the two unverified cancer cases on SIR outcomes, SIRs were computed both for the scenario with seven reported cancer cases and for the scenario with five verified cases for each of the four study populations.

When computed with all seven cases, the SIR is 2.41 (95% CI 1.17–4.96) for the entire roster cohort, 2.93 (95% CI 1.42–6.07) for the white male cohort, 6.08 (95% CI 2.94–12.54) for white males with exams, and 7.63 (95% CI 3.70–15.75) for white male criminal investigators with exams and work histories (Table 3). When recalculated with only the five verified cases, the SIR is 1.72 (95% CI 0.73–4.02) for the entire roster cohort, 2.09 (95% CI 0.90–1.91) for white males, 4.34 (95% CI 1.85–10.16) for white males with exams, and 5.45 (95% CI 2.33–12.76) for white male criminal investigators with exams and work histories (Table 3). The elevated SIRs are statistically significant for all four of these populations when all seven cases are included in the analysis, and remain statistically significant for white males with exams and for white male criminal investigators with exams and work histories when only the five verified cases are used.

Age-specific cancer incidence rates in the white male ATF population were greater than the rates for the adjusted US reference SEER population in every age group in which cases occurred (Table 4). This finding is expected, since 90% of bladder cancers occur in individuals over the age of 55 and all seven cases in the ATF population were younger than 55 years at the time of diagnosis. The highest age-specific relative risk for bladder cancer in the ATF population compared to the reference population, and the only one of statistical significance, was seen in the 30–34 age group, the youngest age group experiencing bladder cancer within the ATF population.

Table 4. Age-specific white male bladder cancer incidence rates (per 100,000) for ATF and SEER (13 registries), with respective rate ratios, for the period 1993–2007.

| Age | ATF | SEER | RR | 95% CI-L | 95% CI-U |
|---|---|---|---|---|---|
| 20–24 | 0.00 | 0.37 | 0.00 | 0.00 | 0.00 |
| 25–29 | 0.00 | 0.57 | 0.00 | 0.00 | 0.00 |
| 30–34 | 40.04 | 1.17 | 34.32 | 3.78 | 124.24 |
| 35–39 | 17.91 | 2.80 | 6.40 | 0.19 | 35.63 |
| 40–44 | 47.45 | 5.83 | 8.13 | 0.89 | 29.45 |
| 45–49 | 27.55 | 13.13 | 2.10 | 0.06 | 11.68 |
| 50–54 | 37.02 | 25.33 | 1.46 | 0.04 | 8.14 |
| 55–59 | 0.00 | 50.57 | 0.00 | 0.00 | 0.00 |
| 60–64 | 0.00 | 88.33 | 0.00 | 0.00 | 0.00 |
| 65–69 | 0.00 | 145.70 | 0.00 | 0.00 | 0.00 |
| 70–74 | 0.00 | 209.43 | 0.00 | 0.00 | 0.00 |
| 75–79 | 0.00 | 275.40 | 0.00 | 0.00 | 0.00 |
| 80–84 | 0.00 | 327.83 | 0.00 | 0.00 | 0.00 |
| 85–89 | 0.00 | 353.73 | 0.00 | 0.00 | 0.00 |
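The ATF column of Table 4 follows directly from the case and person-year counts in Table 2 (incidence per 100,000 person-years). A short Python check against the published table values is sketched below; small discrepancies in the rate ratios (about 34.2 versus the published 34.32 for the 30–34 group) are attributable to rounding of the SEER rates as printed.

```python
# Case and person-year totals for white males (Table 2, "Total" column),
# and SEER white male reference rates per 100,000 (Table 4).
table2_totals = {
    "30-34": (2, 4994.8),
    "35-39": (1, 5583.0),
    "40-44": (2, 4214.6),
    "45-49": (1, 3629.8),
    "50-54": (1, 2701.1),
}
seer_rates = {"30-34": 1.17, "35-39": 2.80, "40-44": 5.83,
              "45-49": 13.13, "50-54": 25.33}

for group, (cases, person_years) in table2_totals.items():
    atf_rate = cases / person_years * 100_000   # incidence per 100,000 person-years
    rate_ratio = atf_rate / seer_rates[group]   # ATF rate relative to SEER
    print(f"{group}: ATF rate {atf_rate:.2f}, rate ratio {rate_ratio:.2f}")
```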
## 4. Discussion
Recognition of a bladder cancer cluster among white male criminal investigators participating in an ATF medical surveillance program raised concern that the employee population was experiencing a greater-than-expected incidence of bladder cancer and that post-fire/post-blast scene investigation might be associated with increased risk of bladder cancer. Part 1 of this epidemiologic study determined that bladder cancer incidence in the white male study population was statistically significantly elevated for the period 1993–2007.

This study illustrates the twofold utility of using a medical surveillance program to monitor the health of an employee population potentially exposed to hazardous agents and to perform epidemiologic analysis of the significance of cancer occurring within the population. Although the ATF medical surveillance program evolved over time in several ways, including changing from voluntary to mandatory participation, and despite an annual participation rate of 50–67% after the program became mandatory, the requisite data were sufficient to compute SIRs for the ATF study population, which showed white males to be at increased risk for bladder cancer when compared to white males in the US reference population.

All bladder cancer cases in the ATF cohort were reported by white males, who constituted about 72% of the full roster cohort. In the incidence analyses of the two larger ATF populations, the full roster cohort and white males in the full roster cohort, the cancer risk was elevated in the analyses performed with both seven reported and five verified cases, but was statistically significant only in the analyses with the seven reported cases. Since these two ATF populations included individuals who had not participated in the medical surveillance program and whose cancer history was unknown, amounting to 28% of the full roster cohort and 31% of white males in the full roster cohort, the actual number of cases of bladder cancer within these two populations could be greater than the observed seven reported and five verified cases, which would lead to even higher SIRs than the ones calculated with the seven and five cases. In the two smaller ATF populations, white males with exams and white male criminal investigators with exams, the computed SIRs were elevated and statistically significant with both seven reported and five verified cancer cases. The SIRs of the two smaller populations, understandably higher than those of the two larger populations, might best depict the true cancer risk for the ATF cohort, as the reported cancer history is known for all individuals in these two smaller populations.

The finding that white male criminal investigators in the ATF population are at increased risk for bladder cancer contrasts with the findings of prior epidemiologic studies of bladder cancer incidence in law enforcement occupations, in which law enforcement and related job categories were generally found not to be at increased risk for bladder cancer [11, 32, 35–39, 41, 42, 44–46]. In Reulen et al.’s [32] meta-analysis, the summary relative risk was 1.10 (95% CI 0.95–1.29) for police officers and 1.07 (95% CI 0.96–1.19) for protective service occupations. What may distinguish this population of ATF criminal investigators from other populations of law enforcement specialists is the presence of a sizable subset of ATF criminal investigators who specialize in the investigation of post-fire and post-blast scenes.
What is known from the medical surveillance program is that six of the seven bladder cancer cases in the current study had occupational histories of working these scenes while employed with ATF. Thus, the increased incidence of bladder cancer identified among white male criminal investigators in the current study appears to be associated with the performance of post-fire/post-blast investigations. If this is the case, the increased risk for bladder cancer in this subset of specialized criminal investigators is sufficiently strong to influence the bladder cancer risk within the entire ATF study population.

The magnitude of the SIRs computed in this study deserves some discussion. For the population of white male criminal investigators with exams, the SIRs were 7.63 (95% CI 3.70–15.75) for seven reported cases and 5.45 (95% CI 2.33–12.76) for five verified cases, and for the slightly larger population of all white males with exams, the SIRs were 6.08 (95% CI 2.94–12.54) for seven reported cases and 4.34 (95% CI 1.85–10.16) for five verified cases. In individual epidemiologic studies of bladder cancer incidence in other occupations and industries, statistically significant elevations in relative risk have been found with a 1.1-fold to fivefold increase [11, 32, 33, 39, 41–44, 46, 52–54], and even with a sixfold to tenfold increase for some occupations such as chemical workers [41, 47], dye manufacturing [55, 56], railroad workers [47], and physicians [41]. In one epidemiologic study on firefighters, the reported statistically significant odds ratio for bladder cancer was as high as 22.7 [41]. Thus, some epidemiologic studies on other occupations have obtained elevated relative risks for bladder cancer of the same order of magnitude as that found in the present study on criminal investigators. In Reulen et al.’s [32] meta-analysis on the association between bladder cancer and occupations, however, the statistically significant summary relative risks found for several occupations fell only in the 1.1–1.3 range. These occupations included miners, rubber workers, leather workers, four types of professional drivers, and mechanics, but not chemical workers, firefighters, police officers, protective service occupations, or health care professionals.

The elevated risk for bladder cancer in the current study cohort also approximates the increased risk for bladder cancer seen in smokers compared to nonsmokers, as demonstrated in various epidemiologic studies which show smokers to have a twofold to sixfold increased risk for bladder cancer [1, 6, 57–59]. Further corroborating study is advised to verify the magnitude of the increased risk found among criminal investigators in the current study.

Although the elevated SIRs for the two populations with exams remained statistically significant when analyzed with only the five verified cases, having two of the seven reported cases (28%) unverified at the time of statistical analysis constitutes a weakness of the study and illustrates a limitation of using data from a medical surveillance program to conduct a cancer incidence analysis. With so few total cases, the difference in number between reported and verified cases can affect the significance of study outcomes, as demonstrated by comparing the SIRs computed with both the seven reported and the five verified cases for the full roster cohort and for white males in the full roster cohort.
Since the study analysis was completed, ongoing efforts to verify the remaining cases have succeeded in verifying one of the two.

Another point to make is that the SIRs for the populations with exams could in actuality be artificially high, as only 72% of individuals in the full roster cohort underwent physical examination, and individuals without medical issues may have selectively avoided coming in for exams. Since the institution of the mandatory program in 2002, however, 86% of individuals employed by ATF between 2002 and 2007 obtained at least one exam during this time frame, and these 2,697 employees accounted for 99% of the 2,712 individuals in the full roster cohort with exams. With this level of participation in the mandatory program since 2002, the potential for bias due to exam avoidance is likely very limited.

In conclusion, white male members of the ATF cohort experienced a statistically significant increased risk for bladder cancer when compared to white males in the US population for the study period 1993–2007. Among white males with exams and white male criminal investigators with exams, the elevated risk was demonstrated in computations with both seven reported and five verified cancer cases. No increased bladder cancer risk was observed for nonwhite males or for females in the study cohort.

With six of the seven cases in the bladder cancer cluster having known histories of investigating post-fire and post-blast scenes while employed with ATF, scene investigation work appeared to be linked with the observed increase in bladder cancer incidence. Part 2 of the study will evaluate the association of post-fire/post-blast scene work history and risk for bladder cancer, while controlling for tobacco use history.
---
*Source: 101850-2012-12-09.xml* | 101850-2012-12-09_101850-2012-12-09.md | 52,155 | Evaluation of a Bladder Cancer Cluster in a Population of Criminal Investigators with the Bureau of Alcohol, Tobacco, Firearms and Explosives—Part 1: The Cancer Incidence | Susan R. Davis; Xuguang Tao; Edward J. Bernacki; Amy S. Alfriend | Journal of Environmental and Public Health
(2012) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2012/101850 | 101850-2012-12-09.xml | ---
## Abstract
This study investigated a bladder cancer cluster in a cohort of employees, predominately criminal investigators, participating in a medical surveillance program with the United States Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) between 1995 and 2007. Standardized incidence ratios (SIRs) were used to compare cancer incidences in the ATF population and the US reference population. Seven cases of bladder cancer (five cases verified by pathology report at time of analysis) were identified among a total employee population of 3,768 individuals. All cases were white males and criminal investigators. Six of seven cases were in the 30 to 49 age range at the time of diagnosis. The SIRs for white male criminal investigators undergoing examinations were 7.63 (95% confidence interval = 3.70–15.75) for reported cases and 5.45 (2.33–12.76) for verified cases. White male criminal investigators in the ATF population are at statistically significant increased risk for bladder cancer.
---
## Body
## 1. Introduction
The Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), a law enforcement agency within the United States Department of Justice, has been in the business of investigating the cause and origin of fires and explosions since the 1970s. ATF typically collaborates with other federal, state, and local authorities to provide this service through several venues. At the local level, criminal investigators with ATF, especially ones holding special designations as certified fire investigators (CFIs), work side-by-side with their state and local counterparts to assist with investigation of post-fire and post-blast scenes. These individuals may also serve on city-based federal arson task forces, in partnership with other federal, state, and local investigators, to address acute arson problems. At the national level, ATF maintains a well-trained national response team (NRT) which provides comprehensive response to assist these other jurisdictions with onsite investigation of major arson and bombing incidents. The NRT is a multispecialty team comprised of criminal investigators with post-fire and post-blast investigation expertise, explosives enforcement specialists, forensic chemists, fire protection engineers, and other technical experts. ATF established the NRT program in 1978 and subsequently instituted a program for certification of criminal investigators as fire investigators (CFIs) in 1986.In 1995, ATF commenced a voluntary medical surveillance program for members of the NRT to monitor the health of participants working in the potentially hazardous environment of the post-fire/post-blast scene. The agency extended the voluntary program to all CFIs in 1997. By 2000, three white male participants of the program had reported diagnoses of bladder cancer between the years 1994 and 1999. All three individuals were nonsmokers and younger than 50 years of age when diagnosed. This information raised concern that a bladder cancer cluster was occurring among scene investigators. In response to this concern, in 2002 ATF mandated participation in the medical surveillance program for all NRT members, all explosives enforcement specialists, and all criminal investigators, including those not involved in post-fire/post-blast investigations. To facilitate future epidemiologic evaluation of the significance of the apparent cancer cluster, ATF also committed medical surveillance data entry into an electronic database. By 2006, four additional white male program participants had reported diagnoses of bladder cancer between 2001 and 2005. Combined, the seven reported cases ranged in age from 32 to 53 in the year of diagnosis. From a job title perspective, all cases were among criminal investigators, who comprised 96% of the employees participating in the surveillance program. In addition, six of the seven cases reported working post-fire and post-blast scenes while employed with ATF. None of the seven cases reported work histories which involved investigation of such scenes prior to their employment with ATF.The presence of this cancer cluster generated two questions. First, was this group of employees, predominantly criminal investigators, experiencing a greater than expected incidence of bladder cancer from a demographic perspective? Second, was post-fire/post-blast scene investigation associated with increased risk for bladder cancer? Part 1 of this study addresses the question on cancer incidence and Part 2 addresses the question on cancer risk associated with investigation of these scenes. 
Analyses for both parts use data collected through the medical surveillance program.In the United States, bladder cancer is expected to be the sixth most commonly occurring cancer, excluding basal and squamous cell skin cancers, in 2012 [1]. As reported by the American Cancer Society, the estimated number of new bladder cancer cases in 2012 is 73,510, which equates to approximately 4.5% of all new cases of cancer [1]. The demographic characteristics associated with greatest risk for bladder cancer include male gender, white race, and increasing age [2]. According to the surveillance epidemiology and end results (SEER) age-adjusted incidence data for the period 2005–2009, bladder cancer occurs four times more often in men than in women [3]. In this data set, it ranks as the fourth most common cancer among men, but only the twelfth most common cancer among women [3]. The incidence among whites is 1.78 times greater than the incidence among blacks, while the incidence among Hispanics is 0.90 times the incidence among blacks [2, 3]. Bladder cancer incidence is the lowest among Asian/Pacific islanders and American Indian/Alaska natives [3]. The risk of onset increases with age over the age of 40 and is greatest during the ninth decade of life [3]. About 90% of cases occur in individuals over the age of 55, with the average age being 73 at the time of diagnosis [2]. In the ATF cancer cluster, all the cases of bladder cancer represented both the gender and the race with the greatest incidence of bladder cancer, but the ages of the cases at the time of diagnosis were relatively young for bladder cancer as all cases occurred in individuals less than 55 years of age.Recognized nondemographic risk factors include cigarette smoking, bladder birth defects, chronic bladder inflammation, genetic predisposition, use of herbal remedies containing aristolochic acid, drinking water containing arsenic and chlorination by-products, prior history of bladder cancer, chemotherapy and radiation therapy, and specific industries and occupations with exposures to known or suspect bladder carcinogens [2, 4–13]. Many studies have explored the association of fluid intake and type of fluid intake with risk for bladder cancer, but findings for both total fluids and specific fluids have been inconsistent, likely due to influences of confounding variables such as the presence of bladder carcinogens in tap water [14–24]. Two studies have demonstrated that increased frequency of urination is associated with reduced risk for bladder cancer [18, 21].Smoking, the number one risk factor for bladder cancer, is estimated to cause approximately 50% of bladder cancer cases in men and 30% of cases in women [6]. Second to smoking, occupational exposures to carcinogens may account for as few as 10% of cases in men and five percent of cases in women to as many as 20–25% of all bladder cancers [6, 11]. Established at risk industries include the manufacturing of products such as synthetic dyes and paints, cables, textiles, leather works, and aluminum and the petrochemical, coal tar, and rubber industries [5–7, 12, 25–30]. A number of specific occupations have also been identified to be associated with increased risk of bladder cancer. These include, but are not limited to, cooks and kitchen workers, electricians, hairdressers, leather workers, machinists, petroleum workers, rubber workers, coal miners, truckers, and vehicle mechanics, as summarized by Schulte et al. 
[12] in 1987, as well as coke oven workers, roofers, dry cleaners, chimney sweeps, and painters, as addressed by others in more recent literature [5–7, 25, 31–34]. Exposure to both cigarette smoke and occupationally related bladder carcinogens may work synergistically to increase further the risk for bladder cancer [2, 7].As an occupation, the broad job category of law enforcement is generally not recognized for being at increased risk for bladder cancer. In 1987, Schulte et al. [12] did include protection guards on their summary list of occupations associated with risk for bladder cancer, but acknowledged that the potential etiologic agent was unknown. As the rate of occurrence of bladder cancer is almost five times greater than the associated death rate in the US, with 80% of cases surviving for five or more years after diagnosis [1], cancer incidence is a more sensitive measure of occupational risk for bladder cancer than related mortality and it is important to differentiate between epidemiologic studies looking at cancer incidence and those looking at cancer mortality. Many epidemiologic studies on the association of bladder cancer incidence and occupations have not found police officers, guards, and related categories, such as protective services and government worker inspection/investigation occupations, to be associated with increased risk for bladder cancer [11, 35–46]. One exception was a study by Howe et al. [47] which showed guards and watchmen to have a statistically significant age-adjusted relative risk for bladder cancer of 4.0. In Reulen et al.’s [32] 2009 meta-analysis on the association between bladder cancer incidence and occupation, summary relative risks for bladder cancer were obtained for protective service occupations from the findings of 23 studies and for police officers and guards from the findings of 14 studies. These summary relative risks were only marginally elevated and not statistically significant, at 1.07 (95% confidence interval (CI) 0.96–1.19) for protective services and 1.10 (95% CI 0.95–1.29) for police officers and guards. Three mortality studies on police officer cohorts also did not find statistically significant increases in mortality from bladder cancer overall [48–50]. One of these studies did demonstrate that policemen who were professional drivers had significantly increased bladder cancer-related mortality [48] and another showed a higher than expected mortality rate for police officers with 10–19 years of service [50]. As some, but not all, epidemiology studies have demonstrated an increased risk for bladder cancer in jobs with exposure to diesel or traffic fumes [6, 40, 43, 51, 52] and one meta-analysis has shown increased cancer risk for several types of vehicle drivers [32], the use of broad law enforcement job categories in studies on cancer risk and occupation assumes a degree of uniformity in exposures to potential bladder carcinogens and risk for bladder cancer, which may not always be the case.Part 1 of this study investigates the statistical significance of the incidence of bladder cancer in the ATF employee population comprised of criminal investigators and members of the NRT.
## 2. Methods
### 2.1. Study Time Frame and Study Subjects
The time interval of this study is 1993, the year preceding diagnosis of the first bladder cancer case, through 2007. For this period, a full roster cohort was constructed from the annual staffing rosters provided by ATF for each calendar year from 1993 through 2007. The annual staffing rosters included all criminal investigators, explosives enforcement specialists, forensic chemists, fire protection engineers, and a small number of other specialists typically affiliated with NRT work, regardless of individual membership on the NRT or eligibility for participation in the medical surveillance program, who were currently working for ATF in each respective calendar year. ATF provided the demographic study parameters of gender, race, and age, and the job series and titles for all members of the full roster cohort. Members of the full roster cohort are dispersed throughout the United States and its territories Puerto Rico and Guam and may move around within this geographic area during their employment with ATF.With the advent of the medical surveillance program in 1995, a subset of the full roster cohort, comprised of employees participating in the program, was created. As explained in the introduction, the program was initially voluntary and offered to members of the NRT, but became mandatory in 2002 for all criminal investigators, explosives enforcement specialists, and members of the NRT. Setup in partnership with Federal Occupational Health (FOH), United States Department of Health and Human Services, the program consisted of an annual evaluation which included a medical history and tobacco use questionnaire, physical examination, and laboratory and ancillary tests. Collection of detailed work history information, including job series and titles, began in 2003 with the institution of a work history questionnaire. The use of FOH’s electronic database, the Occupational Health Information Management System (OHIMS), to facilitate future epidemiologic evaluation of the bladder cancer cluster began in 2002. Data for key study variables from pre-2002 exams were retrospectively retrieved and also entered into OHIMS.For the cohort of employees participating in the medical surveillance program from 1995 to 2007, pertinent data collected with each annual evaluation included the demographic variables, gender, race, and age, as well as cancer history and tobacco use history. Data on job series and titles were also provided by members of the cohort participating in the program between 2003 and 2007. The demographic data and the job series and titles data provided by ATF for the full roster cohort were subsequently cross-referenced with the same data provided by employees participating in the medical surveillance program. Any data inconsistencies between the two sources were resolved by medical record review or phone contact with employees.
### 2.2. Identification and Verification of Bladder Cancer Cases
As stated in the introduction, all bladder cancer cases were initially identified by employees self-reporting the diagnosis and year of diagnosis at the time of the annual medical surveillance evaluation. The self-reported cases were subsequently contacted by the occupational medicine physician overseeing the medical surveillance program, who informed them of the bladder cancer cluster and of the plans to evaluate the significance of the cluster through epidemiologic analysis and requested their assistance with voluntary provision of a pathology report for verification of case diagnosis. Requests for pathology reports were accompanied by provision of medical release forms for completion by cases.
### 2.3. Study Design
This is a cohort study in which standardized incidence ratios (SIRs) were calculated for four defined populations of ATF employees to compare the observed bladder cancer case numbers in each ATF population with the expected cancer case numbers based on incidence rate in the US general population, appropriately adjusted for age and stratified by sex and race, as relevant to each of the four ATF populations. The US general population was chosen as the best reference population for analysis due to the nation-wide dispersion of ATF employees under study. As all the bladder cancer cases occurred among white male criminal investigators, the populations chosen for analysis included: (1) the full roster cohort comprising all males and females, (2) all white males in the full roster cohort, (3) all white males in the full roster cohort with examinations (cancer and tobacco use histories), and (4) all white males in the full roster cohort with both examinations and work histories who were also criminal investigators. Determination of the expected cancer incidence rate was based on data from the surveillance epidemiology and end results (SEER) program for the US population for the period 1993–2007. The SIR estimates the relative risk of bladder cancer incidence in the ATF population compared to the US population adjusted for age and stratified by race and gender. Computations for each population included determination of the person-year distribution during the study period, which served as the denominator for the respective SIR analysis. One person-year was counted for each year an individual was a member of the cohort. The person-years were arranged by five-year age increments and three five-year time intervals, 1993–1997, 1998–2002, and 2003–2007.
## 3. Results
### 3.1. Study Subjects
Table 1 shows the distribution of individuals by gender and race for the full roster cohort, for the subset of employees with surveillance examinations, for the subset of employees with surveillance examinations and work histories, and for the subset of criminal investigators with surveillance exams and work histories. The percent distribution of individuals by gender and race is comparable across these four populations.

Table 1. Distribution of self-reported bladder cancers and employees in key ATF study populations by gender and race (1993–2007).

| Study population | White males: cases | White males: employees (%) | Nonwhite males: cases | Nonwhite males: employees (%) | White females: cases | White females: employees (%) | Nonwhite females: cases | Nonwhite females: employees (%) | Total: cases | Total: employees (%) | Percent of full roster |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Full roster | 7 | 2723 (72.3%) | 0 | 570 (15.1%) | 0 | 358 (9.5%) | 0 | 117 (3.1%) | 7 | 3768 (100%) | 100.00 |
| Examinations | 7 | 1885 (69.5%) | 0 | 467 (17.2%) | 0 | 271 (10.0%) | 0 | 89 (3.3%) | 7 | 2712 (100%) | 71.97 |
| Examinations and work histories | 7 | 1771 (69.5%) | 0 | 441 (17.3%) | 0 | 253 (9.9%) | 0 | 84 (3.3%) | 7 | 2549 (100%) | 67.65 |
| Criminal investigators with examinations and work histories | 7 | 1715 (69.2%) | 0 | 436 (17.6%) | 0 | 244 (9.8%) | 0 | 83 (3.4%) | 7 | 2478 (100%) | 65.76 |

The full roster cohort comprised 3,768 individuals (Table 1). Criminal investigators, with job series 1811, accounted for 96% of all employees in the full roster cohort and 96% of all white males in the full roster cohort. Within the full roster cohort, only 18.2% of members were nonwhite and only 12.6% of members were female.

Of the full roster cohort, 2,712 (72%) members participated in the medical surveillance program between 1995 and 2007 and had data for one or more examinations in the program’s electronic database (Table 1). The percentage of full roster cohort members with examinations did not exceed 72%, due in part to the medical surveillance program being voluntary and open only to members of the NRT and to CFIs prior to 2002. Since 2002, when the program became mandatory for all criminal investigators, explosives enforcement specialists, and members of the NRT, the percentage of currently employed cohort members who obtained annual examinations ranged from a low of 50% in 2003 to a high of 67% in 2006. Despite this variance in the annual rate of participation in the mandatory program, 2697 of 3136 (86%) full roster cohort members who were employed for one or more years between 2002 and 2007 obtained at least one medical exam.

Job history data, collected from employees between 2003 and 2007 at the time of the annual exam, were available for 2549 (68%) members of the full roster cohort. Of these individuals, 2478 (97%) were criminal investigators (Table 1).
### 3.2. Characteristics of Bladder Cancer Cases
During the study period, seven individuals reported bladder cancer diagnoses. At the time of the analysis, five of the seven cases had provided medical documentation (pathology reports) verifying the diagnosis of bladder cancer and the diagnosis year. Another case provided verifying documentation following the completion of the analysis.

As affirmed by review of the pathology reports of the urinary bladder biopsies of the five cases verified at the time of the analysis, four of the cases were low grade papillary transitional cell carcinomas and one was transitional cell carcinoma in situ. The sixth case, verified after the analysis, was also a low grade papillary transitional cell carcinoma.

The first case was diagnosed in 1994 and the most recent case was diagnosed in 2005. As already known from the medical surveillance program, all bladder cancers occurred in white males and in criminal investigators. Table 1 includes the distribution of reported bladder cancer cases by gender and race for each defined employee population. White males comprised 72% (2723/3768) of the full roster cohort and 69–70% of each of the three subset populations, including criminal investigators with medical surveillance examinations and work histories (69% (1715/2478)). The cases ranged from 32 to 53 years of age in the year of diagnosis, with three individuals being in their 30s, three in their 40s, and one in his early 50s. Table 2 shows the distribution of the self-reported bladder cancer cases at the time of diagnosis in the same five-year age increments and three five-year time intervals as used to establish the person-year distributions for each SIR analysis. Two cases were diagnosed in 1993–1997, four cases were diagnosed in 1998–2002, and one case was diagnosed in 2003–2007.

Table 2. Distribution of self-reported bladder cancer cases and of person-years observed for white males in the full roster cohort by calendar period and age group.

| Age group | 1993–1997: cases | 1993–1997: person-years | 1998–2002: cases | 1998–2002: person-years | 2003–2007: cases | 2003–2007: person-years | Total: cases | Total: person-years |
|---|---|---|---|---|---|---|---|---|
| 0–4 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
| 5–9 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
| 10–14 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
| 15–19 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
| 20–24 | 0 | 25.1 | 0 | 81.6 | 0 | 71.3 | 0 | 178.0 |
| 25–29 | 0 | 642.9 | 0 | 611.6 | 0 | 749.7 | 0 | 2004.1 |
| 30–34 | 2 | 1730.8 | 0 | 1593.4 | 0 | 1670.6 | 2 | 4994.8 |
| 35–39 | 0 | 1321.5 | 0 | 2136.4 | 1 | 2125.1 | 1 | 5583.0 |
| 40–44 | 0 | 773.1 | 2 | 1329.7 | 0 | 2111.9 | 2 | 4214.6 |
| 45–49 | 0 | 1574.9 | 1 | 763.0 | 0 | 1291.9 | 1 | 3629.8 |
| 50–54 | 0 | 1046.2 | 1 | 1083.5 | 0 | 571.5 | 1 | 2701.1 |
| 55–59 | 0 | 261.6 | 0 | 383.1 | 0 | 264.5 | 0 | 909.2 |
| 60–64 | 0 | 12.4 | 0 | 113.1 | 0 | 95.9 | 0 | 221.4 |
| 65–69 | 0 | 3.9 | 0 | 5.7 | 0 | 43.3 | 0 | 52.9 |
| 70–74 | 0 | 1.8 | 0 | 2.7 | 0 | 3.0 | 0 | 7.5 |
| 75–79 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
| 80–84 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
| 85–89 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
| Total | 2 | 7394.0 | 4 | 8103.8 | 1 | 8998.7 | 7 | 24496.5 |

*The one remaining unverified cancer case occurred in the 1998–2002 time frame.
### 3.3. Distribution of Person-Years within the Study Populations Undergoing Incidence Analysis
Table 3 summarizes the total person-years calculated for each of the four study populations undergoing incidence analysis. The number of total person-years ranged from a high of 34,818 for the full roster cohort to a low of 17,976 for the population of white male criminal investigators with exams and work histories. The pattern of distribution of person-years was similar for all four study populations and is illustrated in Table 2 for the population of white males in the full roster cohort. The vast majority of employees in each study population fell between 25 and 54 years of age in each of the five-year time intervals. This distribution pattern reflects ATF’s practice of hiring criminal investigators with prior work experience and the mandated retirement age of 57 years for federal criminal investigators.

Table 3. Standardized incidence ratios (SIRs) of self-reported and verified urinary bladder cancer cases for the period 1993–2007.

| Study population | Employee # | Person-years | Expected cases* | Observed cases | SIR | 95% CI |
|---|---|---|---|---|---|---|
| Entire roster cohort | 3,768 | 34,818.01 | 2.91 | 7 | 2.41 | 1.17–4.96 |
| Entire roster cohort** | 3,768 | 34,818.01 | 2.91 | 5 | 1.72 | 0.73–4.02 |
| Roster white males | 2,723 | 24,496.47 | 2.39 | 7 | 2.93 | 1.42–6.07 |
| Roster white males** | 2,723 | 24,496.47 | 2.39 | 5 | 2.09 | 0.90–1.91 |
| Exam white males | 1,885 | 19,648.25 | 1.15 | 7 | 6.08 | 2.94–12.54 |
| Exam white males** | 1,885 | 19,648.25 | 1.15 | 5 | 4.34 | 1.85–10.16 |
| Job 1811 white males | 1,715 | 17,976.42 | 0.92 | 7 | 7.63 | 3.70–15.75 |
| Job 1811 white males** | 1,715 | 17,976.42 | 0.92 | 5 | 5.45 | 2.33–12.76 |

*Expected number of cases calculated using US incidence rates from SEER for the same period.
**SIRs determined for the five verified cases with pathology reports.
### 3.4. Bladder Cancer Incidence Ratios
Table 3 presents the standardized incidence ratios (SIRs) calculated for each of the four defined study populations: the full roster cohort, all white males in the full roster cohort, all white males with examinations (cancer and tobacco use histories), and all white males with examinations and work histories who were also criminal investigators. To assess the effect of the two unverified cancer cases on SIR outcomes, SIRs were computed both for the scenario with seven reported cancer cases and for the scenario with five verified cases for each of the four study populations.

When computed with all seven cases, the SIR is 2.41 (95% CI 1.17–4.96) for the entire roster cohort, 2.93 (95% CI 1.42–6.07) for the white male cohort, 6.08 (95% CI 2.94–12.54) for white males with exams, and 7.63 (95% CI 3.70–15.75) for white male criminal investigators with exams and work histories (Table 3). When recalculated with only the five verified cases, the SIR is 1.72 (95% CI 0.73–4.02) for the entire roster cohort, 2.09 (95% CI 0.90–1.91) for white males, 4.34 (95% CI 1.85–10.16) for white males with exams, and 5.45 (95% CI 2.33–12.76) for white male criminal investigators with exams and work histories (Table 3). The elevated SIRs are statistically significant for all four of these populations when all seven cases are included in the analysis and remain statistically significant for white males with exams and for white male criminal investigators with exams and work histories when only the five verified cases are used.

Age-specific cancer incidence rates in the white male ATF population were greater than the rates for the age-adjusted US reference SEER population in each age group in which cases occurred (Table 4). This finding is expected since 90% of bladder cancers occur in individuals over the age of 55 and all seven cases in the ATF population were younger than 55 years at the time of diagnosis. The highest age-specific relative risk for bladder cancer in the ATF population compared to the reference population, and the only one of statistical significance, was seen in the 30–34 age group, the youngest age group experiencing bladder cancer within the ATF population.

Table 4. Age-specific white male bladder cancer incidence rates (per 100,000) for ATF and SEER (13 registries), with respective rate ratios (RR), for the period 1993–2007.

| Age | ATF | SEER | RR | 95% CI-L | 95% CI-U |
|---|---|---|---|---|---|
| 20–24 | 0.00 | 0.37 | 0.00 | 0.00 | 0.00 |
| 25–29 | 0.00 | 0.57 | 0.00 | 0.00 | 0.00 |
| 30–34 | 40.04 | 1.17 | 34.32 | 3.78 | 124.24 |
| 35–39 | 17.91 | 2.80 | 6.40 | 0.19 | 35.63 |
| 40–44 | 47.45 | 5.83 | 8.13 | 0.89 | 29.45 |
| 45–49 | 27.55 | 13.13 | 2.10 | 0.06 | 11.68 |
| 50–54 | 37.02 | 25.33 | 1.46 | 0.04 | 8.14 |
| 55–59 | 0.00 | 50.57 | 0.00 | 0.00 | 0.00 |
| 60–64 | 0.00 | 88.33 | 0.00 | 0.00 | 0.00 |
| 65–69 | 0.00 | 145.70 | 0.00 | 0.00 | 0.00 |
| 70–74 | 0.00 | 209.43 | 0.00 | 0.00 | 0.00 |
| 75–79 | 0.00 | 275.40 | 0.00 | 0.00 | 0.00 |
| 80–84 | 0.00 | 327.83 | 0.00 | 0.00 | 0.00 |
| 85–89 | 0.00 | 353.73 | 0.00 | 0.00 | 0.00 |
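As a worked instance of how Table 4 follows from Table 2 (our arithmetic, not an additional result): the age-specific ATF rate is the case count divided by the person-years, scaled to 100,000, and the rate ratio (RR) divides this by the corresponding SEER rate. For the 30–34 age group,

$$\text{ATF rate}=\frac{2}{4994.8}\times 100{,}000\approx 40.04,\qquad \text{RR}\approx\frac{40.04}{1.17}\approx 34.2,$$

consistent with the tabulated RR of 34.32 (the published ratio was presumably computed from unrounded rates).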
## 4. Discussion
Recognition of a bladder cancer cluster among white male criminal investigators participating in an ATF medical surveillance program raised concern that the employee population was experiencing a greater-than-expected incidence of bladder cancer and that post-fire/post-blast scene investigation might be associated with increased risk of bladder cancer. Part 1 of this epidemiologic study determined that bladder cancer incidence in the white male study population was statistically significantly elevated for the period 1993–2007.

This study illustrates the twofold utility of a medical surveillance program: monitoring the health of an employee population potentially exposed to hazardous agents, and enabling epidemiologic analysis of the significance of cancer occurring within the population. Although the ATF medical surveillance program evolved over time in several ways, including changing from voluntary to mandatory participation, and despite an annual participation rate of 50–67% after the program became mandatory, the requisite data were sufficient to perform SIR analyses for the ATF study population, which showed white males to be at increased risk for bladder cancer when compared to white males in the US reference population.

All bladder cancer cases in the ATF cohort were reported by white males, who constituted about 72% of the full roster cohort. In the incidence analyses of the two larger ATF populations, the full roster cohort and white males in the full roster cohort, the cancer risk was elevated for analyses performed with both seven reported and five verified cases, but was statistically significant only for the analyses with the seven reported cases. Since these two ATF populations included individuals who had not participated in the medical surveillance program and whose cancer history was unknown, amounting to 28% of the full roster cohort and 31% of white males in the full roster cohort, the actual number of bladder cancer cases within these two populations could be greater than the observed seven reported and five verified cases, which would lead to even higher SIRs than the ones calculated here. For the two smaller ATF populations, white males with exams and white male criminal investigators with exams, the computed SIRs were elevated and statistically significant with both seven reported and five verified cancer cases. The SIRs of the two smaller populations, understandably higher than those of the two larger populations, might best depict the true cancer risk for the ATF cohort, as the reported cancer history is known for all individuals in these two smaller populations.

The finding that white male criminal investigators in the ATF population are at increased risk for bladder cancer contrasts with the findings of prior epidemiologic studies of bladder cancer incidence in law enforcement occupations, in which law enforcement and related job categories were generally found not to be at increased risk for bladder cancer [11, 32, 35–39, 41, 42, 44–46]. In Reulen et al.’s [32] meta-analysis, the summary relative risk was 1.10 (95% CI 0.95–1.29) for police officers and 1.07 (95% CI 0.96–1.19) for protective service occupations. What may make this population of ATF criminal investigators distinct from other populations of law enforcement specialists is the presence of a sizable subset of ATF criminal investigators who specialize in the investigation of post-fire and post-blast scenes. What we know from the medical surveillance program is that six of the seven bladder cancer cases in the current study had occupational histories of working these scenes while employed with ATF. Thus, the increased incidence of bladder cancer identified among white male criminal investigators in the current study appears to be associated with the performance of post-fire/post-blast investigations. If this is the case, the increased risk for bladder cancer in this subset of specialized criminal investigators is sufficiently strong to influence the bladder cancer risk within the entire ATF study population.

The magnitude of the SIRs computed in this study deserves some discussion. For the population of white male criminal investigators with exams, the SIRs were 7.63 (95% CI 3.70–15.75) for seven reported cases and 5.45 (95% CI 2.33–12.76) for five verified cases, and for the slightly larger population of all white males with exams, the SIRs were 6.08 (95% CI 2.94–12.54) for seven reported cases and 4.34 (95% CI 1.85–10.16) for five verified cases. In individual epidemiologic studies of bladder cancer incidence in other occupations and industries, statistically significant elevations in relative risk have been found with a 1.1-fold to fivefold increase [11, 32, 33, 39, 41–44, 46, 52–54], and even with a sixfold to tenfold increase for some occupations such as chemical workers [41, 47], dye manufacturing [55, 56], railroad workers [47], and physicians [41]. In one epidemiologic study on firefighters, the reported statistically significant odds ratio for bladder cancer was as high as 22.7 [41]. Thus, some epidemiologic studies on other occupations have obtained elevated relative risks for bladder cancer of the same order of magnitude as that found in the present study on criminal investigators. In Reulen et al.’s [32] meta-analysis of the association between bladder cancer and occupations, however, the statistically significant summary relative risks found for several occupations fell only in the 1.1–1.3 range. These occupations included miners, rubber workers, leather workers, four types of professional drivers, and mechanics, but not chemical workers, firefighters, police officers, protective service occupations, or health care professionals.

The elevated risk for bladder cancer in the current study cohort also approximates the increased risk for bladder cancer seen in smokers compared to nonsmokers, as demonstrated in various epidemiologic studies which show smokers to have a twofold to sixfold increased risk for bladder cancer [1, 6, 57–59]. Further corroborating study is advised to verify the magnitude of the increased risk found among criminal investigators in the current study.

Although the elevated SIRs for the two populations with exams remained statistically significant when analyzed with only the five verified cases, having two of the seven reported cases (28%) unverified at the time of statistical analysis constitutes a weakness of the study and illustrates a limitation of using data from a medical surveillance program to conduct a cancer incidence analysis. With so few total cases, the difference in number between reported and verified cases can impact the significance of study outcomes, as demonstrated by comparing the SIRs computed with the seven reported and with the five verified cases for the full roster cohort and for white males in the full roster cohort. Since the study analysis was completed, an ongoing effort to verify the unverified cases succeeded in verifying one of the two cases.

Another point to make is that the SIRs for the populations with exams could in actuality be artificially high, as only 72% of individuals in the full roster cohort underwent physical examination, and individuals without medical issues may have selectively avoided coming in for exams. Since the institution of the mandatory program in 2002, however, 86% of individuals employed by ATF between 2002 and 2007 obtained at least one exam during this time frame, and these 2697 employees accounted for 99% of the 2712 individuals in the full roster cohort with exams. With this level of participation in the mandatory program since 2002, the potential for bias due to exam avoidance is likely very limited.

In conclusion, white male members of the ATF cohort experienced a statistically significant increased risk for bladder cancer when compared to white males in the US population for the study period 1993–2007. Among white males with exams and white male criminal investigators with exams, the elevated risk was demonstrated for computations with both seven reported and five verified cancer cases. No bladder cancer was observed among nonwhite males or females in the study cohort. With six of the seven cases in the bladder cancer cluster having known histories of investigating post-fire and post-blast scenes while employed with ATF, scene investigation work appeared to be linked with the observed increase in bladder cancer incidence. Part 2 of the study will evaluate the association of post-fire/post-blast scene work history and risk for bladder cancer, while controlling for tobacco use history.
---
*Source: 101850-2012-12-09.xml*
# Generalisation of Hajek’s Stochastic Comparison Results to Stochastic Sums
**Authors:** Jörg Kampen
**Journal:** International Journal of Stochastic Analysis
(2016)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2016/1018509
---
## Abstract
Hajek’s univariate stochastic comparison result is generalised to multivariate stochastic sum processes with univariate convex data functions and with univariate monotonic nondecreasing convex data functions, for processes without and with drift, respectively. As a consequence, strategies for a class of multivariate optimal control problems can be determined by maximizing variance. An example is passport options written on multivariate traded accounts. The argument describes a narrow path between the impossibility of generalisations to jump processes and the impossibility of more general data functions.
---
## Body
## 1. Introduction and Statement of Results
Mean stochastic comparison results may have applications in many areas beyond mathematical finance. However, it seems that they were first applied in order to find optimal strategies for passport options in the univariate case in [1], where the result in [2] is used. More general univariate processes are considered in [3]. Anyway, path continuity seems to be essential, as it can be shown that Hajek’s comparison result cannot be generalised to Poisson processes even in the univariate case (cf. [4]). More recent research on controlled options with applications of Hamilton-Jacobi-Bellman equations can be found in [5]. However, the results for passport options are still univariate. Multivariate problems are essentially different, as optimal strategies may depend on the correlations between processes. For example, if

$$\Pi_q=\sum_{i=1}^n q_i\,\sigma_i S_i,\qquad\text{where } dS_i=\sigma_i S_i\,dW_i,\quad S(0)=x\in\mathbb{R}^n, \tag{1}$$

is a trading account with $n$ lognormal processes $S_i$, $1\le i\le n$, where the correlations of the Brownian motions $W_i$ are encoded to be $(\rho_{ij})_{1\le i,j\le n}$, and $q_i\in[-1,1]$ are bounded trading positions, then the solution of an optimal control problem

$$\sup_{-1\le q_i\le 1,\ 1\le i\le n} E^x f\!\left(\Pi_q\right),\qquad f\ \text{convex, exponentially bounded}, \tag{2}$$

for the trading positions $q_i\in[-1,1]$ reduces under mild assumptions to the maximization of the basket volatility; that is,

$$\sup_{-1\le q_i\le 1,\ 1\le i\le n}\ \frac{\sum_{i,j=1}^n \rho_{ij}\,q_i q_j\,\sigma_i\sigma_j\,S_i S_j}{\sum_{i=1}^n S_i}. \tag{3}$$

Hence, signs of correlations (and space-time dependence of the signs of correlations) can change an optimal strategy essentially. This indicates also that multivariate mean comparison results are significant extensions of univariate results.

Applications of the extension of Hajek’s results to stochastic sums were described in [6, 7], but a full proof was not given in these notes. Here we give a short complete proof of related results. Hajek’s results are recovered by the different method of proof.

In the following, $C(\mathbb{R})$ denotes the space of continuous functions on the set of real numbers $\mathbb{R}$, $W$ denotes a standard $N$-dimensional Brownian motion, and $E^x$ denotes the expectation of a process starting at $x\in\mathbb{R}^n$. Furthermore, for an $\mathbb{R}^n$-valued process $(X(t))_{0\le t\le T}$ the $i$th component of this process is denoted by $(X_i(t))_{0\le t\le T}$. For processes without drift we prove the following.
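To make the dependence on correlation signs in (3) tangible, here is a small numerical sketch (ours, not from the paper). For constant coefficients the objective is a convex quadratic form in $q$ (the denominator in (3) does not depend on $q$), so the maximum over the box $[-1,1]^n$ is attained at a vertex, and for small $n$ it can be found by enumerating sign vectors.

```python
# Illustrative sketch: maximize sum_{i,j} rho_ij q_i q_j sigma_i sigma_j S_i S_j
# over q in [-1,1]^n. With M = D rho D, D = diag(sigma_i * S_i), the objective
# q^T M q is a convex quadratic (M is positive semidefinite), so the maximum
# is attained at a vertex of the box.
import itertools
import numpy as np

def optimal_positions(rho, sigma, S):
    M = rho * np.outer(sigma * S, sigma * S)
    return max((np.array(q) for q in itertools.product((-1.0, 1.0), repeat=len(S))),
               key=lambda q: q @ M @ q)

S, sigma = np.array([100.0, 100.0]), np.array([0.2, 0.3])
print(optimal_positions(np.array([[1.0, 0.8], [0.8, 1.0]]), sigma, S))    # same signs
print(optimal_positions(np.array([[1.0, -0.8], [-0.8, 1.0]]), sigma, S))  # opposite signs
```

Flipping the sign of the correlation flips the relative sign of the optimal positions, which is exactly the sensitivity described above.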
Theorem 1.
Let $T>0$, let $f\in C(\mathbb{R})$ be convex, and assume that $f$ satisfies an exponential growth condition. Assume that $c_i>0$ are some positive real constants for $1\le i\le n$. Furthermore, let $X,Y$ be Itô diffusions with $x=X(0)=Y(0)$, where

$$X(t)=X(0)+\int_0^t\sigma\!\left(X(s)\right)dW(s),\qquad Y(t)=Y(0)+\int_0^t\rho\!\left(Y(s)\right)dW(s), \tag{4}$$

with $n\times n$-matrix-valued bounded Lipschitz-continuous functions $x\mapsto\sigma\sigma^T(x)$ and $y\mapsto\rho\rho^T(y)$. If $\sigma\sigma^T\le\rho\rho^T$, then for $0\le t\le T$ we have

$$E^x f\!\left(\sum_{i=1}^n c_i X_i(t)\right)\le E^x f\!\left(\sum_{i=1}^n c_i Y_i(t)\right). \tag{5}$$

Here, the symbol $\le$ refers to the usual order of positive matrices. Furthermore, if in addition $f''\neq 0$ (in the sense of distributions), then this result holds with strict inequalities.
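Before turning to the drift case, a quick Monte Carlo illustration of Theorem 1 (our sketch, not part of the proof): in one dimension with constant dispersions $\sigma\le\rho$, a common start, and a convex data function, the simulated means order as the theorem predicts.

```python
# Monte Carlo illustration of Theorem 1 (sketch): constant dispersions
# sigma <= rho, the same start x0, and a convex data function f.
import numpy as np

rng = np.random.default_rng(0)
n_paths, T = 200_000, 1.0
sigma, rho, x0 = 0.2, 0.4, 1.0          # sigma*sigma^T <= rho*rho^T in 1D
f = lambda z: np.maximum(z - 1.0, 0.0)  # convex, exponentially bounded

dW = np.sqrt(T) * rng.standard_normal(n_paths)
X_T = x0 + sigma * dW                   # X(T) = X(0) + int_0^T sigma dW
Y_T = x0 + rho * dW                     # Y(T) = Y(0) + int_0^T rho dW
print(f(X_T).mean(), "<=", f(Y_T).mean())   # ~0.080 <= ~0.160
```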
For processes with drift we prove the following.

Theorem 2.
Let $T>0$, let $f\in C(\mathbb{R})$ be nondecreasing and convex, and assume that $f$ satisfies an exponential growth condition. Assume that $c_i>0$ are some positive real constants for $1\le i\le n$. Furthermore, let $X,Y$ be Itô diffusions with nonzero drifts and with $x=X(0)=Y(0)$, where

$$X(t)=X(0)+\int_0^t\mu\!\left(X(s)\right)ds+\int_0^t\sigma\!\left(X(s)\right)dW(s),\qquad Y(t)=Y(0)+\int_0^t\nu\!\left(Y(s)\right)ds+\int_0^t\rho\!\left(Y(s)\right)dW(s), \tag{6}$$

with bounded Lipschitz-continuous drift functions $\mu\le\nu$ and $n\times n$-matrix-valued bounded Lipschitz-continuous functions $x\mapsto\sigma\sigma^T(x)$ and $y\mapsto\rho\rho^T(y)$. If $\mu\le\nu$ and $\sigma\sigma^T\le\rho\rho^T$, then for $0\le t\le T$ we have

$$E^x f\!\left(\sum_{i=1}^n c_i X_i(t)\right)\le E^x f\!\left(\sum_{i=1}^n c_i Y_i(t)\right). \tag{7}$$

Here, $\mu\le\nu$ is understood componentwise. Furthermore, if in addition $f''\neq 0$ (in the sense of distributions), then this result holds with strict inequalities.

Remark 3.
Bounded Lipschitz-continuity, that is, the condition that for some $C>0$

$$|b(x)|+\left|\sigma\sigma^T(x)\right|\le C,\qquad |b(x)-b(y)|+|\sigma(x)-\sigma(y)|\le C\,|x-y| \tag{8}$$

holds for all $x,y\in\mathbb{R}^n$, implies the existence of a $t$-continuous solution of $X$ in the stochastic $L^2$ sense. Similarly for $Y$; the proofs are based on a generalisation of ODE-proofs to infinite-dimensional function spaces and can be found in elementary standard textbooks such as [8].
## 2. Proof of Theorem 1
We first remark that the initial data function has to be univariate: for a general multivariate data function $f$ the results do not hold, because simple examples show that convexity can be strongly violated in this general situation. Since the classical representations of the value functions in terms of the probability density (fundamental solution) are not convolutions, we use the adjoint of the fundamental solution. For this and other technical reasons we need some more regularity of the data function and of the diffusion matrix $\sigma\sigma^T$ in order to treat the problem at an analytical level. We will then observe that the pointwise result is preserved as we consider certain data and coefficient function limits which reduce the regularity assumptions. First we need some regularity assumptions which ensure existence of the fundamental solution and of the adjoint fundamental solution in a classical sense, that is, with pointwise well-defined spatial derivatives up to second order and a pointwise well-defined partial time derivative up to first order (in the domain where it is continuous). For the sake of possible generalisations in the next section we consider the more general operator

$$L\equiv\frac{\partial}{\partial t}-\sum_{i,j=1}^n a_{ij}\frac{\partial^2}{\partial x_i\,\partial x_j}-\sum_{i=1}^n b_i\frac{\partial}{\partial x_i}-c. \tag{9}$$

We include even the potential term coefficient $c$ because such a coefficient appears in the adjoint even if $c=0$. Recall that the adjoint operator is given by

$$L^*\equiv-\frac{\partial}{\partial t}+\sum_{i,j=1}^n a^*_{ij}\frac{\partial^2}{\partial x_i\,\partial x_j}+\sum_{i=1}^n b^*_i\frac{\partial}{\partial x_i}+c^*, \tag{10}$$

where

$$a^*_{ij}=a_{ij},\qquad b^*_i=2\sum_{j=1}^n a_{ij,j}-b_i,\qquad c^*=c+\sum_{i,j=1}^n a_{ij,ij}-\sum_{i=1}^n b_{i,i}. \tag{11}$$

Here we use Einstein notation, that is, $a_{ij,j}:=(\partial/\partial x_j)\,a_{ij}$, $a_{ij,ij}:=(\partial^2/\partial x_i\,\partial x_j)\,a_{ij}$, and $b_{i,i}:=(\partial/\partial x_i)\,b_i$, for the sake of brevity. In this section we will assume that $b_i\equiv 0$ and $c\equiv 0$. Note that even in this restrictive situation we have $b^*_i\neq 0$ and $c^*\neq 0$. For our purposes it suffices to assume that the coefficients depend on the spatial variables only (the generalisation to additional time dependence is straightforward). In order that the adjoint exists in a classical sense we should have bounded continuous derivatives. We assume:

(i)

$$a_{ij}\in C^2\cap H^2,\quad\forall\,1\le i,j\le n, \tag{12}$$

where $C^m\equiv C^m(\mathbb{R}^n)$ denotes the space of real-valued $m$-times continuously differentiable functions and $H^m$ denotes the standard Sobolev space of order $m\ge 0$. In the next section we assume in addition that $b_i\in C^1\cap H^1$ for all $1\le i\le n$. For the following considerations concerning the adjoint we assume that $c\in C^0\cap L^2$ if a potential coefficient is considered.

(ii) We have uniform ellipticity; that is, there exist $0<\lambda<\Lambda<\infty$ such that

$$\lambda|y|^2\le\sum_{i,j=1}^n a_{ij}(x)\,y_i y_j\le\Lambda|y|^2,\quad\forall\,x,y\in\mathbb{R}^n. \tag{13}$$

We use one observation concerning the adjoint. Note that the adjoint equations are known in the context of probability theory as the forward and backward Kolmogorov equations. The derivation of these equations (cf. Feller’s classical treatment) shows that the density and its adjoint are equal and that it is possible to switch from a representation of Cauchy problem solutions in terms of the backward density to an equivalent representation in terms of the forward equations. For example, forward and backward representations of option prices for regime switching models are used in [9]. However, the essential additional observation we need here is the relation between partial derivatives of densities and of their adjoint densities. We use again Einstein’s notation for classical derivatives.
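As a quick sanity check of (11) (our worked example, not from the paper), take $n=1$ with $b\equiv 0$ and $c\equiv 0$, as assumed in this section, so that $b^*=2a'$ and $c^*=a''$. For instance, the uniformly elliptic coefficient $a(x)=1+\tfrac{1}{2}e^{-x^2}$ (with $a-1$ smooth and decaying, in line with (12)) gives

$$b^*(x)=-2x\,e^{-x^2},\qquad c^*(x)=\left(2x^2-1\right)e^{-x^2},$$

both nonzero, which illustrates why the adjoint carries drift and potential terms even when $L$ has none.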
Lemma 4.
Assume that conditions (12) and (13) hold, let $p$ be the fundamental solution of

$$Lp=0, \tag{14}$$

and let $p^*$ be the fundamental solution of

$$L^*p^*=0. \tag{15}$$

Then for $s<t$ and $x,y\in\mathbb{R}^n$, $p$ and $p^*$ have spatial derivatives up to order 2, and

$$p(t,x;s,y)=p^*(s,y;t,x),\qquad p_{,i}(t,x;s,y)=p^*_{,i}(s,y;t,x),\qquad p_{,ij}(t,x;s,y)=p^*_{,ij}(s,y;t,x). \tag{16}$$

Here, for $x=(x_1,\dots,x_n)$ and $e_i=(e_{i1},\dots,e_{in})$ with $e_{ij}=\delta_{ij}$ (Kronecker $\delta$), and for $s<t$, $x,y\in\mathbb{R}^n$, we denote

$$p_{,i}(t,x;s,y)=\lim_{h\downarrow 0}\frac{p(t,x+he_i;s,y)-p(t,x;s,y)}{h},\qquad p_{,ij}(t,x;s,y)=\lim_{h\downarrow 0}\frac{p_{,i}(t,x+he_j;s,y)-p_{,i}(t,x;s,y)}{h},$$

$$p^*_{,i}(s,y;t,x)=\lim_{h\downarrow 0}\frac{p^*(s,y+he_i;t,x)-p^*(s,y;t,x)}{h},\qquad p^*_{,ij}(s,y;t,x)=\lim_{h\downarrow 0}\frac{p^*_{,i}(s,y+he_j;t,x)-p^*_{,i}(s,y;t,x)}{h}. \tag{17}$$

Proof.
For $q(\tau,z)=p(\tau,z;s,y)$ and $r(\tau,z)=p^*(\tau,z;t,x)$ with $s<\tau<t$ we show that for $1\le i,j\le n$

$$q(t,x)=r(s,y),\qquad q_{,i}(t,x)=r_{,i}(s,y),\qquad q_{,ij}(t,x)=r_{,ij}(s,y) \tag{18}$$

hold. Let $B_R$ be the ball of radius $R$ around zero. As $s<t$ there exists $\delta>0$ such that $s+\delta<t-\delta$, and using Green’s identity, Gaussian upper bounds of the fundamental solution and of its first-order spatial derivatives, $Lq=0$, and $L^*r=0$, we get

$$0=\lim_{R\uparrow\infty}\int_{s+\delta}^{t-\delta}\int_{B_R}\frac{\partial}{\partial\tau}\left(qr\right)(\tau,z)\,d\tau\,dz=\int_{\mathbb{R}^n}q(t-\delta,z)\,p^*(t-\delta,z;t,x)\,dz-\int_{\mathbb{R}^n}r(s+\delta,z)\,p(s+\delta,z;s,y)\,dz. \tag{19}$$

This leads to the identities

$$\int_{\mathbb{R}^n}q_{,i}(t-\delta,z)\,p^*(t-\delta,z;t,x)\,dz=\int_{\mathbb{R}^n}r_{,i}(s+\delta,z)\,p(s+\delta,z;s,y)\,dz,\qquad \int_{\mathbb{R}^n}q_{,ij}(t-\delta,z)\,p^*(t-\delta,z;t,x)\,dz=\int_{\mathbb{R}^n}r_{,ij}(s+\delta,z)\,p(s+\delta,z;s,y)\,dz. \tag{20}$$

In the limit $\delta\downarrow 0$ we get the relations stated.

For technical reasons we need more approximations concerning the data. As we are aiming at a pointwise comparison result and we have Gaussian upper bounds, it suffices to consider approximating data which are regular and convex in a core region and decay to zero at spatial infinity. We have the following.

Proposition 5.
Let $f\in C(\mathbb{R})$ be a real-valued continuous convex function and let $B_R\subset\mathbb{R}^n$ be the ball of finite radius $R$ around the origin. Then there is a function $f^{\epsilon}_R\in C^2\cap H^2$ such that

(i)

$$\left|f(x)-f^{\epsilon}_R(x)\right|\le\epsilon,\quad\forall\,x\in B_R; \tag{21}$$

(ii) the second (classically well-defined) derivative is strictly positive; that is,

$$\left(f^{\epsilon}_R\right)''(x)>0,\quad\forall\,x\in B_R. \tag{22}$$

Proposition 5 can be proved by using regular polynomial interpolation; here one can use the fact that classical second-order derivatives of the continuous convex function $f$ exist almost everywhere. The function $f^{\epsilon}_R$ is of course not convex in general, but it is convex in a core region $B_R(x)$. Writing $v^{\epsilon,R}$ for the value function with data $f^{\epsilon}_R$, for all $\epsilon>0$ and all $R>0$, using Lemma 4 and integration by parts, we get

$$v^{\epsilon,R}_{,ij}(t,x)=\int_{\mathbb{R}^n}f^{\epsilon}_R\!\left(\sum_{k=1}^n c_k y_k\right)p_{,ij}(t,x;0,y)\,dy=\int_{\mathbb{R}^n}f^{\epsilon}_R\!\left(\sum_{k=1}^n c_k y_k\right)p^*_{,ij}(0,y;t,x)\,dy=\int_{\mathbb{R}^n}c_i c_j\left(f^{\epsilon}_R\right)''\!\left(\sum_{k=1}^n c_k y_k\right)p^*(0,y;t,x)\,dy. \tag{23}$$

Here, for a univariate function $g\in C^2$ the symbol $g''$ denotes its second derivative. Since $\left(f^{\epsilon}_R\right)''(z)>0$ for all $z\in B_R$, and $p^*\ge 0$, and by the standard Gaussian estimate

$$p^*(\sigma,\eta;\tau,\xi)\le\frac{C^*}{\sqrt{\tau-\sigma}^{\,n}}\exp\!\left(-\lambda^*\frac{|\eta-\xi|^2}{\tau-\sigma}\right) \tag{24}$$

for some finite constants $C^*,\lambda^*$, we get from (23) that

$$\forall\,r>0,\ \forall\,x\in B_r,\ \exists\,R_0>r\ \text{such that}\ \forall\,R\ge R_0:\quad v^{\epsilon,R}_{,ij}(t,x)\ge 0, \tag{25}$$

which means that the Hessian is positive in a smaller core region $B_r=\{x:|x|\le r\}$ for $R$ large enough. Furthermore, classical regularity theory (cf. [10] and references therein) tells us that

$$\forall\,\epsilon>0:\ v^{\epsilon}(t,\cdot)\in C^2,\qquad\forall\,\epsilon>0,\ \forall\,R>0:\ v^{\epsilon,R}(t,\cdot)\in C^2, \tag{26}$$

where $v^{\epsilon}(t,\cdot)=\lim_{R\uparrow\infty}v^{\epsilon,R}(t,\cdot)$.

Remark 6.
Actually, the regularity $v^{\epsilon}(t,\cdot)\in C^2$ follows from the smoothness of the density $p$ for positive time and holds in the more general context of highly degenerate parabolic equations of second order (cf. [11]). In this paper we consider equations with a uniformly elliptic second-order part, because this implies that the density, its adjoint, and the spatial derivatives up to second order decay to zero at spatial infinity. This is not true in general for highly degenerate equations (cf. [12]). Extensions to some classes of degenerate equations are possible (cf. [10]).

It follows that

$$\forall\,x\in\mathbb{R}^n,\ \forall\,t\in(0,T],\ \exists\,R_0>0\ \text{such that}\ \forall\,R\ge R_0,\ \forall\,\epsilon>0:\quad\mathrm{Tr}\!\left(A(x)\,D^2 v^{\epsilon,R}(t,x)\right)\ge 0, \tag{27}$$

where $A(x)=(a_{ij}(x))$ is the coefficient matrix and $D^2 v^{\epsilon,R}(t,x)$ is the Hessian of $v^{\epsilon,R}$ evaluated at $(t,x)$. Hence,

$$\forall\,x\in\mathbb{R}^n,\ \forall\,t\in(0,T],\ \forall\,\epsilon>0:\quad\mathrm{Tr}\!\left(A(x)\,D^2 v^{\epsilon}(t,x)\right)\ge 0, \tag{28}$$

and as the $\epsilon\downarrow 0$ limit of the Hessian is well defined for $t\in(0,T]$ we get

$$\forall\,x\in\mathbb{R}^n,\ \forall\,t\in(0,T]:\quad\mathrm{Tr}\!\left(A(x)\,D^2 v(t,x)\right)\ge 0. \tag{29}$$
Now consider matrices $(a^{v_1}_{ij})$ and $(a^{v_2}_{ij})$, where $v_1$ and $v_2$ solve

$$\frac{\partial v_1}{\partial t}-\sum_{i,j}a^{v_1}_{ij}\frac{\partial^2 v_1}{\partial x_i\,\partial x_j}=0,\qquad\frac{\partial v_2}{\partial t}-\sum_{i,j}a^{v_2}_{ij}\frac{\partial^2 v_2}{\partial x_i\,\partial x_j}=0, \tag{30}$$

and $v_1(0,\cdot)=v_2(0,\cdot)$. Note that $\delta v=v_1-v_2$ satisfies

$$\frac{\partial\,\delta v(t,x)}{\partial t}=\sum_{i,j}\left(a^{v_2}_{ij}-a^{v_1}_{ij}\right)\frac{\partial^2 v_1}{\partial x_i\,\partial x_j}+\sum_{i,j}a^{v_2}_{ij}\frac{\partial^2\,\delta v}{\partial x_i\,\partial x_j}, \tag{31}$$

where $\delta v(0,x)=0$ for all $x\in\mathbb{R}^n$. We have the classical representation

$$\delta v(t,x)=\int_0^t\int_{\mathbb{R}^n}\sum_{i,j}\left(a^{v_2}_{ij}-a^{v_1}_{ij}\right)(s,y)\,\frac{\partial^2 v_1}{\partial x_i\,\partial x_j}(s,y)\,p^{v_2}(t,x;s,y)\,ds\,dy, \tag{32}$$

where $p^{v_2}$ is the fundamental solution of

$$\frac{\partial\,\delta v(t,x)}{\partial t}-\sum_{i,j}a^{v_2}_{ij}\frac{\partial^2\,\delta v}{\partial x_i\,\partial x_j}=0. \tag{33}$$

As $\sum_{i,j}\left(a^{v_2}_{ij}-a^{v_1}_{ij}\right)(s,y)\,\left(\partial^2 v_1/\partial x_i\,\partial x_j\right)(s,y)\ge 0$ and $p^{v_2}(t,x;s,y)\ge 0$, we conclude that $\delta v\ge 0$. Now we have proved the main theorem for $a_{ij}\in C^2\cap H^2$.

Next, for each $\epsilon>0$ and $R>0$ there exists a matrix $(a^{\epsilon,R}_{ij})$ with components in $C^2\cap H^2$, where for all $x\in\mathbb{R}^n$

$$a^{\epsilon,R}(x)=\sigma^{\epsilon,R}\sigma^{\epsilon,R,T}(x), \tag{34}$$

with $\sigma^{\epsilon,R,T}(x)$ being the transpose of $\sigma^{\epsilon,R}(x)$, and where

$$\sup_{x\in B_R}\left|\sigma^{\epsilon,R}(x)-\sigma(x)\right|\le\epsilon. \tag{35}$$

Here $\sigma$ is the original dispersion matrix related to the process $X$ of the main theorem (which is assumed to be bounded and Lipschitz-continuous). Consider

$$X^{\epsilon,R}(t)=X(0)+\int_0^t\sigma^{\epsilon,R}\!\left(X^{\epsilon,R}(s)\right)dW(s). \tag{36}$$

For $\rho^{\epsilon,R}(x)$ which satisfies analogous conditions we define

$$Y^{\epsilon,R}(t)=Y(0)+\int_0^t\rho^{\epsilon,R}\!\left(Y^{\epsilon,R}(s)\right)dW(s). \tag{37}$$

Then the preceding argument shows that for $\sigma^{\epsilon,R}\sigma^{\epsilon,R,T}\le\rho^{\epsilon,R}\rho^{\epsilon,R,T}$ and for $0\le t\le T$ we have

$$\forall\,\epsilon>0,\ \forall\,r>0,\ \forall\,x\in B_r,\ \exists\,R_0\ \text{such that}\ \forall\,R\ge R_0:\quad E^x f^{\epsilon,R}\!\left(\sum_{i=1}^n c_i X^{\epsilon,R}_i(t)\right)\le E^x f^{\epsilon,R}\!\left(\sum_{i=1}^n c_i Y^{\epsilon,R}_i(t)\right). \tag{38}$$

This leads to

$$\forall\,r>0,\ \forall\,x\in B_r,\ \exists\,R_0\ \text{such that}\ \forall\,R\ge R_0:\quad E^x f^{R}\!\left(\sum_{i=1}^n c_i X^{R}_i(t)\right)\le E^x f^{R}\!\left(\sum_{i=1}^n c_i Y^{R}_i(t)\right), \tag{39}$$

where $X^R$ is the process

$$X^{R}(t)=X(0)+\int_0^t\sigma^{R}\!\left(X^{R}(s)\right)dW(s), \tag{40}$$

with a bounded continuous $\sigma^R$ which satisfies

$$\sigma^{R}(x)=\sigma(x),\quad\forall\,x\in B_R. \tag{41}$$

The process $Y^R$ is defined analogously. Similarly, $f^R$ is a limit of functions $f^{\epsilon,R}\in C^2\cap H^2$ which equal $f$ on $B_R$. In (39), $X^R$ can be replaced by $X$ and $Y^R$ by $Y$ by the probability law of the processes, and a limit consideration for data which, for each $R$, equal the function $f$ on $B_R$ leads to the statement of the theorem, using a uniform exponential bound of the data functions, the boundedness of the Lipschitz-continuous coefficients, and the Gaussian law of the Brownian motion.
## 3. Additional Note for the Proof of Theorem 2
If $w_1$ and $w_2$ solve

$$\frac{\partial w_1}{\partial t}-\sum_{i,j}a^{w_1}_{ij}\frac{\partial^2 w_1}{\partial x_i\,\partial x_j}+\sum_i b^{w_1}_i(x)\frac{\partial w_1}{\partial x_i}=0,\qquad\frac{\partial w_2}{\partial t}-\sum_{i,j}a^{w_2}_{ij}\frac{\partial^2 w_2}{\partial x_i\,\partial x_j}+\sum_i b^{w_2}_i(x)\frac{\partial w_2}{\partial x_i}=0, \tag{42}$$

and $w_1(0,\cdot)=w_2(0,\cdot)$, note that $\delta w=w_1-w_2$ satisfies

$$\frac{\partial\,\delta w(t,x)}{\partial t}=\sum_{i,j}\left(a^{w_2}_{ij}-a^{w_1}_{ij}\right)\frac{\partial^2 w_1}{\partial x_i\,\partial x_j}+\sum_{i,j}a^{w_2}_{ij}\frac{\partial^2\,\delta w}{\partial x_i\,\partial x_j}-\sum_i b^{w_2}_i\frac{\partial\,\delta w}{\partial x_i}+\sum_i\left(b^{w_2}_i-b^{w_1}_i\right)(x)\,\frac{\partial w_1}{\partial x_i}, \tag{43}$$

where $\delta w(0,x)=0$ for all $x\in\mathbb{R}^n$. We have the representation

$$\delta w(t,x)=\int_0^t\int_{\mathbb{R}^n}\left[\sum_{i,j}\left(a^{w_2}_{ij}-a^{w_1}_{ij}\right)(s,y)\,\frac{\partial^2 w_1}{\partial x_i\,\partial x_j}(s,y)+\sum_i\left(b^{w_2}_i-b^{w_1}_i\right)(y)\,\frac{\partial w_1}{\partial x_i}(s,y)\right]p^{w_2}(t,x;s,y)\,ds\,dy, \tag{44}$$

where $p^{w_2}$ is the fundamental solution of

$$\frac{\partial\,\delta w}{\partial t}-\sum_{i,j}a^{w_2}_{ij}\frac{\partial^2\,\delta w}{\partial x_i\,\partial x_j}+\sum_i b^{w_2}_i\frac{\partial\,\delta w}{\partial x_i}=0. \tag{45}$$

As $\sum_{i,j}\left(a^{w_2}_{ij}-a^{w_1}_{ij}\right)(s,y)\,\left(\partial^2 w_1/\partial x_i\,\partial x_j\right)(s,y)\ge 0$ and $p^{w_2}(t,x;s,y)\ge 0$, we conclude that $\delta w\ge 0$ if $\sum_i\left(b^{w_2}_i(x)-b^{w_1}_i(x)\right)\partial w_1/\partial x_i\ge 0$. As $b^{w_2}_i(x)-b^{w_1}_i(x)\ge 0$ for all $x$, this condition reduces to the monotonicity condition $\partial w_1/\partial x_i\ge 0$. The truth of the latter monotonicity condition for the value function $w_1$ can be proved via the adjoint, using the same trick as in the preceding section.
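A Monte Carlo illustration of Theorem 2 in the same spirit as before (our sketch, not part of the argument): constant drifts $\mu\le\nu$, constant dispersions $\sigma\le\rho$, and a nondecreasing convex data function.

```python
# Monte Carlo illustration of Theorem 2 (sketch): constant drifts mu <= nu,
# constant dispersions sigma <= rho, and a nondecreasing convex f.
import numpy as np

rng = np.random.default_rng(1)
n_paths, T = 200_000, 1.0
mu, nu = 0.0, 0.05        # mu <= nu (componentwise; here 1D)
sigma, rho = 0.2, 0.3     # sigma*sigma^T <= rho*rho^T
f = np.exp                # nondecreasing, convex, exponentially bounded in mean

dW = np.sqrt(T) * rng.standard_normal(n_paths)
X_T = mu * T + sigma * dW
Y_T = nu * T + rho * dW
print(f(X_T).mean(), "<=", f(Y_T).mean())   # ~1.020 <= ~1.100
```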
Remark 7.
These notes are from my unpublished lecture notes “Die Fundamentallösung Parabolischer Gleichungen und Schwache Schemata Höherer Ordnung für Stochastische Diffusionsprozesse” (“The fundamental solution of parabolic equations and weak schemes of higher order for stochastic diffusion processes”) of WS 2005/2006 in Heidelberg. The argument given there is now published upon request, as research is ongoing concerning applications of comparison principles. Originally the relevance of stochastic comparison results was pointed out to the author by P. Laurence and V. Henderson. The main theorems proved here are stated essentially in the conference notes [6, 7] but were not strictly proved there. In those notes applications to American options and to passport options are considered. For example, explicit solutions for optimal strategies related to the optimal control problem of passport options, and the dependence of such a strategy on the correlations between assets, can be obtained. The proof given here can be applied in the univariate case as well and recovers the result of Hajek in [1] using the result of [2].
---
*Source: 1018509-2016-09-05.xml* | 1018509-2016-09-05_1018509-2016-09-05.md | 19,490 | Generalisation of Hajek’s Stochastic Comparison Results to Stochastic Sums | Jörg Kampen | International Journal of Stochastic Analysis
(2016) | Mathematical Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2016/1018509 | 1018509-2016-09-05.xml | ---
## Abstract
Hajek’s univariate stochastic comparison result is generalised to multivariate stochastic sum processes with univariate convex data functions and for univariate monotonic nondecreasing convex data functions for processes with and without drift, respectively. As a consequence strategies for a class of multivariate optimal control problems can be determined by maximizing variance. An example is passport options written on multivariate traded accounts. The argument describes a narrow path between impossibilities of generalisations to jump processes or impossibilities of more general data functions.
---
## Body
## 1. Introduction and Statement of Results
Mean stochastic comparison results may have applications in many areas beyond mathematical finance. However, it seems that they were first applied in order to find optimal strategies for passport options in the univariate case in [1], where the result in [2] is used. More general univariate processes are considered in [3]. Anyway, path continuity seems to be essential as it can be shown that Hajek’s comparison result cannot be generalised to Poisson processes even in the univariate case (cf. [4]). More recent research on controlled options with applications of Hamilton-Jacobi-Bellman equations can be found in [5]. However the results for passport options are still univariate. Multivariate problems are essentially different as optimal strategies may depend on the correlations between processes. For example, if(1)
Π
q
=
∑
i
=
1
n
q
i
σ
i
S
i
,
where
d
S
i
=
σ
i
S
i
d
W
i
,
S
0
=
x
∈
R
nis a trading account with n lognormal processes S
i
1
≤
i
≤
n, where correlations of Brownian motions W
i are encoded to be (
ρ
i
j
)
1
≤
i
,
j
≤
n, and q
i
∈
[
-
1,1
] are bounded trading positions, then the solution of an optimal control problem(2)
sup
-
1
≤
q
i
≤
1
,
1
≤
i
≤
n
E
x
f
Π
q
,
f
convex, exponentially bounded
,for the trading positions q
i
∈
[
-
1,1
] reduces under mild assumptions to the maximization of the basket volatility; that is,(3)
sup
-
1
≤
q
i
≤
1
,
1
≤
i
≤
n
∑
i
,
j
=
1
n
ρ
i
j
q
i
q
j
σ
i
σ
j
S
i
S
j
∑
i
=
1
n
S
i
.Hence, signs of correlations (and space-time dependence of the signs of correlations) can change an optimal strategy essentially. This indicates also that multivariate mean comparison results are significant extensions of univariate results.Applications of the extension of Hajek’s results to stochastic sums were described in [6, 7], but a full proof was not given in these notes. Here we give a short complete proof of related results. Hajek’s results are recovered by the different method of proof.In the followingC
(
R
) denotes the space of continuous functions on the set of real numbers R, W denotes a standard N-dimensional Brownian motion, and E
x denotes the expectation of a process starting at x
∈
R
n. Furthermore, for an R
n-valued process (
X
(
t
)
)
0
≤
t
≤
T the ith component of this process is denoted by X
i
(
t
)
0
≤
t
≤
T. For processes without drift we prove the following.Theorem 1.
Let $T>0$, let $f\in C(\mathbb{R})$ be convex, and assume that $f$ satisfies an exponential growth condition. Assume that $c_{i}>0$ are positive real constants for $1\le i\le n$. Furthermore, let $X,Y$ be Itô diffusions with $x=X(0)=Y(0)$, where

$$X(t)=X(0)+\int_{0}^{t}\sigma\bigl(X(s)\bigr)\,dW(s),\qquad Y(t)=Y(0)+\int_{0}^{t}\rho\bigl(Y(s)\bigr)\,dW(s), \tag{4}$$

with $n\times n$-matrix-valued bounded Lipschitz-continuous functions $x\mapsto\sigma\sigma^{T}(x)$ and $y\mapsto\rho\rho^{T}(y)$. If $\sigma\sigma^{T}\le\rho\rho^{T}$, then for $0\le t\le T$ we have

$$E^{x}f\Bigl(\sum_{i=1}^{n}c_{i}X_{i}(t)\Bigr)\le E^{x}f\Bigl(\sum_{i=1}^{n}c_{i}Y_{i}(t)\Bigr). \tag{5}$$

Here, the symbol $\le$ refers to the usual order of positive matrices. Furthermore, if in addition $f''\ne 0$ (in the sense of distributions), then this result holds with strict inequalities.

For processes with drift we prove the following.

Theorem 2.
Let $T>0$, let $f\in C(\mathbb{R})$ be nondecreasing and convex, and assume that $f$ satisfies an exponential growth condition. Assume that $c_{i}>0$ are positive real constants for $1\le i\le n$. Furthermore, let $X,Y$ be Itô diffusions with nonzero drifts and $x=X(0)=Y(0)$, where

$$X(t)=X(0)+\int_{0}^{t}\mu\bigl(X(s)\bigr)\,ds+\int_{0}^{t}\sigma\bigl(X(s)\bigr)\,dW(s),\qquad
Y(t)=Y(0)+\int_{0}^{t}\nu\bigl(Y(s)\bigr)\,ds+\int_{0}^{t}\rho\bigl(Y(s)\bigr)\,dW(s), \tag{6}$$

with bounded Lipschitz-continuous drift functions $\mu\le\nu$ and $n\times n$-matrix-valued bounded Lipschitz-continuous functions $x\mapsto\sigma\sigma^{T}(x)$ and $y\mapsto\rho\rho^{T}(y)$. If $\mu\le\nu$ and $\sigma\sigma^{T}\le\rho\rho^{T}$, then for $0\le t\le T$ we have

$$E^{x}f\Bigl(\sum_{i=1}^{n}c_{i}X_{i}(t)\Bigr)\le E^{x}f\Bigl(\sum_{i=1}^{n}c_{i}Y_{i}(t)\Bigr). \tag{7}$$

Here, $\mu\le\nu$ is understood componentwise. Furthermore, if in addition $f''\ne 0$ (in the sense of distributions), then this result holds with strict inequalities.

Remark 3.
Bounded Lipschitz continuity, that is, the condition that for some $C>0$

$$\bigl|b(x)\bigr|+\bigl|\sigma\sigma^{T}(x)\bigr|\le C,\qquad \bigl|b(x)-b(y)\bigr|+\bigl|\sigma(x)-\sigma(y)\bigr|\le C\,|x-y| \tag{8}$$

holds for all $x,y\in\mathbb{R}^{n}$, implies the existence of a $t$-continuous solution of $X$ in the stochastic $L^{2}$ sense. Similarly for $Y$. Proofs are based on a generalisation of ODE arguments to infinite-dimensional function spaces and can be found in elementary standard textbooks such as [8].
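The comparison of Theorem 1 can also be checked numerically. The following Euler–Maruyama Monte Carlo sketch is an illustration only: the constant diffusion matrices, the convex data function, and all parameter values are assumptions made for this example, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama(x0, sigma, T=1.0, steps=200, paths=200_000):
    """Simulate the driftless SDE dX = sigma dW for a constant n x n matrix sigma."""
    n = len(x0)
    dt = T / steps
    X = np.tile(np.asarray(x0, dtype=float), (paths, 1))
    for _ in range(steps):
        dW = rng.normal(scale=np.sqrt(dt), size=(paths, n))
        X += dW @ sigma.T
    return X

# Illustrative 2-d matrices with sigma sigma^T <= rho rho^T in the matrix order.
sigma = np.array([[0.2, 0.0], [0.0, 0.2]])
rho   = np.array([[0.4, 0.1], [0.1, 0.4]])
c, x0 = np.array([1.0, 1.0]), [1.0, 1.0]
f = lambda z: z**2  # a convex data function of the scalar sum

X = euler_maruyama(x0, sigma)
Y = euler_maruyama(x0, rho)
print(f(X @ c).mean(), "<=", f(Y @ c).mean())  # matches the ordering in (5)
```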
## 2. Proof of Theorem 1
We first remark that the initial data function has to be univariate: for a general multivariate data function $f$ the results do not hold, because simple examples show that convexity can be strongly violated in this general situation. Since classical representations of the value functions in terms of the probability density (fundamental solution) are not convolutions, we use the adjoint of the fundamental solution. For this and other technical reasons we need some more regularity of the data function and the diffusion matrix $\sigma\sigma^{T}$ in order to treat the problem at an analytical level. We will then observe that the pointwise result is preserved as we consider certain data and coefficient function limits reducing the regularity assumptions. First we need some regularity assumptions which ensure existence of the fundamental solution and the adjoint fundamental solution in a classical sense, that is, with pointwise well-defined spatial derivatives up to second order and a pointwise well-defined partial time derivative up to first order (in the domain where it is continuous). For the sake of possible generalisations in the next section we consider the more general operator

$$L\equiv\frac{\partial}{\partial t}-\sum_{i,j=1}^{n}a_{ij}\frac{\partial^{2}}{\partial x_{i}\partial x_{j}}-\sum_{i=1}^{n}b_{i}\frac{\partial}{\partial x_{i}}-c. \tag{9}$$

We include even the potential term coefficient $c$ because such a coefficient appears in the adjoint even if $c=0$. Recall that the adjoint operator is given by

$$L^{*}\equiv-\frac{\partial}{\partial t}+\sum_{i,j=1}^{n}a^{*}_{ij}\frac{\partial^{2}}{\partial x_{i}\partial x_{j}}+\sum_{i=1}^{n}b^{*}_{i}\frac{\partial}{\partial x_{i}}+c^{*}, \tag{10}$$

where

$$a^{*}_{ij}=a_{ij},\qquad b^{*}_{i}=2\sum_{j=1}^{n}a_{ij,j}-b_{i},\qquad c^{*}=c+\sum_{i,j=1}^{n}a_{ij,ij}-\sum_{i=1}^{n}b_{i,i}. \tag{11}$$

Here we use Einstein-type comma notation, that is, $a_{ij,j}:=(\partial/\partial x_{j})a_{ij}$, $a_{ij,ij}:=(\partial^{2}/\partial x_{i}\partial x_{j})a_{ij}$, and $b_{i,i}:=(\partial/\partial x_{i})b_{i}$, for the sake of brevity. In this section we will assume that $b_{i}\equiv 0$ and $c\equiv 0$. Note that even in this restrictive situation we have $b^{*}_{i}\ne 0$ and $c^{*}\ne 0$ in general.
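As a quick sanity check of (11) (added for illustration, not part of the original), take the one-dimensional operator $L=\partial_t-a(x)\,\partial_x^2$ with $b\equiv 0$ and $c\equiv 0$; two integrations by parts against smooth compactly supported test functions recover the adjoint coefficients:

```latex
% One-dimensional check of (11) for L = \partial_t - a(x)\,\partial_x^2:
\int \bigl(a\,u_{xx}\bigr)v\,dx
  = \int u\,(a v)_{xx}\,dx
  = \int u\,\bigl(a\,v_{xx} + 2a_{,x}v_{x} + a_{,xx}v\bigr)\,dx,
% so L^* = -\partial_t + a\,\partial_x^2 + 2a_{,x}\,\partial_x + a_{,xx},
% i.e. b^* = 2a_{,x} and c^* = a_{,xx}, as (11) states for n = 1, b = c = 0.
% (The time derivative changes sign under the time integration by parts.)
```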
For our purposes it suffices to assume that the coefficients depend only on the spatial variables (the generalisation to additional time dependence is straightforward). In order that the adjoint exists in a classical sense we should have bounded continuous derivatives. We assume the following.

(i)
$$a_{ij}\in C^{2}\cap H^{2},\quad\forall\,1\le i,j\le n, \tag{12}$$
where $C^{m}\equiv C^{m}(\mathbb{R}^{n})$ denotes the space of real-valued $m$-times continuously differentiable functions and $H^{m}$ denotes the standard Sobolev space of order $m\ge 0$. In the next section we assume in addition that $b_{i}\in C^{1}\cap H^{1}$ for all $1\le i\le n$. For the following considerations concerning the adjoint we assume that $c\in C^{0}\cap L^{2}$ if a potential coefficient is considered.

(ii) We have uniform ellipticity; that is, there exist $0<\lambda<\Lambda<\infty$ such that
$$\lambda|y|^{2}\le\sum_{i,j=1}^{n}a_{ij}(x)\,y_{i}y_{j}\le\Lambda|y|^{2},\quad\forall x,y\in\mathbb{R}^{n}. \tag{13}$$

We use one observation concerning the adjoint. Note that the adjoint equations are known in the context of probability theory as the forward and backward Kolmogorov equations. The derivation of these equations (cf. Feller’s classical treatment) shows that the density and its adjoint are equal and that it is possible to switch from a representation of Cauchy problem solutions in terms of the backward density to an equivalent representation in terms of the forward equations. For example, forward and backward representations of option prices for regime-switching models are used in [9]. However, the essential additional observation we need here is the relation between the partial derivatives of densities and those of their adjoint densities. We again use comma notation for classical derivatives.

Lemma 4.
Assume that conditions (12) and (13) hold, let $p$ be the fundamental solution of
$$Lp=0, \tag{14}$$
and let $p^{*}$ be the fundamental solution of
$$L^{*}p^{*}=0. \tag{15}$$
Then for $s<t$ and $x,y\in\mathbb{R}^{n}$ the functions $p$, $p^{*}$ have spatial derivatives up to order 2, and
$$p(t,x;s,y)=p^{*}(s,y;t,x),\quad p_{,i}(t,x;s,y)=p^{*}_{,i}(s,y;t,x),\quad p_{,ij}(t,x;s,y)=p^{*}_{,ij}(s,y;t,x). \tag{16}$$
Here, for $x=(x_{1},\dots,x_{n})$ and $e_{i}=(e_{i1},\dots,e_{in})$ with $e_{ij}=\delta_{ij}$ (Kronecker $\delta$), and for $s<t$, $x,y\in\mathbb{R}^{n}$, we denote
$$p_{,i}(t,x;s,y)=\lim_{h\downarrow 0}\frac{p(t,x+he_{i};s,y)-p(t,x;s,y)}{h},\qquad
p_{,ij}(t,x;s,y)=\lim_{h\downarrow 0}\frac{p_{,i}(t,x+he_{j};s,y)-p_{,i}(t,x;s,y)}{h},$$
$$p^{*}_{,i}(s,y;t,x)=\lim_{h\downarrow 0}\frac{p^{*}(s,y+he_{i};t,x)-p^{*}(s,y;t,x)}{h},\qquad
p^{*}_{,ij}(s,y;t,x)=\lim_{h\downarrow 0}\frac{p^{*}_{,i}(s,y+he_{j};t,x)-p^{*}_{,i}(s,y;t,x)}{h}. \tag{17}$$

Proof.
For $q(\tau,z)=p(\tau,z;s,y)$ and $r(\tau,z)=p^{*}(\tau,z;t,x)$, with $s<\tau<t$, we show that for $1\le i,j\le n$
$$q(t,x)=r(s,y),\qquad q_{,i}(t,x)=r_{,i}(s,y),\qquad q_{,ij}(t,x)=r_{,ij}(s,y) \tag{18}$$
hold. Let $B_{R}$ be the ball of radius $R$ around zero. As $s<t$ there exists $\delta>0$ such that $s+\delta<t-\delta$, and using Green’s identity, Gaussian upper bounds for the fundamental solution and its first-order spatial derivatives, $Lq=0$, and $L^{*}r=0$, we get
$$0=\lim_{R\uparrow\infty}\int_{s+\delta}^{t-\delta}\int_{B_{R}}\frac{\partial}{\partial\tau}\bigl(qr\bigr)(\tau,z)\,dz\,d\tau
=\int_{\mathbb{R}^{n}}q(t-\delta,z)\,p^{*}(t-\delta,z;t,x)\,dz-\int_{\mathbb{R}^{n}}r(s+\delta,z)\,p(s+\delta,z;s,y)\,dz. \tag{19}$$
This leads to the identities
$$\int_{\mathbb{R}^{n}}q_{,i}(t-\delta,z)\,p^{*}(t-\delta,z;t,x)\,dz=\int_{\mathbb{R}^{n}}r_{,i}(s+\delta,z)\,p(s+\delta,z;s,y)\,dz,$$
$$\int_{\mathbb{R}^{n}}q_{,ij}(t-\delta,z)\,p^{*}(t-\delta,z;t,x)\,dz=\int_{\mathbb{R}^{n}}r_{,ij}(s+\delta,z)\,p(s+\delta,z;s,y)\,dz. \tag{20}$$
In the limit $\delta\downarrow 0$ we get the relations stated.
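For orientation, a standard special case (added for illustration, not part of the proof): for the heat operator the duality of Lemma 4 can be read off the Gaussian kernel directly.

```latex
% Heat-kernel illustration of Lemma 4 (a_{ij} = \tfrac12\delta_{ij}, b = c = 0):
p(t,x;s,y) = \bigl(2\pi(t-s)\bigr)^{-n/2}
             \exp\!\Bigl(-\frac{|x-y|^{2}}{2(t-s)}\Bigr) = p^{*}(s,y;t,x),
% and since p depends on the spatial variables only through x - y,
% \partial_{x_i}\partial_{x_j}\,p = \partial_{y_i}\partial_{y_j}\,p, i.e.
% p_{,ij}(t,x;s,y) = p^{*}_{,ij}(s,y;t,x), the relation used in (23) below.
```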
For technical reasons we need more approximations concerning the data. As we are aiming at a pointwise comparison result and we have Gaussian upper bounds, it suffices to consider approximating data which are regular convex in a core region and decay to zero at spatial infinity. We have the following.

Proposition 5.
Let $f\in C(\mathbb{R})$ be a real-valued continuous convex function. Let $B_{R}\subset\mathbb{R}^{n}$ be the ball of finite radius $R$ around the origin. Then there is a function $f^{\epsilon}_{R}\in C^{2}\cap H^{2}$ such that

(i)
$$\bigl|f(x)-f^{\epsilon}_{R}(x)\bigr|\le\epsilon,\quad\forall x\in B_{R}; \tag{21}$$

(ii) the second (classically well-defined) derivative is strictly positive; that is,
$$\bigl(f^{\epsilon}_{R}\bigr)''(x)>0,\quad\forall x\in B_{R}. \tag{22}$$

Proposition 5 can be proved by using regular polynomial interpolation. Here the fact that classical second-order derivatives of the convex continuous function $f$ exist almost everywhere can be used. The function $f^{\epsilon}_{R}$ is of course not convex in general, but it is convex in the core region $B_{R}$. For all $\epsilon>0$ and all $R>0$, using Lemma 4 and integration by parts we get
$$v^{\epsilon,R}_{,ij}(t,x)=\int_{\mathbb{R}^{n}}f^{\epsilon}_{R}\Bigl(\sum_{i=1}^{n}c_{i}y_{i}\Bigr)p_{,ij}(t,x;0,y)\,dy
=\int_{\mathbb{R}^{n}}f^{\epsilon}_{R}\Bigl(\sum_{i=1}^{n}c_{i}y_{i}\Bigr)p^{*}_{,ij}(0,y;t,x)\,dy
=\int_{\mathbb{R}^{n}}c_{i}c_{j}\,\bigl(f^{\epsilon}_{R}\bigr)''\Bigl(\sum_{i=1}^{n}c_{i}y_{i}\Bigr)p^{*}(0,y;t,x)\,dy. \tag{23}$$
Here, for a univariate function $g\in C^{2}$, the symbol $g''$ denotes its second derivative. Since $(f^{\epsilon}_{R})''(z)>0$ for all $z\in B_{R}$, and $p^{*}\ge 0$, and by the standard Gaussian estimate
$$p^{*}(\sigma,\eta;\tau,\xi)\le\frac{C^{*}}{\sqrt{(\tau-\sigma)^{n}}}\exp\Bigl(-\lambda^{*}\frac{|\eta-\xi|^{2}}{\tau-\sigma}\Bigr) \tag{24}$$
for some finite constants $C^{*},\lambda^{*}$, we get from (23) that
$$\forall r>0,\ \forall x\in B_{r},\ \exists R_{0}>r\ \text{such that}\ \forall R\ge R_{0}:\quad v^{\epsilon,R}_{,ij}(t,x)\ge 0, \tag{25}$$
which means that the Hessian is positive in a smaller core region $B_{r}=\{x\mid |x|\le r\}$ for $R$ large enough. Furthermore, classical regularity theory (cf. [10] and references therein) tells us that
$$\forall\epsilon>0:\ v^{\epsilon}(t,\cdot)\in C^{2},\qquad\forall\epsilon>0,\ \forall R>0:\ v^{\epsilon,R}(t,\cdot)\in C^{2}, \tag{26}$$
where $v^{\epsilon}(t,\cdot)=\lim_{R\uparrow\infty}v^{\epsilon,R}(t,\cdot)$.

Remark 6.
Actually, the regularity $v^{\epsilon}(t,\cdot)\in C^{2}$ follows from the smoothness of the density $p$ for positive time and holds in the more general context of highly degenerate parabolic equations of second order (cf. [11]). In this paper we consider equations with uniformly elliptic second-order part, because this implies that the density, its adjoint, and the spatial derivatives up to second order decay to zero at spatial infinity. This is not true in general for highly degenerate equations (cf. [12]). Extensions to some classes of degenerate equations are possible (cf. [10]).

It follows that
$$\forall x\in\mathbb{R}^{n},\ \forall t\in(0,T],\ \exists R_{0}>0\ \text{such that}\ \forall R\ge R_{0},\ \forall\epsilon>0:\quad \operatorname{Tr}\bigl(A(x)D^{2}v^{\epsilon,R}(t,x)\bigr)\ge 0, \tag{27}$$
where $A(x)=(a_{ij}(x))$ is the coefficient matrix and $D^{2}v^{\epsilon,R}(t,x)$ is the Hessian of $v^{\epsilon,R}$ evaluated at $(t,x)$. Hence,
$$\forall x\in\mathbb{R}^{n},\ \forall t\in(0,T],\ \forall\epsilon>0:\quad \operatorname{Tr}\bigl(A(x)D^{2}v^{\epsilon}(t,x)\bigr)\ge 0, \tag{28}$$
and as the $\epsilon\downarrow 0$ limit of the Hessian is well defined for $t\in(0,T]$ we get
$$\forall x\in\mathbb{R}^{n},\ \forall t\in(0,T]:\quad \operatorname{Tr}\bigl(A(x)D^{2}v(t,x)\bigr)\ge 0. \tag{29}$$

Now consider matrices $(a_{ij}(v_{1}))$ and $(a_{ij}(v_{2}))$, where $v_{1}$ and $v_{2}$ solve
$$\frac{\partial v_{1}}{\partial t}-\sum_{ij}a_{ij}(v_{1})\frac{\partial^{2}v_{1}}{\partial x_{i}\partial x_{j}}=0,\qquad \frac{\partial v_{2}}{\partial t}-\sum_{ij}a_{ij}(v_{2})\frac{\partial^{2}v_{2}}{\partial x_{i}\partial x_{j}}=0, \tag{30}$$
and $v_{1}(0,\cdot)=v_{2}(0,\cdot)$. Note that $\delta v=v_{2}-v_{1}$ satisfies
$$\frac{\partial\,\delta v(t,x)}{\partial t}=\sum_{ij}\bigl(a_{ij}(v_{2})-a_{ij}(v_{1})\bigr)\frac{\partial^{2}v_{1}}{\partial x_{i}\partial x_{j}}+\sum_{ij}a_{ij}(v_{2})\frac{\partial^{2}\,\delta v}{\partial x_{i}\partial x_{j}}, \tag{31}$$
where $\delta v(0,x)=0$ for all $x\in\mathbb{R}^{n}$. We have the classical representation
$$\delta v(t,x)=\int_{0}^{t}\int_{\mathbb{R}^{n}}\sum_{ij}\bigl(a_{ij}(v_{2})-a_{ij}(v_{1})\bigr)(s,y)\,\frac{\partial^{2}v_{1}}{\partial x_{i}\partial x_{j}}(s,y)\,p_{v_{2}}(t,x;s,y)\,ds\,dy, \tag{32}$$
where $p_{v_{2}}$ is the fundamental solution of
$$\frac{\partial\,\delta v}{\partial t}-\sum_{ij}a_{ij}(v_{2})\frac{\partial^{2}\,\delta v}{\partial x_{i}\partial x_{j}}=0. \tag{33}$$
Since $\sum_{ij}(a_{ij}(v_{2})-a_{ij}(v_{1}))(s,y)\,(\partial^{2}v_{1}/\partial x_{i}\partial x_{j})(s,y)\ge 0$ and $p_{v_{2}}(t,x;s,y)\ge 0$, we conclude that $\delta v\ge 0$. Now we have proved the main theorem for $a_{ij}\in C^{2}\cap H^{2}$.

Next, for each $\epsilon>0$ and $R>0$ there exists a matrix $(a^{\epsilon,R}_{ij})$ with components in $C^{2}\cap H^{2}$, where for all $x\in\mathbb{R}^{n}$
$$a^{\epsilon,R}(x)=\sigma^{\epsilon,R}\,\sigma^{\epsilon,R,T}(x), \tag{34}$$
with $\sigma^{\epsilon,R,T}(x)$ being the transpose of $\sigma^{\epsilon,R}(x)$, and where
$$\sup_{x\in B_{R}}\bigl|\sigma^{\epsilon,R}(x)-\sigma(x)\bigr|\le\epsilon. \tag{35}$$
Here $\sigma$ is the original dispersion matrix related to the process $X$ of the main theorem (which is assumed to be bounded and Lipschitz-continuous). Consider the following:
$$X^{\epsilon,R}(t)=X(0)+\int_{0}^{t}\sigma^{\epsilon,R}\bigl(X^{\epsilon,R}(s)\bigr)\,dW(s). \tag{36}$$
For $\rho^{\epsilon,R}(x)$, which satisfies analogous conditions, we define
$$Y^{\epsilon,R}(t)=Y(0)+\int_{0}^{t}\rho^{\epsilon,R}\bigl(Y^{\epsilon,R}(s)\bigr)\,dW(s). \tag{37}$$
Then the preceding argument shows that for $\sigma^{\epsilon,R}\sigma^{\epsilon,R,T}\le\rho^{\epsilon,R}\rho^{\epsilon,R,T}$ and for $0\le t\le T$ we have
$$\forall\epsilon>0,\ \forall r>0,\ \forall x\in B_{r},\ \exists R_{0}\ \text{such that}\ \forall R\ge R_{0}:\quad E^{x}f^{\epsilon,R}\Bigl(\sum_{i=1}^{n}c_{i}X^{\epsilon,R}_{i}(t)\Bigr)\le E^{x}f^{\epsilon,R}\Bigl(\sum_{i=1}^{n}c_{i}Y^{\epsilon,R}_{i}(t)\Bigr). \tag{38}$$
This leads to
$$\forall r>0,\ \forall x\in B_{r},\ \exists R_{0}\ \text{such that}\ \forall R\ge R_{0}:\quad E^{x}f_{R}\Bigl(\sum_{i=1}^{n}c_{i}X^{R}_{i}(t)\Bigr)\le E^{x}f_{R}\Bigl(\sum_{i=1}^{n}c_{i}Y^{R}_{i}(t)\Bigr), \tag{39}$$
where the $X^{R}$ are the processes
$$X^{R}(t)=X(0)+\int_{0}^{t}\sigma_{R}\bigl(X^{R}(s)\bigr)\,dW(s), \tag{40}$$
with a bounded continuous $\sigma_{R}$ which satisfies
$$\sigma_{R}(x)-\sigma(x)=0\quad\forall x\in B_{R}. \tag{41}$$
The process $Y^{R}$ is defined analogously. Similarly, $f_{R}$ is a limit of functions $f^{\epsilon,R}\in C^{2}\cap H^{2}$ which equal $f$ on $B_{R}$. In (39), $X^{R}$ can be replaced by $X$ and $Y^{R}$ by $Y$ by the probability law of the processes, and a limit consideration for data which for each $R$ equal the function $f$ on $B_{R}$ leads to the statement of the theorem, using a uniform exponential bound for the data functions, the boundedness of the Lipschitz-continuous coefficients, and the Gaussian law of the Brownian motion.
## 3. Additional Note for the Proof of Theorem 2
If $w_{1}$ and $w_{2}$ solve
$$\frac{\partial w_{1}}{\partial t}-\sum_{ij}a_{ij}(w_{1})\frac{\partial^{2}w_{1}}{\partial x_{i}\partial x_{j}}+\sum_{i}b_{i}(w_{1})(x)\frac{\partial w_{1}}{\partial x_{i}}=0,\qquad
\frac{\partial w_{2}}{\partial t}-\sum_{ij}a_{ij}(w_{2})\frac{\partial^{2}w_{2}}{\partial x_{i}\partial x_{j}}+\sum_{i}b_{i}(w_{2})(x)\frac{\partial w_{2}}{\partial x_{i}}=0, \tag{42}$$
and $w_{1}(0,\cdot)=w_{2}(0,\cdot)$, note that $\delta w=w_{2}-w_{1}$ satisfies
$$\frac{\partial\,\delta w(t,x)}{\partial t}=\sum_{ij}\bigl(a_{ij}(w_{2})-a_{ij}(w_{1})\bigr)\frac{\partial^{2}w_{1}}{\partial x_{i}\partial x_{j}}+\sum_{ij}a_{ij}(w_{2})\frac{\partial^{2}\,\delta w}{\partial x_{i}\partial x_{j}}-\sum_{i}b_{i}(w_{2})\frac{\partial\,\delta w}{\partial x_{i}}+\sum_{i}\bigl(b_{i}(w_{2})(x)-b_{i}(w_{1})(x)\bigr)\frac{\partial w_{1}}{\partial x_{i}}, \tag{43}$$
where $\delta w(0,x)=0$ for all $x\in\mathbb{R}^{n}$. Consider the representation
$$\delta w(t,x)=\int_{0}^{t}\int_{\mathbb{R}^{n}}\Bigl[\sum_{ij}\bigl(a_{ij}(w_{2})-a_{ij}(w_{1})\bigr)(s,y)\,w_{1,ij}(s,y)+\sum_{i}\bigl(b_{i}(w_{2})-b_{i}(w_{1})\bigr)(y)\,\frac{\partial w_{1}}{\partial x_{i}}(s,y)\Bigr]p_{w_{2}}(t,x;s,y)\,ds\,dy, \tag{44}$$
where $p_{w_{2}}$ is the fundamental solution of
$$\frac{\partial\,\delta w}{\partial t}-\sum_{ij}a_{ij}(w_{2})\frac{\partial^{2}\,\delta w}{\partial x_{i}\partial x_{j}}+\sum_{i}b_{i}(w_{2})\frac{\partial\,\delta w}{\partial x_{i}}=0. \tag{45}$$
Since $\sum_{ij}(a_{ij}(w_{2})-a_{ij}(w_{1}))(s,y)\,(\partial^{2}w_{1}/\partial x_{i}\partial x_{j})(s,y)\ge 0$ and $p_{w_{2}}(t,x;s,y)\ge 0$, we conclude that $\delta w\ge 0$ if $\sum_{i}(b_{i}(w_{2})(x)-b_{i}(w_{1})(x))\,\partial w_{1}/\partial x_{i}\ge 0$. As $b_{i}(w_{2})(x)-b_{i}(w_{1})(x)\ge 0$ for all $x$, this condition reduces to the monotonicity condition $\partial w_{1}/\partial x_{i}\ge 0$. The latter monotonicity condition for the value function $w_{1}$ can be proved via the adjoint, using the same trick as in the preceding section.

Remark 7.
These notes are from my unpublished lecture notes “Die Fundamentallösung Parabolischer Gleichungen und Schwache Schemata Höherer Ordnung für Stochastische Diffusionsprozesse” (WS 2005/2006, Heidelberg). The argument given there is published now upon request, as research is ongoing concerning applications of comparison principles. Originally the relevance of stochastic comparison results was pointed out to the author by P. Laurence and V. Henderson. The main theorems proved here are stated essentially in the conference notes [6, 7] but were not strictly proved there. In those notes applications to American options and to passport options are considered. For example, explicit solutions for optimal strategies related to the optimal control problem of passport options, and the dependence of such strategies on correlations between assets, can be obtained. The proof given here can be applied in the univariate case as well and recovers the result of Hajek [2] that is used in [1].
---
*Source: 1018509-2016-09-05.xml* | 2016 |
# Direct and Inverse Approximation Theorems for Baskakov Operators with the Jacobi-Type Weight
**Authors:** Guo Feng
**Journal:** Abstract and Applied Analysis
(2011)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2011/101852
---
## Abstract
We introduce a new norm and a new $K$-functional $K_{\varphi^{\lambda}}(f;t)_{w,\lambda}$. Using this $K$-functional, direct and inverse approximation theorems for the Baskakov operators with the Jacobi-type weight are obtained in this paper.
---
## Body
## 1. Introduction and Main Results
Let $f$ be a function defined on the interval $[0,\infty)$. The operators $V_n(f;x)$ are defined as follows:
$$V_n(f;x)=\sum_{k=0}^{\infty}f\Bigl(\frac{k}{n}\Bigr)v_{n,k}(x), \tag{1.1}$$
where
$$v_{n,k}(x)=\binom{n+k-1}{k}x^{k}(1+x)^{-n-k}. \tag{1.2}$$
These operators were introduced by Baskakov in 1957 [1]. Becker [2] and Ditzian [3] studied these operators and obtained direct and converse theorems. In [4, 5] Totik gave the following result: if $f\in C_B[0,+\infty)$ and $0<\alpha<1$, then $\|V_n(f;x)-f(x)\|_{\infty}=O(n^{-\alpha})$ if and only if $x^{\alpha}(1+x)^{\alpha}|\Delta_h^2(f;x)|\le kh^{2\alpha}$, where $h>0$ and $k$ is a positive constant. We may formulate the following question: do the Baskakov operators have a similar property in the case of weighted approximation with the Jacobi weights? It is well known that the weighted approximation is not a simple extension, because the Baskakov operators are unbounded for the usual weighted norm $\|f\|_w=\|wf\|_{\infty}$. Xun and Zhou [6] introduced the norm
$$\|f\|_w=\|wf\|_{\infty}+|f(0)|,\quad f\in C_B[0,\infty), \tag{1.3}$$
and discussed the rate of convergence for the Baskakov operators with the Jacobi weights, obtaining
$$w(x)|V_n(f;x)-f(x)|=O(n^{-\alpha})\Longleftrightarrow K(f;t)_w=O(t^{\alpha}), \tag{1.4}$$
where $w(x)=x^{a}(1+x)^{-b}$, $0<a<1$, $b>0$, $0<\alpha<1$, and $C_B[0,\infty)$ is the set of bounded continuous functions on $[0,\infty)$.

In this paper, we introduce a new norm and a new $K$-functional; using this $K$-functional, we obtain direct and inverse approximation theorems for the Baskakov operators with the Jacobi-type weight.

First, we introduce some useful definitions and notations.

Definition 1.1.
Let $C_B[0,\infty)$ denote the set of bounded continuous functions on the interval $[0,\infty)$, and let
$$C_{a,b,\lambda}=\{f\mid f\in C_B[0,\infty),\ \varphi^{2(2-\lambda)}wf\in C_B[0,\infty)\},\qquad C^{0}_{a,b,\lambda}=\{f\mid f\in C_{a,b,\lambda},\ f(0)=0\}, \tag{1.5}$$
where $\varphi(x)=\sqrt{x(1+x)}$, $w(x)=x^{a}(1+x)^{-b}$, $x\in[0,\infty)$, $0\le a<\lambda\le 1$, and $b\ge 0$.
Moreover, the $K$-functional is given by
$$K_{\varphi^{\lambda}}(f;t)_{w,\lambda}=\inf_{g\in D}\bigl\{\|\varphi^{2(1-\lambda)}(f-g)\|_w+t\|\varphi^{2(2-\lambda)}g''\|_w\bigr\}, \tag{1.6}$$
where $D=\{g\mid g\in C^{0}_{a,b,\lambda},\ g'\in A.C._{\mathrm{loc}}[0,\infty),\ \|\varphi^{2(2-\lambda)}g''\|_w<\infty\}$.
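As a concrete illustration of definition (1.1) (an addition for the reader, not part of the paper), the truncated sum below evaluates $V_n(f;x)$ numerically; the truncation level, the test function, and the evaluation point are arbitrary choices, and the weights are computed in log space to avoid overflow:

```python
import math

def baskakov(f, n, x, kmax=2000):
    """Truncated Baskakov operator V_n(f; x) = sum_k f(k/n) v_{n,k}(x), for x > 0."""
    total = 0.0
    for k in range(kmax + 1):
        # log v_{n,k}(x) = log C(n+k-1, k) + k log x - (n+k) log(1+x)
        logv = (math.lgamma(n + k) - math.lgamma(k + 1) - math.lgamma(n)
                + k * math.log(x) - (n + k) * math.log1p(x))
        total += f(k / n) * math.exp(logv)
    return total

f = lambda t: t / (1.0 + t)  # a bounded continuous test function
for n in (10, 100, 1000):
    print(n, abs(baskakov(f, n, 0.5) - f(0.5)))  # the error shrinks as n grows
```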
We are now in a position to state our main results.

Theorem 1.2.
If $f\in C^{0}_{a,b,\lambda}$, then
$$\|\varphi^{2(1-\lambda)}(V_n(f)-f)\|_w\le MK_{\varphi^{\lambda}}(f;n^{-1})_{w,\lambda}. \tag{1.7}$$

Theorem 1.3.
Suppose $f\in C^{0}_{a,b,\lambda}$ and $0<\alpha<1$. Then the following statements are equivalent:
$$\text{(1)}\ \varphi^{2(1-\lambda)}(x)w(x)|V_n(f;x)-f(x)|=O(n^{-\alpha}),\ n\ge 2;\qquad \text{(2)}\ K_{\varphi^{\lambda}}(f;t)_{w,\lambda}=O(t^{\alpha}),\ 0<t<1. \tag{1.8}$$

Throughout this paper, $M$ denotes a positive constant, independent of $x$, $n$, and $f$, which may be different in different places. It is worth mentioning that for $\lambda=1$ we recover the results of [6].
## 2. Auxiliary Lemmas
To prove the theorems, we need some lemmas. By simple computation, we have(2.1)Vn′′(f;x)=n(n+1)∑k=0∞vn+2,k(x)(f(k+2n)-2f(k+1n)+f(kn))
or (2.2)Vn′′(f;x)=∑k=0∞vn,k(x)f(kn)(k(k-1)x2-2k(n+k)x(1+x)+(n+k)(n+k+1)(1+x)2).Lemma 2.1.
Let c≥0, d∈ℝ. Then
(2.3)∑k=1∞vn,k(x)(kn)-c(1+kn)-d≤Mx-c(1+x)-d,forx>0.Proof.
We notice [7]
(2.4)∑k=1∞vn,k(x)(nk)l≤Mx-l,forl∈N,∑k=0∞vn,k(x)(1+kn)m≤M(1+x)m,form∈Z.
For c=0,d=0, the result of (2.3) is obvious. For c>0,d≠0, there exists m∈ℤ, such that 0<-2d/m<1. Using Hölder's inequality, we have
(2.5)∑k=1∞vn,k(x)(kn)-c(1+kn)-d≤(∑k=1∞vn,k(x)(kn)-2c)1/2(∑k=1∞vn,k(x)(1+kn)-2d)1/2≤(∑k=1∞vn,k(x)(nk)[2c]+1)c/([2c]+1)(∑k=1∞vn,k(x)(1+kn)m)-d/m≤M(x-([2c]+1))c/([2c]+1)((1+x)m)-d/m≤Mx-c(1+x)-d.
For c>0,d=0 or c=0,d≠0, the proof is similar to that of (2.5). Thus, this proof is completed.Lemma 2.2.
Let f∈Ca,b,λ0, n∈ℕ. Then
(2.6)|w(x)φ2(1-λ)(x)Vn(f;x)|≤M‖φ2(1-λ)f‖w.Proof.
By Lemma 2.1, we get
(2.7)|w(x)φ2(1-λ)(x)Vn(f;x)|=|w(x)φ2(1-λ)(x)∑k=1∞f(kn)vn,k(x)|≤‖φ2(1-λ)f‖ww(x)φ2(1-λ)(x)∑k=1∞vn,k(x)w-1(kn)φ2(λ-1)(kn)≤M‖φ2(1-λ)f‖w.Lemma 2.3.
Let f∈Ca,b,λ0, n∈ℕ. Then
(2.8)‖φ2(2-λ)Vn′′(f)‖w≤Mn‖φ2(1-λ)f‖w.Proof.
For x∈Enc=[0,1/n], x≠0, we have (n+1)x(1+x)≤2n·2x≤4; using (2.1) and Lemma 2.1, we have
(2.9)|w(x)φ2(2-λ)(x)Vn′′(f;x)|≤w(x)φ2(1-λ)(x)n(n+1)x(1+x)×(∑k=0∞vn+2,k(x)w-1(k+2n)φ-2(1-λ)(k+2n)+2∑k=0∞vn+2,k(x)w-1(k+1n)φ-2(1-λ)(k+1n)+∑k=1∞vn+2,k(x)w-1(kn)φ-2(1-λ)(kn))‖φ2(1-λ)f‖w≤Mnw(x)φ2(1-λ)(x)w-1(x)φ-2(1-λ)(x)‖φ2(1-λ)f‖w≤Mn‖φ2(1-λ)f‖w.
For x∈En=(1/n,∞), by (2.2), we get
(2.10)|w(x)φ2(2-λ)(x)Vn′′(f;x)|=|n2w(x)φ-2λ(x)∑k=1∞vn,k(x)f(kn)((kn-x)2-1+2xn(kn-x)-x(1+x)n)|≤n2w(x)φ-2λ(x)‖φ2(1-λ)f‖w∑k=1∞vn,k(x)w-1(kn)φ-2(1-λ)(kn)⋅((kn-x)2+1+2xn|kn-x|+x(1+x)n)∶=n2w(x)φ-2λ(x)‖φ2(1-λ)f‖w(I1(n,x)+I2(n,x)+I3(n,x)).
Note that for x∈En, one has the following inequality [7]
(2.11)n2mVn((t-x)2m;x)≤Mnm(φ(x))2m,m∈N.
Applying Hölder’s inequality and Lemma 2.1, we have
(2.12)I1(n,x)=∑k=1∞vn,k(x)w-1(kn)φ-2(1-λ)(kn)(kn-x)2≤(∑k=1∞vn,k(x)w-2(kn)φ-4(1-λ)(kn))1/2(∑k=1∞vn,k(x)(kn-x)4)1/2≤Mx-a-1+λ(1+x)b+λ-1x(1+x)n≤Mn-1w-1(x)φ2λ(x),(2.13)I2(n,x)=∑k=1∞vn,k(x)w-1(kn)φ-2(1-λ)(kn)|kn-x|1+2xn≤1+2xn(∑k=1∞vn,k(x)w-2(kn)φ-4(1-λ)(kn))1/2(∑k=1∞vn,k(x)(kn-x)2)1/2≤Mw-1(x)φ2λ(x)n-3/2(1+1x)1/2.
Note that for x>1/n, one has 1+1/x<2n. Hence,
(2.14)I2(n,x)≤Mn-1w-1(x)φ2λ(x),(2.15)I3(n,x)=∑k=1∞vn,k(x)w-1(kn)φ-2(1-λ)(kn)x(1+x)n≤Mn-1x(1+x)x-a-1+λ(1+x)b-1+λ=Mn-1w-1(x)φ2λ(x).
Combining (2.9)–(2.14), we get
(2.16)|w(x)φ2(2-λ)(x)Vn′′(f;x)|w≤Mn‖φ2(1-λ)f‖w.
Thus,
(2.17)‖φ2(2-λ)Vn′′(f)‖w≤Mn‖φ2(1-λ)f‖w.
The proof is completed.Lemma 2.4.
Let f∈D, n∈ℕ, and n≥2. Then
(2.18)‖φ2(2-λ)Vn′′(f)‖w≤M‖φ2(2-λ)f′′‖w.Proof.
(1) For the case λ≠1 or a≠0, if λ-2+b≥0, using (2.1) and Lemma 2.1, we have
(2.19)|w(x)φ2(2-λ)(x)Vn′′(f;x)|=|w(x)φ2(2-λ)(x)n(n+1)∑k=0∞vn+2,k(x)∫01/n∫01/nf′′(kn+u+v)dudv|≤‖φ2(2-λ)f′′‖w|w(x)φ2(2-λ)(x)n(n+1)∑k=0∞vn+2,k(x)⋅∫01/n∫01/n(kn+u+v)λ-a-2(1+kn+u+v)λ+b-2dudv|≤‖φ2(2-λ)f′′‖w|w(x)φ2(2-λ)(x)n(n+1)∑k=1∞vn+2,k(x)∫01/n∫01/n(kn)λ-a-2(1+k+2n)λ+b-2dudv|+‖φ2(2-λ)f′′‖w|w(x)φ2(2-λ)(x)n(n+1)vn+2,0(x)∫01/n∫01/n(u+v)λ-a-2(1+u+v)λ+b-2dudv|≤‖φ2(2-λ)f′′‖ww(x)φ2(2-λ)(x)n(n+1)∑k=1∞vn+2,k(x)n-2(kn)λ-a-2(1+k+2n)λ+b-2+3λ+b-2‖φ2(2-λ)f′′‖ww(x)φ2(2-λ)(x)n(n+1)vn+2,0(x)∫01/n11+a-λuλ-a-1du≤2⋅3λ+b-2‖φ2(2-λ)f′′‖ww(x)φ2(2-λ)(x)∑k=1∞vn+2,k(x)(kn)λ-a-2(1+kn)λ+b-2+2⋅3λ+b-2‖φ2(2-λ)f′′‖ww(x)φ2(2-λ)(x)n(n+1)vn+2,0(x)1(1+a-λ)(λ-a)(1n)λ-a≤2⋅3λ+b-2‖φ2(2-λ)f′′‖w(1+x2+a-λn(n+1)(1+a-λ)(λ-a)(1+x)n(1n)λ-a).
(i)
Ifx∈Enc,(2.20)x2+a-λn(n+1)(1+a-λ)(λ-a)(1+x)n(1n)λ-a≤n(n+1)(1+a-λ)(λ-a)(1n)2≤2(1+a-λ)(λ-a).(ii)
Ifx∈En,n≥2,(2.21)x2+a-λn(n+1)(1+a-λ)(λ-a)(1+x)n(1n)λ-a≤nλ-ax2n(n+1)(1+a-λ)(λ-a)(1+x)n(1n)λ-a≤x2n(n+1)(1+a-λ)(λ-a)n(n-1)x2≤3(1+a-λ)(λ-a).
Combining (2.19)–(2.21), we have
(2.22)|w(x)φ2(2-λ)(x)Vn′′(f;x)|≤M‖φ2(2-λ)f′′‖w.
Thus,
(2.23)‖φ2(2-λ)(x)Vn′′(f)‖w≤M‖φ2(2-λ)f′′‖w.
If λ-2+b<0, we have
(2.24)|w(x)φ2(2-λ)(x)Vn′′(f;x)|≤‖φ2(2-λ)f′′‖w|w(x)φ2(2-λ)(x)n(n+1)∑k=1∞vn+2,k(x)∫01/n∫01/n(kn)λ-a-2(1+kn)λ+b-2dudv|+‖φ2(2-λ)f′′‖w|w(x)φ2(2-λ)(x)n(n+1)vn+2,0(x)∫01/n∫01/n(u+v)λ-a-2dudv|.
By using the method similar to that of (2.19)–(2.23), it is not difficult to obtain the same inequality as (2.23).
(2) For the case λ=1, a=0, the proof is similar to that of case (1) and even simpler. Therefore the proof is completed.

Lemma 2.5 (see [8, page 200]).
Let Ω(t) be an increasing positive function on (0,a), and suppose that the inequality (2.25)Ω(h)≤M[tα+(h/t)rΩ(t)] (with r>α)
holds for h,t∈(0,a). Then one has
(2.26)Ω(t)=O(tα).
## 3. Proofs of Theorems
### 3.1. Proof of Theorem 1.2
Proof.
First, we prove the following two estimates.(i)
If x∈Enc, then(3.1)w(x)φ2(1-λ)(x)∑k=0∞vn,k(x)|∫k/nx|kn-u|w-1(u)φ-2(2-λ)(u)du|≤Mn-1.(ii)
If x∈En, then(3.2)Vn((t-x)2(1+t)b-2+λ;x)≤Mn-1φ2(x)(1+x)b-2+λ.
The proof of (3.1)
In fact, (i) for k=0, since x∈Enc, we have
(3.3)w(x)φ2(1-λ)(x)vn,0(x)∫0xuw-1(u)φ-2(2-λ)(u)du=w(x)φ2(1-λ)(x)(1+x)-n∫0xuλ-a-1(1+u)b-2+λdu.
If b-2+λ≤0, we get
(3.4)w(x)φ2(1-λ)(x)(1+x)-n∫0xuλ-a-1(1+u)b-2+λdu≤Mw(x)φ2(1-λ)(x)(1+x)-nxλ-a≤Mn-1.
If b-2+λ>0, we have
(3.5)w(x)φ2(1-λ)(x)(1+x)-n∫0xuλ-a-1(1+u)b-2+λdu≤Mw(x)φ2(1-λ)(x)(1+x)-n+b-2+λ∫0xuλ-a-1du≤Mn-1.
(ii) If k≥1, since x∈Enc, we have
(3.6)w(x)φ2(1-λ)(x)∑k=1∞vn,k(x)|∫k/nx|kn-u|w-1(u)φ-2(2-λ)(u)du|≤Mw(x)φ2(1-λ)(x)∑k=1∞vn,k(x)(kn-x)φ-2(2-λ)(x)(1+kn)b|∫k/nxu-adu|≤Mw(x)φ-2(x)∑k=1∞vn,k(x)(kn-x)(1+kn)b((kn)1-a-x1-a)≤Mw(x)φ-2(x)∑k=1∞vn,k(x)(kn-x)2-a(1+kn)b≤Mn-1.
Combining (3.4), (3.5), and (3.6), we obtain (3.1).

The proof of (3.2)
If b-2+λ≤0, by (9.5.10) and (9.6.3) of [7], using the Cauchy-Schwarz inequality and the Hölder inequality, we obtain
(3.7)Vn((t-x)2(1+t)b-2+λ;x)≤(Vn((t-x)4;x))1/2(Vn((1+t)2(b-2+λ);x))1/2≤(Vn((t-x)4;x))1/2(Vn((1+t)-2;x))(2-b-λ)/2≤Mn-1φ2(x)(1+x)b-2+λ.
If b-2+λ>0, by (2.3), we get Vn((1+t)b-2+λ;x)≤M(1+x)b-2+λ, and using the Cauchy-Schwarz inequality and the Hölder inequality, we have
(3.8)Vn((t-x)2(1+t)b-2+λ;x)≤(Vn((t-x)4;x))1/2(Vn((1+t)2(b-2+λ);x))1/2≤Mn-1φ2(x)(1+x)b-2+λ.
Combining (3.7) and (3.8), we obtain (3.2).
Next, we prove Theorem 1.2. For g∈D, if x∈Enc, by (3.1), we have
(3.9)|w(x)φ2(1-λ)(x)(Vn(g;x)-g(x))|=|w(x)φ2(1-λ)(x)Vn(∫xt(t-u)g′′(u)du;x)|≤w(x)φ2(1-λ)(x)‖φ2(2-λ)g′′‖wVn(|∫xt|t-u|w-1(u)φ-2(2-λ)(u)du|;x)≤M‖φ2(2-λ)g′′‖ww(x)φ2(1-λ)(x)∑k=0∞vn,k(x)|∫k/nx|kn-u|w-1(u)φ-2(2-λ)(u)du|≤Mn-1‖φ2(2-λ)g′′‖w.
If x∈En, by (3.2), we get
(3.10)|w(x)φ2(1-λ)(x)(Vn(g;x)-g(x))|=|w(x)φ2(1-λ)(x)Vn(∫xt(t-u)g′′(u)du;x)|≤M‖φ2(2-λ)g′′‖ww(x)φ2(1-λ)(x)Vn(|∫xt|t-u|w-1(u)φ-2(2-λ)(u)du|;x)≤M‖φ2(2-λ)g′′‖w|φ-2(x)Vn((t-x)2;x)+x-2-a+λw(x)φ2(1-λ)(x)Vn((t-x)2(1+t)b-2+λ;x)|≤Mn-1‖φ2(2-λ)g′′‖w.
Therefore, for f∈Ca,b,λ0,g∈D, by Lemma 2.2 and (3.9), (3.10), and the definition of Kφλ(f;n-1)w,λ, we obtain
(3.11)|w(x)φ2(1-λ)(x)(Vn(f;x)-f(x))|≤|w(x)φ2(1-λ)(x)(Vn(f-g;x))|+|w(x)φ2(1-λ)(x)(f(x)-g(x))|+|w(x)φ2(1-λ)(x)(Vn(g;x)-g(x))|≤M‖φ2(1-λ)(f-g)‖w+|w(x)φ2(1-λ)(x)(f(x)-g(x))|+|w(x)φ2(1-λ)(x)(Vn(g;x)-g(x))|≤M{‖φ2(1-λ)(f-g)‖w+n-1‖φ2(2-λ)g′′‖w}.
Taking the infimum on the right-hand side over all g∈D, we get
(3.12)|w(x)φ2(1-λ)(x)(Vn(f;x)-f(x))|≤MKφλ(f;n-1)w,λ.
This completes the proof of Theorem 1.2.
### 3.2. Proof of Theorem 1.3
Proof.
By Theorem 1.2, we know (2)⇒(1). Now we will prove (1)⇒(2). In view of (1), we get
(3.13)‖φ2(1-λ)(Vn(f)-f)‖w≤Mn-α.
By the definition of K-functional, we may choose g∈D to satisfy
(3.14)‖φ2(1-λ)(f-g)‖w+n-1‖φ2(2-λ)g′′‖w≤2Kφλ(f;n-1)w,λ.
Using Lemmas 2.3 and 2.4, we have
(3.15)Kφλ(f;t)w,λ≤‖φ2(1-λ)(Vn(f)-f)‖w+t‖φ2(2-λ)Vn′′(f)‖w≤Mn-α+t(‖φ2(2-λ)Vn′′(f-g)‖w+‖φ2(2-λ)Vn′′(g)‖w)≤Mn-α+t(nM‖φ2(1-λ)(f-g)‖w+M‖φ2(2-λ)g′′‖w)≤Mn-α+tnM(‖φ2(1-λ)(f-g)‖w+n-1‖φ2(2-λ)g′′‖w).
Taking the infimum on the right-hand side over all g∈D, we get
(3.16)Kφλ(f;t)w,λ≤M(n-α+tn-1Kφλ(f;n-1)w,λ).
By Lemma 2.5, we get
(3.17)Kφλ(f;n-1)w,λ≤M(n-α).
Letting (n+1)-1<t≤n-1, we get
(3.18)Kφλ(f;t)w,λ≤MKφλ(f;n-1)w,λ≤M(n/(n+1))-α(n+1)-α≤M(n+1)-α≤Mtα.
This completes the proof of Theorem 1.3.
---
*Source: 101852-2011-11-03.xml* | 2011 |
# Activation of PPARγ by Rosiglitazone Does Not Negatively Impact Male Sex Steroid Hormones in Diabetic Rats
**Authors:** Mahmoud Mansour; Elaine Coleman; John Dennis; Benson Akingbemi; Dean Schwartz; Tim Braden; Robert Judd; Eric Plaisance; Laura Ken Stewart; Edward Morrison
**Journal:** PPAR Research
(2009)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2009/101857
---
## Abstract
Peroxisome proliferator-activated receptor gamma (PPARγ) activation decreased serum testosterone (T) in women with hyperthecosis and/or polycystic ovary syndrome and reduced the conversion of androgens to estradiol (E2) in female rats. This implies modulation of female sex steroid hormones by PPARγ. It is not clear if PPARγ modulates sex steroid hormones in diabetic males. Because PPARγ activation by thiazolidinedione increased insulin sensitivity in type 2 diabetes, understanding the long term impact of PPARγ activation on steroid sex hormones in males is critical. Our objective was to determine the effect of PPARγ activation on serum and intratesticular T, luteinizing hormone (LH), follicle stimulating hormone (FSH) and E2 concentrations in male Zucker diabetic fatty (ZDF) rats treated with the PPARγ agonist rosiglitazone (a thiazolidinedione). Treatment for eight weeks increased PPARγ mRNA and protein in the testis and elevated serum adiponectin, an adipokine marker for PPARγ activation. PPARγ activation did not alter serum or intratesticular T concentrations. In contrast, serum T level but not intratesticular T was reduced by diabetes. Neither diabetes nor PPARγ activation altered serum E2 or gonadotropins FSH and LH concentrations. The results suggest that activation of PPARγ by rosiglitazone has no negative impact on sex hormones in male ZDF rats.
---
## Body
## 1. Introduction
Peroxisome proliferator-activated receptors (PPARs) are a group of nuclear transcription factors which belong to the steroid receptor superfamily but are not activated by steroid hormones. Three PPAR isotypes have been identified: PPARα (NR1C1), PPARβ (NR1C2, δ, NUC-1, fatty acid-activated receptor (FAAR)), and PPARγ (NR1C3). A large number of both endogenous (natural) and exogenous (synthetic) ligands activate either a single PPAR isoform or all isoforms, albeit with different binding affinities and specificities [1]. Among the important PPARγ synthetic activators are the thiazolidinedione (TZD) drugs often used in the treatment of type 2 diabetes. These include Avandia (rosiglitazone), Actos (pioglitazone), a combination drug, Avandamet (rosiglitazone and metformin), and Rezulin (troglitazone); troglitazone was withdrawn from the market because of idiosyncratic liver toxicity. Activation of PPARγ by TZDs increases insulin sensitivity and thus improves glycemic control [2, 3].

PPARs are involved in a broad range of functions that include lipid homeostasis [2], tissue remodeling, angiogenesis, prostaglandin production [3], and steroidogenesis [4]. Additionally, PPARs regulate inflammatory pathways by transrepression of the transcriptional activity of proinflammatory transcription factors such as nuclear factor κB (NF-κB) [5]. Likewise, several lines of data implicate PPARs in the regulation of profibrotic [6–8] and oxidative stress responses in several cell types [9–11].

Support for the hypothesis that activation of PPARs, specifically PPARγ, has an impact on sex steroid hormone action and/or production comes from several TZD studies, including two studies in male subjects [4, 12–22]. A study in healthy nondiabetic men showed that rosiglitazone treatment (8 mg/d for seven days) reduced the production rate of testosterone (T) and dihydrotestosterone (DHT) [14]. Similarly, rosiglitazone treatment of obese nondiabetic Zucker rats (0.01% wt/wt food admixture, equivalent to 4 mg/kg/d, for 36 days) reduced DHT but did not alter serum T [22].

Multiple studies using ovarian and other cell culture models support a steroidogenic role for PPARγ. First, activation of PPARγ with troglitazone, a TZD insulin sensitizer and putative PPARγ agonist, inhibited aromatase cytochrome P450 activity, the enzyme critical in the conversion of androgens to estradiol (E2), in human adipose tissue [15] and in ovarian granulosa cells [20]. Similarly, activation of PPARγ by troglitazone in in vitro cultures of human and porcine granulosa cells inhibited progesterone production [4]. Troglitazone was also reported to competitively inhibit 3β-hydroxysteroid dehydrogenase (3β-HSD), the enzyme that catalyzes the conversion of pregnenolone to progesterone in the ovary [16]. Likewise, troglitazone was shown to inhibit androgen biosynthesis stimulated by combined LH and insulin in primary porcine thecal cell culture in a dose-dependent fashion [17].

In human adrenal NCI-H295R cells, an established in vitro model of steroidogenesis of the human adrenal cortex, both rosiglitazone and pioglitazone inhibited the activities of P450c17 and 3β-HSD type II, both of which are key microsomal enzymes in the biosynthesis of all steroid hormones [18].
In diabetic women with polycystic ovary syndrome (PCOS), a condition characterized by anovulatory androgen secretion, relatively high E2, and excessive LH production [23–25], treatment with the PPARγ agonists rosiglitazone or pioglitazone improved insulin resistance and decreased hyperandrogenism in multiple studies (reviewed in [26–28]).

Toxicological studies showed that phthalate esters, used as plasticizers and stabilizers in several consumer products, activate PPARγ [29], decrease key testicular steroidogenic enzymes [30], and reduce serum T production [31–34].

Although it is axiomatic that steroidogenic inhibition is a general characteristic of TZD compounds, the involvement of PPARγ and rosiglitazone in steroidogenic modulation under diabetic conditions remains unclear for several reasons. First, a study showed that the in vitro IC50 for rosiglitazone's steroidogenic inhibition is far beyond its recommended therapeutic dose [13]. Second, a number of studies showed that TZDs, including rosiglitazone, directly inhibit the 3β-HSDII and P450c17 steroidogenic enzymes independent of PPARγ [13, 18]. Third, in the aforementioned male studies PPARγ activation was not determined in parallel with sex hormone measurements. More importantly, none of the research subjects used were diabetic, whereas the steroidogenic effect of diabetes is an important component for the evaluation of TZD PPARγ activators. Finally, the treatment period used in the aforementioned male studies was short, varying between 7 and 36 days. Because of the above limitations, the objective of this study was to determine the link between relatively short term (8 weeks) activation of PPARγ and the profile of T and E2 in male Zucker diabetic fatty (ZDF) rats treated with a therapeutic dose of rosiglitazone.
## 2. Materials and Methods
### 2.1. Animals and Treatments
Male ZDF (fa/fa) rats and their age-matched lean controls (ZDF lean, fa/+ or +/+) were obtained from Charles River Laboratories (Indianapolis, Ind, USA) at 6 weeks of age. The (fa/fa) ZDF rats lack a functional leptin receptor and become hyperphagic and diabetic when fed a high-fat diet. Rats were maintained under standard housing conditions (constant temperature of 22°C, ad libitum food and water, and 12:12-hour light/dark cycles) at an AAALAC-accredited lab animal facility at the College of Veterinary Medicine, Auburn University. Rats were housed in pairs and assigned to three groups with 8 rats per group: a lean nondiabetic group (group 1), an untreated ZDF group (group 2), and a ZDF group treated with rosiglitazone (group 3); ZDF rats were randomly assigned to groups 2 and 3. Lean rats were fed regular rat chow, whereas ZDF rats in groups 2 and 3 were fed Purina 5008 modified rat chow (Purina Mills, Richmond, Ind, USA). Rosiglitazone maleate (generously provided by GlaxoSmithKline, USA) was dissolved in 0.5% carboxymethylcellulose and administered daily via oral gavage at 3 mg/kg/d per rat, starting at week 7 of age, for 8 weeks. Rats in groups 1 and 2 received the 0.5% carboxymethylcellulose vehicle. All rats were weighed, and blood glucose was monitored from the tail vein weekly using an ACCU-CHEK glucose meter (Roche Diagnostics Co., Indianapolis, Ind, USA). Diabetes was confirmed by two consecutive blood glucose measurements of >200 mg/dl. All animal procedures were approved by the Institutional Animal Care and Use Committee at Auburn University.
### 2.2. Necropsy and Tissue Collection
Rats were sacrificed by deep anesthesia with pentobarbital (50 mg/kg intraperitoneal, IP) followed by decapitation. Testes were excised and sampled for histopathology, RNA extraction, and the intratesticular T assay. Visceral epididymal fat and prostate were collected for use as positive sources for PPARγ expression in real-time PCR analysis. Tissues intended for RNA and hormone analysis were immediately frozen in liquid nitrogen and transferred to -80°C until processing. Trunk blood was collected for serum isolation and stored at -30°C prior to hormone analysis.
### 2.3. Total RNA Isolation
Total RNA was isolated using TRIzol reagent (Invitrogen-Life Technologies Inc., Carlsbad, Calif, USA), according to the manufacturer’s instructions and as described previously in our laboratory [35]. Briefly, RNA concentrations were determined at 260 nm wavelength and the ratio of 260/280 was obtained using UV spectrophotometry (DU640, Beckman Coulter Fullerton, Calif, USA). RNA samples were treated with DNase (Ambion Inc.) to remove possible genomic DNA contamination and samples with 260/280 ratio of ≥1.8 were used.
### 2.4. Real-Time PCR and Agarose Gel Electrophoresis
Real-time PCR was used to determine the expression of testicular PPARγ mRNA and to quantify changes in mRNA level. Quantitative real-time PCR analysis was performed in a 25 μL reaction mixture containing RT2 Real-Time SYBR/Fluorescein Green PCR master mix with final concentrations of 10 mM Tris-Cl, 50 mM KCl, 2.0 mM MgCl2, 0.2 mM dNTPs, and 2.5 units of HotStart Taq DNA polymerase (SuperArray Bioscience Corporation, Frederick, Md, USA). The reaction was completed with the addition of 1 μL first-strand cDNA transcribed from 2 μg total RNA, and 0.2 mM RT2 validated PCR primers for PPARγ or the GAPDH housekeeping gene (SuperArray Bioscience). Samples were run in 96-well PCR plates (Bio-Rad, Hercules, Calif, USA) in duplicate, and the results were normalized to GAPDH expression. The amplification protocol was set at 95°C for 15 minutes and 40 cycles (each at 95°C for 30 seconds, 55°C for 30 seconds, and 72°C for 30 seconds), followed by a melting curve determination between 55°C and 95°C to ensure detection of a single PCR product. Real-time PCR products at the end of each assay were combined for each treatment group and stored at -30°C for viewing by agarose gel electrophoresis. Verification of the PCR product was confirmed by determination of the expected band size and by sequence analysis as we described previously [35]. The resulting sequences were matched with previously published rat sequences in GenBank (accession number NM_013124 for PPARγ) using Chromas 2.31 software (Technelysium Pty Ltd, Tewantin Qld 4565, Australia). RNA templates from white adipose tissue and prostate were used to generate standard curves for PPARγ and GAPDH using 10-fold dilutions. Curves were made by plotting the threshold cycle (Ct value) for each dilution versus the log of the dilution factor used. Relative differences in expression (fold increase or decrease) were calculated as described previously [36]. Pearson correlation coefficients (r values) for standard curves were between 0.98 and 0.99, and amplification efficiency was considered 100%.
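For readers unfamiliar with relative quantification, the fold-change calculation referenced above follows the standard 2^(−ΔΔCt) scheme. The sketch below is illustrative only: the Ct values are hypothetical, and it assumes (as stated above) GAPDH as the reference gene and approximately 100% amplification efficiency.

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the delta-delta-Ct method (assumes ~100% efficiency)."""
    delta_ct_treated = ct_target_treated - ct_ref_treated   # normalize to GAPDH
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Hypothetical Ct values (PPARgamma vs. GAPDH); a lower Ct means more template.
print(fold_change(24.1, 18.0, 26.3, 18.1))  # ~4.3-fold increase
```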
### 2.5. Immunohistochemistry (IHC)
Immunolocalization of PPARγ by IHC was performed as described previously by our laboratory [35]. Briefly, cross sections from testes were fixed in 4% paraformaldehyde for 48 hours, embedded in paraffin, and cut at 5 μm thickness. Sections were also fixed in Bouin's fixative (BioSciences) for staining with hematoxylin and eosin. Mounted sections were deparaffinized in Hemo-D (Scientific Safety Products) and hydrated in distilled water. Antigen retrieval was performed by heating in citrate buffer. Sections were incubated in 5% normal goat serum containing 2.5% BSA to reduce nonspecific staining. PPARγ was detected with mouse anti-PPARγ monoclonal antibody (Santa Cruz: sc7273; diluted 1:80 in blocker) and the antibody-antigen complexes were visualized with Alexa 488-conjugated goat antimouse IgG (Molecular Probes). Sections were examined with a Nikon TE2000E microscope and digital images were made with an attached Retiga EX CCD digital camera (Q Imaging, Burnaby, BC, Canada).
### 2.6. Hormonal Assays
Total serum T (intraassay coefficient of variation (CV) 4.3%), E2 (intraassay CV 2.3%), and intratesticular T (intraassay CV 6%) levels were determined by radioimmunoassay (RIA) using kits from Siemens Medical Solutions Diagnostics (Los Angeles, Calif, USA) according to the manufacturer’s instructions. For intratesticular T, 100 mg of testicular tissue was homogenized in 500 μL Tris-PBS buffer (0.01 M Tris-HCl; pH 7.4) in plastic tubes. An additional 500 μL was added, and the homogenate was mixed with eight volumes of diethyl ether. The mixture was then vigorously vortexed and the aqueous phase quickly frozen in a dry ice bath (70% ethanol; dry ice). Extracts were subsequently air dried (warm bath at approximately 50°C under the hood) and resuspended in 500 μL PBS buffer. 50 μL of a 1:10 diluted sample was used in the COAT-A-COUNT radioimmunoassay and counted in a Cobra D5005 gamma counter (Packard Instrument Co., Downers Grove, Ill, USA). All samples were quantified in duplicate in a single assay. FSH and LH were determined by radioimmunoassay at the Endocrine Laboratory, Colorado State University, Fort Collins.
### 2.7. Serum Adiponectin
Total serum adiponectin concentration was assayed using a sandwich ELISA method (Millipore Corporation, Billerica, Mass, USA) per manufacturer’s instructions. The intraassay CV was 1.1% to 1.3%.
### 2.8. Statistical Analysis
Analysis of real-time PCR data was performed using a modification of the delta-delta Ct (ΔΔCt) method. ΔCt values calculated from real-time PCR data were subjected to analyses of variance using SigmaStat statistical software (Jandel Scientific, Chicago, Ill, USA). Hormonal data were subjected to analysis of variance. Treatment groups with means significantly different (P<.05) from controls were identified using Dunnett's test. When data were not distributed normally, or heterogeneity of variance was identified, analyses were performed on transformed or ranked data.
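As a rough sketch of the comparison-to-control step (not the authors' SigmaStat workflow), recent SciPy releases expose Dunnett's test directly; the group values below are simulated placeholders, and SciPy ≥ 1.11 is assumed.

```python
import numpy as np
from scipy import stats  # scipy >= 1.11 provides stats.dunnett

rng = np.random.default_rng(1)
# Hypothetical hormone measurements, n = 8 rats per group as in the study design.
lean     = rng.normal(4.0, 0.8, 8)  # group 1, used as the control here
zdf      = rng.normal(2.9, 0.8, 8)  # group 2
zdf_rosi = rng.normal(3.0, 0.8, 8)  # group 3

res = stats.dunnett(zdf, zdf_rosi, control=lean)
print(res.pvalue)  # one p-value per treated group versus the control
```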
## 3. Results
### 3.1. Blood Glucose and Body Weight
The ZDF rats fed the Purina 5008 high fat diet in groups 2 and 3 became diabetic by week 7 of age. By week 15 of age, the mean blood glucose concentration, determined shortly before necropsy, was >600 mg/dl (above the glucose meter range) in ZDF-untreated controls (group 2) versus 123±1.7 mg/dl in lean nondiabetic controls (group 1) and 163.6±17.7 mg/dl in ZDF rats treated with rosiglitazone (group 3). Rats in all three experimental groups gained weight over time irrespective of treatment. The mean body weight of ZDF-untreated rats at week 15 was not significantly different from that of lean nondiabetic controls (382.75±10.94 versus 365±8.2 g, respectively; P>.05). In contrast, the mean body weight of ZDF-treated rats (group 3) was more than 40% above that of ZDF-untreated rats at week 15 (638.6±14.67 versus 382.75±10.9 g, P<.001).
### 3.2. Serum Adiponectin
Serum adiponectin was determined to confirm PPARγ activation [37]. As expected, treatment with rosiglitazone significantly increased serum adiponectin in ZDF-treated compared with ZDF-untreated rats (46.33±2.83 versus 12.71±0.69 μg/mL, P<.001) or lean nondiabetic untreated rats (46.33±2.83 versus 17.13±0.95 μg/mL, P<.001). In contrast, adiponectin was significantly reduced by diabetes in ZDF-untreated compared with lean nondiabetic rats (12.71±0.69 versus 17.13±0.95 μg/mL, P<.05).
### 3.3. Real-Time PCR, Agarose Gel Electrophoresis and IHC
Real-time PCR data showed that PPARγ mRNA was expressed in the testis and was upregulated more than twofold by rosiglitazone treatment (Figures 1(a) and 1(b)). As shown in the IHC data (Figure 1(c)), PPARγ protein was specifically localized in Leydig cells, located in the interstitial space between the seminiferous tubules, and in spermatocytes inside the basement membrane of the seminiferous tubules.
Figure 1
(a) Real-time PCR analysis of testicular PPARγ mRNA levels in lean nondiabetic controls (Lean C), Zucker diabetic fatty (ZDF) untreated (Diabetic U), and ZDF rats treated with rosiglitazone (Diabetic T). Data are expressed as mean ± SE. n=8 per group, *P<.05. (b) Agarose gel (2%) showing the real-time PCR products generated in (a). Lane 1, DNA markers; lanes 2-3, RNA templates (instead of cDNA) from fat (-VF) and prostate (-VPr) as negative controls; lanes 4-5, fat from untreated rats (FU) and prostate (Pr) as positive controls; lanes 6–9, testicular PCR products from RNA negative controls (-VT), diabetic treated (DT), diabetic untreated (DU), and lean untreated (LU) rats; lane 10, fat from ZDF-treated rats (FT). (c) Representative IHC of PPARγ protein in the testis of DT (panel 2a with insert box magnified in 2b), DU (panel 3), and LU rats (panel 4). Panel 1, -VT = negative control testis section (minus primary antibody). Arrows indicate PPARγ localization in spermatogonia and in Leydig cells.
### 3.4. Testicular Morphology and Histopathology
Detachment and disorganization of germ cells were evident in ZDF-untreated rats, but there were no significant changes in the overall morphology of the seminiferous tubules (Figure 2). Likewise, treatment did not alter seminiferous tubule morphology but desirably reversed germ cell sloughing (ZDF-treated in panel 3 versus ZDF-untreated in panel 2).
Figure 2
Representative photomicrographs of hematoxylin-eosin-stained sections of testis of nondiabetic Zucker lean control (Lean C, panel 1), Zucker diabetic fatty (ZDF) untreated (Diabetic U, panel 2), and ZDF rats treated with rosiglitazone (Diabetic T, panel 3). Arrows indicate germ cells strewn in the lumen of seminiferous tubules.
### 3.5. Serum and Intratesticular T
Total serum and intratesticular T were not significantly altered by PPARγ activation in the testis (ZDF-treated versus ZDF-untreated, P>.05) (Figures 3 and 4). As expected, total serum T was significantly lowered by diabetes (ZDF-untreated or treated versus lean nondiabetic, P<.05) (Figure 3). Surprisingly, the significant reduction in total serum T in diabetic rats was not associated with a corresponding significant reduction in intratesticular T production (Figure 4). A trend toward lower intratesticular T in ZDF versus lean rats was apparent irrespective of rosiglitazone treatment.
Figure 3
Serum T level in Zucker lean nondiabetic controls (Lean C), Zucker diabetic fatty- (ZDF-) untreated (Diabetic U), and ZDF rats treated with rosiglitazone (Diabetic T). Data are expressed as means ± SE. n=8 per group. *P<.05.
Figure 4
Intratesticular T level in Zucker lean nondiabetic controls (Lean C), Zucker diabetic fatty- (ZDF-) untreated (Diabetic U), and ZDF rats treated with rosiglitazone (Diabetic T). n=8 per group. Data are expressed as means ± SE.
### 3.6. Serum E2
Neither diabetes nor PPARγ activation with rosiglitazone adversely altered serum E2 (Figure 5).
Figure 5
Serum E2 level in Zucker lean nondiabetic controls (Lean C), Zucker diabetic fatty- (ZDF-) untreated (Diabetic U), and ZDF rats treated with rosiglitazone (Diabetic T). n=6 per group. Data are expressed as means ± SE.
### 3.7. Serum FSH and LH
Serum FSH and LH were not significantly altered by activation of PPARγ with rosiglitazone or by diabetes (P>.05). The ZDF-untreated rats, however, showed a trend toward lower FSH and LH (ZDF-untreated versus lean nondiabetic), and treatment with rosiglitazone reversed this tendency (ZDF-treated versus ZDF-untreated rats) (Figures 6(a) and 6(b)).
Figure 6
Serum FSH (a) and LH (b) in Zucker lean nondiabetic controls (Lean C), Zucker diabetic fatty- (ZDF-) untreated (Diabetic U), and ZDF rats treated with rosiglitazone (Diabetic T). n=7-8 per group. Data are expressed as means ± SE.
## 4. Discussion
Rosiglitazone and other TZDs were shown to decrease hyperandrogenemia in women with PCOs and to repress major steroidogenic enzymes (reviewed in [26]). Although the in vitro steroidogenic repression potency of rosiglitazone was ranked intermediate between troglitazone and pioglitazone [13], its in vivo impact on sex steroids in diabetic subjects was unknown. Specifically, information on how changes in serum steroids in male diabetic subjects relate to changes in PPARγ activity was lacking. This study documents the link between testicular PPARγ activation with the antidiabetic drug rosiglitazone and the male sex hormone profile under diabetic conditions. Daily oral gavage of ZDF rats with rosiglitazone activated PPARγ in the testis and normalized the testicular germ cell derangement seen in the diabetic testis. The stimulation of systemic and testicular PPARγ activity by rosiglitazone, however, did not significantly alter the concentrations of the gonadotropins LH and FSH, the sex steroid T (both total serum and intratesticular), or serum E2.

Systemic activation of PPARγ in this study was evidenced by the reversal of hyperglycemia, increased serum adipocytokine adiponectin, and weight gain in ZDF-treated rats. These effects are considered biological signatures for rosiglitazone-induced PPARγ activation, typically in body fat depots [38, 39]. The specific activation of testicular PPARγ shown here is consistent with other unrelated studies that showed the presence of this receptor in rat testis [40] and its modulation by synthetic chemicals such as phthalate esters [34, 41, 42]. Because PPARγ can also be activated by endogenous ligands such as polyunsaturated fatty acids from dietary sources and by metabolites of arachidonic acid [43], a contribution of these ligands to the observed upregulation of testicular PPARγ mRNA and protein in ZDF-treated rats could not be ruled out.

Compared to nondiabetic lean rats, ZDF-untreated rats showed consistent structural disorganization of the germinal epithelium, evidenced by abnormal accumulation of germ cells in the lumen of the seminiferous tubules. This effect likely resulted from diabetes-induced oxidative stress previously recognized in diabetic rat testis [44]. Interestingly, the positive staining of germ cells for PPARγ protein in the IHC data suggests that rosiglitazone crosses the blood-testis barrier to enter the adluminal compartment of the seminiferous tubules, resulting in activation of PPARγ and reversal of germ cell sloughing. This novel effect is possibly mediated by the reported functional property of PPARγ to ameliorate oxidative stress [11].

As previously known [45], diabetes in this study significantly lowered serum T. Rosiglitazone treatment, however, neither restored nor reduced serum T in ZDF-treated rats. Paradoxically, the diabetes-induced reduction in serum T in ZDF-treated and untreated rats was not associated with a corresponding significant drop in intratesticular T production or androgen receptor expression (data not shown). Although unexpected, this finding was consistent with the nonsignificant changes observed in serum LH concentration. The unaltered intratesticular T production, contrasted with the significantly lowered serum T concentration, may reflect a reduction in sex hormone binding globulin (SHBG), which is essential for T transport in blood [46].
Reagents for the determination of rat SHBG are not currently available, and attempts to quantify rat SHBG using a human ELISA kit were not successful because of reagent incompatibility.

Rosiglitazone treatment reversed the obesity-induced reduction of the T intermediate steroid hormone precursor 17-hydroxyprogesterone in the genetically related obese but nondiabetic Zucker rats [22]. In the same study, however, rosiglitazone treatment did not alter total serum T. Compared with the above findings, the data in our study allow for differentiation of the steroidogenic effects of diabetes (ZDF-untreated versus lean nondiabetic rats) from those of PPARγ activation (ZDF-treated versus ZDF-untreated rats) but do not permit separation of the effects of obesity from those of diabetes. The lack of significant change in total serum T in the above study was nevertheless consistent with our findings. In both the aforementioned study and ours, a tendency toward lowered serum T production was observed in ZDF and Zucker obese rats versus lean littermates irrespective of treatment. A normal transient T reduction was reported in 2- to 4-month-old Zucker obese rats [47]. The age of the rats at the point of serum collection in this study was 3.75 months, and thus the lowered T observed in the ZDF versus lean rats could be a reflection of the above observation in these two genetically related rat models.

Studies that support a steroidogenic regulatory role for PPARγ in women treated with TZDs and in in vitro ovarian cell culture models have produced contrasting results, as outlined in several reviews [1, 21, 26, 27]. While some in vitro studies showed that rosiglitazone and other TZDs such as troglitazone could inhibit steroidogenic enzymes independent of PPARγ activation [4, 13, 16–18, 20], an inhibitory effect of rosiglitazone was not reflected in total T and E2 concentrations in ZDF-treated rats in our study. In summary, our data show that PPARγ activation with rosiglitazone for eight weeks had no negative impact on total sex hormone concentrations in diabetic male rats. This finding is firmly in line with studies that showed strong TZD-PPARγ activation [48, 49] but weaker in vitro steroidogenic inhibitory effects of rosiglitazone [13].
## Abstract
Peroxisome proliferator-activated receptor gamma (PPARγ) activation decreased serum testosterone (T) in women with hyperthecosis and/or polycystic ovary syndrome and reduced the conversion of androgens to estradiol (E2) in female rats. This implies modulation of female sex steroid hormones by PPARγ. It is not clear if PPARγ modulates sex steroid hormones in diabetic males. Because PPARγ activation by thiazolidinedione increased insulin sensitivity in type 2 diabetes, understanding the long term impact of PPARγ activation on steroid sex hormones in males is critical. Our objective was to determine the effect of PPARγ activation on serum and intratesticular T, luteinizing hormone (LH), follicle stimulating hormone (FSH) and E2 concentrations in male Zucker diabetic fatty (ZDF) rats treated with the PPARγ agonist rosiglitazone (a thiazolidinedione). Treatment for eight weeks increased PPARγ mRNA and protein in the testis and elevated serum adiponectin, an adipokine marker for PPARγ activation. PPARγ activation did not alter serum or intratesticular T concentrations. In contrast, serum T level but not intratesticular T was reduced by diabetes. Neither diabetes nor PPARγ activation altered serum E2 or gonadotropins FSH and LH concentrations. The results suggest that activation of PPARγ by rosiglitazone has no negative impact on sex hormones in male ZDF rats.
---
## Body
## 1. Introduction
Peroxisome proliferator-activated receptors (PPARs) are a group of nuclear transcription factors which belong to the steroid receptor superfamily but are not activated by steroid hormones. Three PPAR isotypes have been identified and include PPARα (NR1C1), PPARβ (NR1C2, δ, NUC-1, fatty acid-activated receptor (FAAR)), and PPARγ (NR1C3). A large number of both endogenous (natural) and exogenous (synthetic) ligands activate either a single PPAR isoform or all isoforms, albeit with different binding affinities and specificities [1]. Among the important PPARγsynthetic activators are the thiazolidinediones (TZDs) drugs often used in the treatment of type 2 diabetes. These include Avandia (rosiglitazone), Actos (pioglitazone), a combination drug, Avandamet (rosiglitazone and metformin), and Rezulin (troglitazone). Troglitazone was withdrawn from the market because of idiosyncratic liver toxicity. Activation of PPARγby TZDs increases insulin sensitivity and thus improves body glycemic control [2, 3].PPARs are involved in a broad range of functions that include lipid homeostasis [2], tissue remodeling, angiogenesis, prostaglandin production [3], and steroidogenesis [4]. Additionally, PPARs also regulate inflammatory pathway by transrepression of transcription activity of proinflammatory transcription factors such as nuclear factor κB (NF-κB) [5]. Likewise, several data implicate PPARs in regulation of profibrotic [6–8] and oxidative stress responses in several cell types [9–11].Support for the hypothesis that activation of PPARs, specifically PPARγ, has an impact on sex steroid hormones action and/or production comes from several TZDs studies including two studies in male subjects [4, 12–22]. A study in healthy nondiabetic men showed that rosiglitazone treatment (8 mg/d for seven days) reduced the production rate of testosterone (T) and dihydrotestosterone (DHT) [14]. Similarly, rosiglitazone treatment of obese nondiabetic Zucker rats (0.01% wt/wt food admixture equivalent to 4 mg/kg/d for 36 days) reduced DHT but did not alter serum T [22].Multiple studies using ovarian and other cell culture models support a steroidogenic role for PPARγ. First, activation of PPARγwith troglitazone, a TZD insulin sensitizer and putative PPARγagonist, inhibited aromatase cytochrome P450 activity, the enzyme critical in the conversion of androgens to estradiol (E2), in human adipose tissue [15] and in ovarian granulosa cells [20]. Similarly, activation of PPARγ by troglitazone in vitro cultures of human and porcine granulosa cells inhibited progesterone production [4]. Troglitazone was also reported to competitively inhibit 3β-hydroxysteriod dehydrogenase (3β-HSD), the enzyme that catalyzes the conversion of pregnenolone to progesterone in the ovary [16]. Likewise, troglitazone was shown to inhibit androgen biosynthesis stimulated by combined LH and insulin in primary porcine thecal cell culture in a dose-dependent fashion [17].In human adrenal NCI-H295R cells, an established in vitro model of steroidogenesis of the human adrenal cortex, both rosiglitazone and pioglitazone inhibited the activities of P450c17 and 3β-HSD type II both of which are key microsomal enzymes in the biosynthesis of all steroid hormones [18]. 
In diabetic women with polycystic ovarian syndromes (PCOs), a condition characterized by anovulatory androgen secretion, relatively high E2, and excessive LH production [23–25], treatment with the PPARγ agonists rosiglitazone or pioglitazone improved insulin resistance and decreased hyperandrogenism in multiple studies (reviewed in [26–28]).Toxicological studies showed that phthalate esters, used as plasticizers and stabilizers in several consumer products, activate PPARγ[29], decrease key testicular steroidogenic enzymes [30] and reduce serum T production [31–34].Although it is axiomatic that steroidogenic inhibition is a general characteristic of TZD compounds, involvement of PPARγ and rosiglitazone in steroidogenic modulation under diabetic conditions remains unclear for several reasons. First, a study showed that the in vitro IC50 for rosiglitazone steroidogenic inhibition is far beyond its recommended therapeutic dose [13]. Second, a number of studies showed that TZDs, including rosiglitazone, directly inhibit 3β-HSDII and P450c17 steroidogenic enzymes independent of PPARγ [13, 18]. Third, in the aforementioned male studies PPARγ activation was not determined in parallel with sex hormone measurements. More importantly none of the research subjects used was diabetic where the steroidogenic effect of diabetes is an important component for evaluation of TZDs-PPARγ activators. Finally, the treatment period used in the aforementioned male studies was short and varies between 7 and 36 days. Because of the above limitations, the objective of this study was to determine the link between relatively short term (8 weeks) activation of PPARγ and the profile of T and E2 in male Zucker diabetic fatty rats (ZDFs) treated with a therapeutic dose of rosiglitazone.
## 2. Materials and Methods
### 2.1. Animals and Treatments
Male ZDF (fa/fa) rats and their age-matched lean controls (ZDF lean, fa/+ or +/+) were obtained from Charles River Laboratories (Indianapolis, Ind, USA) at 6 weeks of age. The (fa/fa) ZDF rats lack a functional leptin receptor and become hyperphagic and diabetic when fed a high fat diet. Rats were maintained under standard housing conditions (constant temperature of 22°C, ad libitum food and water, and 12:12 hours light/dark cycles) at an AAALAC-accredited lab animal facility at the College of Veterinary Medicine, Auburn University. Rats were housed in pairs and assigned to three groups with 8 rats per group. Lean nondiabetic group (group 1); ZDF rats randomly assigned to ZDF untreated group (group 2) and ZDF group treated with rosiglitazone (group 3). Lean rats were fed regular rat chow whereas ZDF rats in group 2 and 3 were fed Purina 5008 modified rat chow (Purina Mills, Richmond, Ind, USA). Rosiglitazone maleate (generously provided by GlaxoSmithKline, USA) was dissolved in 0.5% carboxymethylcelluose and administered daily via oral gavage at 3 mg/kg/d/rat starting at week 7 of age for 8 weeks. Rats in groups 1 and 2 received 0.5% carboxymethylcelluose vehicle. All rats were weighed and blood glucose was monitored from the tail vein weekly using an ACCU-CHEK glucose meter (Roche Diagnostics Co. Indianapolis, Ind, USA). Diabetes was confirmed by two consecutive measurements of blood glucose of >200 mg/dl. All animal procedures were approved by the Institutional Animal Care and Use Committee at Auburn University.
### 2.2. Necropsy and Tissue Collection
Rats were sacrificed by deep anesthesia with pentobarbital (50 mg/kg intraperitoneal, IP) followed with decapitation. Testes were excised, and sampled for histopathology, RNA extraction, and intratesticular T assay. Visceral epididymal fat and prostate were collected for use as positive sources for PPARγ expression in real-time PCR analysis. Tissues intended for RNA and hormone analysis were immediately frozen in liquid nitrogen and transferred to -80°C until processing. Trunk blood was collected for serum isolation and stored at -30°C prior to hormone analysis.
### 2.3. Total RNA Isolation
Total RNA was isolated using TRIzol reagent (Invitrogen-Life Technologies Inc., Carlsbad, Calif, USA), according to the manufacturer’s instructions and as described previously in our laboratory [35]. Briefly, RNA concentrations were determined at 260 nm wavelength and the ratio of 260/280 was obtained using UV spectrophotometry (DU640, Beckman Coulter Fullerton, Calif, USA). RNA samples were treated with DNase (Ambion Inc.) to remove possible genomic DNA contamination and samples with 260/280 ratio of ≥1.8 were used.
### 2.4. Real-Time PCR and Agarose Gel Electrophoresis
Real-time PCR was used to determine expression of testicular PPARγ mRNA and to quantify changes in mRNA level. Quantitative real-time PCR analysis was performed in 25 μL reaction mixture containing RT2 Real-Time SYBR/Fluorescein Green PCR master mix with final concentrations of 10 mM Tris-Cl, 50 mM KCL, 2.0 mM MgCl2, 0.2 mM dNTPs, and 2.5 units of HotStart Taq DNA polymerase (Super Array Bioscience Corporation, Frederic, Md, USA). The reaction was completed with addition of 1 μL first strand cDNA transcribed from 2 μg total RNA, and 0.2 mM RT2 validated PCR primers for PPARγ or GAPDH house keeping gene (Super Array Bioscience). Samples were run in 96-well PCR plates (Bio-Rad, Hercules, Calif, USA) in duplicates, and the results were normalized to GAPDH expression. The amplification protocol was set at 95°C for 15 minutes, and 40 cycles each at (95°C for 30 seconds, 55°C for 30 seconds, and 72°C for 30 seconds) followed by a melting curve determination between 55°C and 95°C to ensure detection of a single PCR product. Real-time PCR products at the end of each assay were combined for each treatment group and stored at -30°C for viewing on agarose gel electrophoresis. Verification of PCR product was confirmed by determination of expected band size and sequence analysis as we described previously [35]. The resulting sequences were matched with previously published rat sequences in Genbank (accession number NM_013124 for PPARγ) using Chromas 2.31 software (Technelysium Pty ltd, Tewantin Qld 4565, Australia). RNA templates from white adipose tissue and prostate were used to generate standard curves for PPARγ and GAPDH using 10-fold dilutions. Curves were made by plotting threshold cycle (Ct value) for each dilution versus the log of the dilution factor used. Relative differences in expression (fold increase or decrease) were calculated as described previously [36]. Pearson correlation coefficients (r values) for standard curves were between 0.98 and 0.99, and amplification efficiency was considered 100%.
### 2.5. Immunohistochemistry (IHC)
Immunolocalization of PPARγ by IHC was performed as described previously by our laboratory [35]. Briefly, cross sections from testes were fixed in 4% paraformaldehyde for 48 hours, embedded in paraffin, and cut at 5 μm thickness. Sections were also fixed in Bouin's fixative (BioSciences) for staining with hematoxylin and eosin. Mounted sections were deparaffinized in Hemo-D (Scientific Safety Products) and hydrated in distilled water. Antigen retrieval was performed by heating in citrate buffer. Sections were incubated in 5% normal goat serum containing 2.5% BSA to reduce nonspecific staining. PPARγ was detected with mouse anti-PPARγ monoclonal antibody (Santa Cruz: sc7273; diluted 1:80 in blocker) and the antibody-antigen complexes were visualized with Alexa 488-conjugated goat antimouse IgG (Molecular Probes). Sections were examined with a Nikon TE2000E microscope and digital images were made with an attached Retiga EX CCD digital camera (Q Imaging, Burnaby, BC, Canada).
### 2.6. Hormonal Assays
Total serum T (intraassay coefficient of variation (CV) was 4.3%), E2 (intraassay CV was 2.3%), and intratesticular (intraassay CV was 6%) level were determined by radioimmunoassay (RIA) using kits from Siemens Medical Solutions Diagnostics (Los Angles, Calif, USA) according to manufacturer’s instructions. For intratesticular T, 100 mg of testicular tissue was homogenized in 500μL Tris-PBS buffer (0.01 M Tris-HCl; pH 7.4) in plastic tubes. An additional 500 μL was added and the homogenate was mixed with eight volumes of diethyl ether. The mixture was then vigorously vortexed and the aqueous phase quickly frozen in a dry ice bath (70% ethanol; dry ice). Extracts were subsequently air dried (warm bath at approximately 50°C under the hood) and samples were subsequently resuspended in 500 μL PBS-buffer. 50 μL of 1:10 diluted sample were used in the COAT-A-COUNT radioimmunoassay and counted in a Cobra D5005-gamma counter (Packard Instrument Co., Downers Grove, Il, USA). All samples were quantified in duplicates in a single assay. FSH and LH were determined by radioimmunoassay at the Endocrine Laboratory, Fort Collins, Colorado State University.
### 2.7. Serum Adiponectin
Total serum adiponectin concentration was assayed using a sandwich ELISA method (Millipore Corporation, Billerica, Mass, USA) per manufacturer’s instructions. The intraassay CV was 1.1% to 1.3%.
### 2.8. Statistical Analysis
Analysis of real-time PCR data was performed using a modification of the delta deltaCt method (ΔΔCt). ΔCt calculated from real-time PCR data were subjected to analyses of variance using Sigma Stat statistical software (Jandel Scientific, Chicago, IL). Hormonal data were subjected to analysis of variance. Treatment groups with means significantly different (P<.05) from controls were identified using Dunnett's test. When data were not distributed normally, or heterogeneity of variance was identified, analyses were performed on transformed data or ranked data.
## 2.1. Animals and Treatments
Male ZDF (fa/fa) rats and their age-matched lean controls (ZDF lean, fa/+ or +/+) were obtained from Charles River Laboratories (Indianapolis, Ind, USA) at 6 weeks of age. The (fa/fa) ZDF rats lack a functional leptin receptor and become hyperphagic and diabetic when fed a high fat diet. Rats were maintained under standard housing conditions (constant temperature of 22°C, ad libitum food and water, and 12:12 hours light/dark cycles) at an AAALAC-accredited lab animal facility at the College of Veterinary Medicine, Auburn University. Rats were housed in pairs and assigned to three groups with 8 rats per group. Lean nondiabetic group (group 1); ZDF rats randomly assigned to ZDF untreated group (group 2) and ZDF group treated with rosiglitazone (group 3). Lean rats were fed regular rat chow whereas ZDF rats in group 2 and 3 were fed Purina 5008 modified rat chow (Purina Mills, Richmond, Ind, USA). Rosiglitazone maleate (generously provided by GlaxoSmithKline, USA) was dissolved in 0.5% carboxymethylcelluose and administered daily via oral gavage at 3 mg/kg/d/rat starting at week 7 of age for 8 weeks. Rats in groups 1 and 2 received 0.5% carboxymethylcelluose vehicle. All rats were weighed and blood glucose was monitored from the tail vein weekly using an ACCU-CHEK glucose meter (Roche Diagnostics Co. Indianapolis, Ind, USA). Diabetes was confirmed by two consecutive measurements of blood glucose of >200 mg/dl. All animal procedures were approved by the Institutional Animal Care and Use Committee at Auburn University.
## 2.2. Necropsy and Tissue Collection
Rats were sacrificed by deep anesthesia with pentobarbital (50 mg/kg intraperitoneal, IP) followed with decapitation. Testes were excised, and sampled for histopathology, RNA extraction, and intratesticular T assay. Visceral epididymal fat and prostate were collected for use as positive sources for PPARγ expression in real-time PCR analysis. Tissues intended for RNA and hormone analysis were immediately frozen in liquid nitrogen and transferred to -80°C until processing. Trunk blood was collected for serum isolation and stored at -30°C prior to hormone analysis.
## 2.3. Total RNA Isolation
Total RNA was isolated using TRIzol reagent (Invitrogen-Life Technologies Inc., Carlsbad, Calif, USA), according to the manufacturer’s instructions and as described previously in our laboratory [35]. Briefly, RNA concentrations were determined at 260 nm wavelength and the ratio of 260/280 was obtained using UV spectrophotometry (DU640, Beckman Coulter Fullerton, Calif, USA). RNA samples were treated with DNase (Ambion Inc.) to remove possible genomic DNA contamination and samples with 260/280 ratio of ≥1.8 were used.
## 2.4. Real-Time PCR and Agarose Gel Electrophoresis
Real-time PCR was used to determine expression of testicular PPARγ mRNA and to quantify changes in mRNA level. Quantitative real-time PCR analysis was performed in 25 μL reaction mixture containing RT2 Real-Time SYBR/Fluorescein Green PCR master mix with final concentrations of 10 mM Tris-Cl, 50 mM KCL, 2.0 mM MgCl2, 0.2 mM dNTPs, and 2.5 units of HotStart Taq DNA polymerase (Super Array Bioscience Corporation, Frederic, Md, USA). The reaction was completed with addition of 1 μL first strand cDNA transcribed from 2 μg total RNA, and 0.2 mM RT2 validated PCR primers for PPARγ or GAPDH house keeping gene (Super Array Bioscience). Samples were run in 96-well PCR plates (Bio-Rad, Hercules, Calif, USA) in duplicates, and the results were normalized to GAPDH expression. The amplification protocol was set at 95°C for 15 minutes, and 40 cycles each at (95°C for 30 seconds, 55°C for 30 seconds, and 72°C for 30 seconds) followed by a melting curve determination between 55°C and 95°C to ensure detection of a single PCR product. Real-time PCR products at the end of each assay were combined for each treatment group and stored at -30°C for viewing on agarose gel electrophoresis. Verification of PCR product was confirmed by determination of expected band size and sequence analysis as we described previously [35]. The resulting sequences were matched with previously published rat sequences in Genbank (accession number NM_013124 for PPARγ) using Chromas 2.31 software (Technelysium Pty ltd, Tewantin Qld 4565, Australia). RNA templates from white adipose tissue and prostate were used to generate standard curves for PPARγ and GAPDH using 10-fold dilutions. Curves were made by plotting threshold cycle (Ct value) for each dilution versus the log of the dilution factor used. Relative differences in expression (fold increase or decrease) were calculated as described previously [36]. Pearson correlation coefficients (r values) for standard curves were between 0.98 and 0.99, and amplification efficiency was considered 100%.
## 2.5. Immunohistochemistry (IHC)
Immunolocalization of PPARγ by IHC was performed as described previously by our laboratory [35]. Briefly, cross sections from testes were fixed in 4% paraformaldehyde for 48 hours, embedded in paraffin, and cut at 5 μm thickness. Sections were also fixed in Bouin's fixative (BioSciences) for staining with hematoxylin and eosin. Mounted sections were deparaffinized in Hemo-D (Scientific Safety Products) and hydrated in distilled water. Antigen retrieval was performed by heating in citrate buffer. Sections were incubated in 5% normal goat serum containing 2.5% BSA to reduce nonspecific staining. PPARγ was detected with mouse anti-PPARγ monoclonal antibody (Santa Cruz: sc7273; diluted 1:80 in blocker) and the antibody-antigen complexes were visualized with Alexa 488-conjugated goat antimouse IgG (Molecular Probes). Sections were examined with a Nikon TE2000E microscope and digital images were made with an attached Retiga EX CCD digital camera (Q Imaging, Burnaby, BC, Canada).
## 2.6. Hormonal Assays
Total serum T (intraassay coefficient of variation (CV) was 4.3%), E2 (intraassay CV was 2.3%), and intratesticular (intraassay CV was 6%) level were determined by radioimmunoassay (RIA) using kits from Siemens Medical Solutions Diagnostics (Los Angles, Calif, USA) according to manufacturer’s instructions. For intratesticular T, 100 mg of testicular tissue was homogenized in 500μL Tris-PBS buffer (0.01 M Tris-HCl; pH 7.4) in plastic tubes. An additional 500 μL was added and the homogenate was mixed with eight volumes of diethyl ether. The mixture was then vigorously vortexed and the aqueous phase quickly frozen in a dry ice bath (70% ethanol; dry ice). Extracts were subsequently air dried (warm bath at approximately 50°C under the hood) and samples were subsequently resuspended in 500 μL PBS-buffer. 50 μL of 1:10 diluted sample were used in the COAT-A-COUNT radioimmunoassay and counted in a Cobra D5005-gamma counter (Packard Instrument Co., Downers Grove, Il, USA). All samples were quantified in duplicates in a single assay. FSH and LH were determined by radioimmunoassay at the Endocrine Laboratory, Fort Collins, Colorado State University.
## 2.7. Serum Adiponectin
Total serum adiponectin concentration was assayed using a sandwich ELISA method (Millipore Corporation, Billerica, Mass, USA) per manufacturer’s instructions. The intraassay CV was 1.1% to 1.3%.
## 2.8. Statistical Analysis
Analysis of real-time PCR data was performed using a modification of the delta deltaCt method (ΔΔCt). ΔCt calculated from real-time PCR data were subjected to analyses of variance using Sigma Stat statistical software (Jandel Scientific, Chicago, IL). Hormonal data were subjected to analysis of variance. Treatment groups with means significantly different (P<.05) from controls were identified using Dunnett's test. When data were not distributed normally, or heterogeneity of variance was identified, analyses were performed on transformed data or ranked data.
## 3. Results
### 3.1. Blood Glucose and Body Weight
The ZDF rats fed Purina 5008 high fat diet in groups 2 and 3 became diabetic by week 7 of age. By week 15 of age the mean blood glucose concentration, determined shortly before necropsy, was>600 mg/dl (above glucose meter range) in ZDF-untreated controls (group 2) versus 123±1.7 mg/dl in lean nondiabetic controls (group 1) and 163.6±17.7 in ZDF rats treated with rosiglitazone (group 3). Rats in all three experimental groups gained weight over time irrespective of treatment. The ZDF-untreated rats mean body weight at week 15 was not significantly different from lean nondiabetic (382.75±10.94 versus 365±8.2 gm, resp.; P>.05). In contrast, the mean body weight of ZDF treated rats (group 3) was more than 40% above that of ZDF untreated rats at week 15 (638.6±14.67 versus 382.75±10.9 gm, P<.001).
### 3.2. Serum Adiponectin
Serum adiponectin was determined to confirm PPARγ activation [37]. As expected, treatment with rosiglitazone increased serum adiponectin significantly in ZDF-treated compared with ZDF-untreated rats (46.33±2.83 versus 12.71±0.69 μg/mL, P<.001) or lean nondiabetic untreated rats (46.33±2.83 versus 17.13±0.95 μg/mL, P<.001). In contrast, adiponectin was significantly reduced by diabetes in ZDF-untreated compared with lean nondiabetic rats (17.13±0.95 versus 12.71±0.69 μg/mL, P<.05).
### 3.3. Real-Time PCR, Agarose Gel Electrophoresis and IHC
Real-time PCR data showed that PPARγ mRNA was expressed in the testis and was upregulated by more than two folds with rosiglitazone treatment (Figures 1(a) and 1(b)). As shown in the IHC data (Figure 1(c)) PPARγ protein was specifically localized in Leydig cells located in the interstitial space between the seminiferous tubules and in spermatocytes within the inside of seminiferous tubules basement membranes.(a) Real-time PCR analysis of testicular PPARγ mRNA levels in lean nondiabetic controls (Lean C), Zucker diabetic fatty (ZDF) untreated (Diabetic U), and ZDF rats treated with rosiglitazone (Diabetic T). Data are expressed as mean ± SE. n=8 per group, *P<.05. (b) Agarose gel (2%) showing real-time PCR products generated in (a). Lane 1, DNA markers, lanes 2-3, RNA templates (instead of cDNA) from fat (-VF) and prostate (-VPr) as negative controls. Lanes 4-5, fat from untreated rats (FU) and prostate (Pr) as positive controls. Lanes 6–9, testicular PCR products from RNA negative controls (-VT), diabetic treated (DT), Diabetic untreated (DU), and lean untreated (LU) rats. Lane 10, fat from ZDF-treated rats (FT). (c) Representative IHC of PPARγ protein in the testis of DT (panel 2a with insert box magnified in 2b), DU (panel 3), and LU rats (panel 4). Panel 1, -VT = negative control testis section (minus primary antibody. Arrows indicate PPARγ localization in spermatogonia and in Leydig cells.
(a)(b)(c)
### 3.4. Testicular Morphology and Histopathology
Detachment and disorganization of germ cells was evident in ZDF-untreated rats but there was no significant changes in the overall morphology of seminiferous tubules (Figure2). Likewise, treatment did not alter seminiferous tubules morphology but desirably reversed germ cells sloughing (ZDF-treated in panel 2 versus ZDF-untreated in panel 3).Representative photomicrographs of hematoxylin-eosin-stained sections of testis of nondiabetic Zucker lean control (Lean C, panel 1), Zucker diabetic fatty (ZDF) untreated (Diabetic U, panel 2), and ZDF rats treated with rosiglitazone (Diabetic T, panel 3). Arrows indicate germ cells strewn in the lumen of seminiferous tubules.
(a)(b)(c)
### 3.5. Serum and Intratesticular T
Total serum and intratesticular T were not significantly altered by PPARγ activation in the testis (ZDF-treated versus ZDF-untreated, P>.05) (Figures 3 and 4). As expected total serum T was significantly lowered by diabetes (ZDF-untreated or treated versus lean nondiabetic, P<.05) (Figure 3). Surprisingly, the significant reduction in total serum T in diabetic rats was not associated with a corresponding significant reduction in intratesticular T production (Figure 4). A trend toward lower intratesticular T in ZDF versus lean rats was apparent irrespective of rosiglitazone treatment.Figure 3
Serum T level in Zucker lean nondiabetic controls (Lean C), Zucker diabetic fatty- (ZDF-) untreated (Diabetic U), and ZDF rats treated with rosiglitazone (Diabetic T). Data are expressed as means ± SE. n=8 per group. *P<.05.Figure 4
Intratesticular T level in Zucker lean nondiabetic controls (Lean C), Zucker diabetic fatty- (ZDF-) untreated (Diabetic U), and ZDF rats treated with rosiglitazone (Diabetic T). n=8 per group. Data are expressed as means ± SE.
### 3.6. Serum E2
Neither diabetes nor PPARγactivation with rosiglitazone adversely altered serum E2 (Figure 5).Figure 5
Serum E2 level in Zucker lean nondiabetic controls (Lean C), Zucker diabetic fatty- (ZDF-) untreated (Diabetic U), and ZDF rats treated with rosiglitazone (Diabetic T). n=6 per group. Data are expressed as means ± SE.
### 3.7. Serum FSH and LH
Serum FSH and LH were not significantly altered by activation of PPARγ with rosiglitazone and/or by diabetes (P>.05). The ZDF-untreated rats, however, showed a trend for lower FSH and LH (ZDF-untreated versus lean nondiabetic) and treatment with rosiglitazone reversed this tendency (ZDF-treated versus ZDF-untreated rats) (Figures 6(a) and 6(b)).Serum FSH (a) and LH (b) in Zucker lean nondiabetic controls (Lean C), Zucker diabetic fatty- (ZDF-) untreated (Diabetic U), and ZDF rats treated with rosiglitazone (Diabetic T). n=7-8 per group. Data are expressed as means ± SE.
(a)(b)
## 3.1. Blood Glucose and Body Weight
The ZDF rats fed Purina 5008 high fat diet in groups 2 and 3 became diabetic by week 7 of age. By week 15 of age the mean blood glucose concentration, determined shortly before necropsy, was>600 mg/dl (above glucose meter range) in ZDF-untreated controls (group 2) versus 123±1.7 mg/dl in lean nondiabetic controls (group 1) and 163.6±17.7 in ZDF rats treated with rosiglitazone (group 3). Rats in all three experimental groups gained weight over time irrespective of treatment. The ZDF-untreated rats mean body weight at week 15 was not significantly different from lean nondiabetic (382.75±10.94 versus 365±8.2 gm, resp.; P>.05). In contrast, the mean body weight of ZDF treated rats (group 3) was more than 40% above that of ZDF untreated rats at week 15 (638.6±14.67 versus 382.75±10.9 gm, P<.001).
## 3.2. Serum Adiponectin
Serum adiponectin was determined to confirm PPARγ activation [37]. As expected, treatment with rosiglitazone increased serum adiponectin significantly in ZDF-treated compared with ZDF-untreated rats (46.33±2.83 versus 12.71±0.69 μg/mL, P<.001) or lean nondiabetic untreated rats (46.33±2.83 versus 17.13±0.95 μg/mL, P<.001). In contrast, adiponectin was significantly reduced by diabetes in ZDF-untreated compared with lean nondiabetic rats (17.13±0.95 versus 12.71±0.69 μg/mL, P<.05).
## 3.3. Real-Time PCR, Agarose Gel Electrophoresis and IHC
Real-time PCR data showed that PPARγ mRNA was expressed in the testis and was upregulated by more than two folds with rosiglitazone treatment (Figures 1(a) and 1(b)). As shown in the IHC data (Figure 1(c)) PPARγ protein was specifically localized in Leydig cells located in the interstitial space between the seminiferous tubules and in spermatocytes within the inside of seminiferous tubules basement membranes.(a) Real-time PCR analysis of testicular PPARγ mRNA levels in lean nondiabetic controls (Lean C), Zucker diabetic fatty (ZDF) untreated (Diabetic U), and ZDF rats treated with rosiglitazone (Diabetic T). Data are expressed as mean ± SE. n=8 per group, *P<.05. (b) Agarose gel (2%) showing real-time PCR products generated in (a). Lane 1, DNA markers, lanes 2-3, RNA templates (instead of cDNA) from fat (-VF) and prostate (-VPr) as negative controls. Lanes 4-5, fat from untreated rats (FU) and prostate (Pr) as positive controls. Lanes 6–9, testicular PCR products from RNA negative controls (-VT), diabetic treated (DT), Diabetic untreated (DU), and lean untreated (LU) rats. Lane 10, fat from ZDF-treated rats (FT). (c) Representative IHC of PPARγ protein in the testis of DT (panel 2a with insert box magnified in 2b), DU (panel 3), and LU rats (panel 4). Panel 1, -VT = negative control testis section (minus primary antibody. Arrows indicate PPARγ localization in spermatogonia and in Leydig cells.
(a)(b)(c)
## 3.4. Testicular Morphology and Histopathology
Detachment and disorganization of germ cells was evident in ZDF-untreated rats but there was no significant changes in the overall morphology of seminiferous tubules (Figure2). Likewise, treatment did not alter seminiferous tubules morphology but desirably reversed germ cells sloughing (ZDF-treated in panel 2 versus ZDF-untreated in panel 3).Representative photomicrographs of hematoxylin-eosin-stained sections of testis of nondiabetic Zucker lean control (Lean C, panel 1), Zucker diabetic fatty (ZDF) untreated (Diabetic U, panel 2), and ZDF rats treated with rosiglitazone (Diabetic T, panel 3). Arrows indicate germ cells strewn in the lumen of seminiferous tubules.
(a)(b)(c)
## 3.5. Serum and Intratesticular T
Total serum and intratesticular T were not significantly altered by PPARγ activation in the testis (ZDF-treated versus ZDF-untreated, P>.05) (Figures 3 and 4). As expected total serum T was significantly lowered by diabetes (ZDF-untreated or treated versus lean nondiabetic, P<.05) (Figure 3). Surprisingly, the significant reduction in total serum T in diabetic rats was not associated with a corresponding significant reduction in intratesticular T production (Figure 4). A trend toward lower intratesticular T in ZDF versus lean rats was apparent irrespective of rosiglitazone treatment.Figure 3
Serum T level in Zucker lean nondiabetic controls (Lean C), Zucker diabetic fatty- (ZDF-) untreated (Diabetic U), and ZDF rats treated with rosiglitazone (Diabetic T). Data are expressed as means ± SE. n=8 per group. *P<.05.Figure 4
Intratesticular T level in Zucker lean nondiabetic controls (Lean C), Zucker diabetic fatty- (ZDF-) untreated (Diabetic U), and ZDF rats treated with rosiglitazone (Diabetic T). n=8 per group. Data are expressed as means ± SE.
## 3.6. Serum E2
Neither diabetes nor PPARγactivation with rosiglitazone adversely altered serum E2 (Figure 5).Figure 5
Serum E2 level in Zucker lean nondiabetic controls (Lean C), Zucker diabetic fatty- (ZDF-) untreated (Diabetic U), and ZDF rats treated with rosiglitazone (Diabetic T). n=6 per group. Data are expressed as means ± SE.
## 3.7. Serum FSH and LH
Serum FSH and LH were not significantly altered by activation of PPARγ with rosiglitazone and/or by diabetes (P>.05). The ZDF-untreated rats, however, showed a trend for lower FSH and LH (ZDF-untreated versus lean nondiabetic) and treatment with rosiglitazone reversed this tendency (ZDF-treated versus ZDF-untreated rats) (Figures 6(a) and 6(b)).Serum FSH (a) and LH (b) in Zucker lean nondiabetic controls (Lean C), Zucker diabetic fatty- (ZDF-) untreated (Diabetic U), and ZDF rats treated with rosiglitazone (Diabetic T). n=7-8 per group. Data are expressed as means ± SE.
## 4. Discussion
Rosiglitazone and other TZDs were shown to decrease hyperandrogenemia in women with PCOS and to repress major steroidogenic enzymes (reviewed in [26]). Although the in vitro steroidogenic repression potency of rosiglitazone was ranked intermediate between troglitazone and pioglitazone [13], its in vivo impact on sex steroids in diabetic subjects was unknown. Specifically, information on how changes in serum steroids in male diabetic subjects relate to changes in PPARγ activity was lacking. This study documents the link between testicular PPARγ activation with the antidiabetic drug rosiglitazone and the male sex hormone profile under diabetic conditions. Daily oral gavage of ZDF rats with rosiglitazone activated PPARγ in the testis and normalized the testicular germ cell derangement seen in diabetic testis. The stimulation of systemic and testicular PPARγ activity by rosiglitazone, however, did not significantly alter the concentrations of the gonadotropins LH and FSH, the sex steroid T (both total serum and intratesticular), or serum E2.

Systemic activation of PPARγ in this study was evident from the reversal of hyperglycemia, increased serum adipocytokine adiponectin, and weight gain in ZDF-treated rats. These effects are considered biological signatures of rosiglitazone-induced PPARγ activation, typically in body fat depots [38, 39]. The specific activation of testicular PPARγ shown here is consistent with other unrelated studies that showed the presence of this receptor in rat testis [40] and its modulation by synthetic chemicals such as phthalate esters [34, 41, 42]. Because PPARγ can also be activated by endogenous ligands such as polyunsaturated fatty acids from dietary sources and by metabolites of arachidonic acid [43], a contribution of these ligands to the observed upregulation of testicular PPARγ mRNA and protein in ZDF-treated rats could not be ruled out.

Compared to nondiabetic lean rats, ZDF-untreated rats showed consistent structural disorganization of the germinal epithelium, evident from the abnormal accumulation of germ cells in the lumen of seminiferous tubules. This effect likely resulted from diabetes-induced oxidative stress previously recognized in diabetic rat testis [44]. Interestingly, the positive staining of germ cells for PPARγ protein in the IHC data suggests that rosiglitazone crosses the blood-testis barrier to enter the adluminal compartment of the seminiferous tubules, resulting in activation of PPARγ and reversal of germ cell sloughing. This novel effect is possibly mediated by the reported functional property of PPARγ to ameliorate oxidative stress [11].

As previously known [45], diabetes in this study significantly lowered serum T. Rosiglitazone treatment, however, neither restored nor reduced serum T in ZDF-treated rats. Paradoxically, the diabetes-induced reduction in serum T in ZDF-treated and untreated rats was not associated with a corresponding significant drop in intratesticular T production or androgen receptor expression (data not shown). Although unexpected, this finding was consistent with the nonsignificant changes observed in serum LH concentration. The unaltered intratesticular T production, contrasted with significantly lowered serum T concentration, may reflect a reduction in sex hormone binding globulin (SHBG), which is essential for T transportation in blood [46].
Reagents for determination of rat SHBG are not currently available, and attempts to quantify rat SHBG using a human ELISA kit were not successful because of reagent incompatibility.

Rosiglitazone treatment reversed the obesity-induced reduction of the T intermediate steroid hormone precursor 17-hydroxyprogesterone in the genetically related obese but nondiabetic Zucker rats [22]. In the same study, however, rosiglitazone treatment did not alter total serum T. Compared with the above findings, the data in our study allow for differentiation of the steroidogenic effects of diabetes (ZDF-untreated versus lean nondiabetic rats) from those of PPARγ activation (ZDF-treated versus ZDF-untreated rats) but do not permit separation of obesity effects from those of diabetes. The lack of significant change in total serum T in the above study was nevertheless consistent with our findings. In both the aforementioned study and ours, a tendency toward lowered serum T production was observed in ZDF and Zucker obese rats versus lean littermates, irrespective of treatment. A normal transient T reduction was reported in 2–4-month-old Zucker obese rats [47]. The age of rats at the point of serum collection in this study was 3.75 months, and thus the lowered T observed in ZDF versus lean rats could be a reflection of the above observation in these two genetically related rat models.

Studies that support a steroidogenic regulatory role for PPARγ in women treated with TZDs and in in vitro ovarian cell culture models have produced contrasting results, as outlined in several reviews [1, 21, 26, 27]. While some in vitro studies showed that rosiglitazone and other TZDs such as troglitazone could inhibit steroidogenic enzymes independent of PPARγ activation [4, 13, 16–18, 20], an inhibitory effect of rosiglitazone was not reflected in total T and E2 concentrations in ZDF-treated rats in our study. In summary, our data show that PPARγ activation with rosiglitazone for eight weeks had no negative impact on total sex hormone concentrations in diabetic male rats. This finding is firmly in line with studies that showed strong TZD-PPARγ activation [48, 49] but weaker in vitro steroidogenic inhibitory effects of rosiglitazone [13].
---
*Source: 101857-2009-06-11.xml* | 2009 |
# Facilitative and Inhibitory Effect of Litter on Seedling Emergence and Early Growth of Six Herbaceous Species in an Early Successional Old Field Ecosystem
**Authors:** Qiang Li; Pujia Yu; Xiaoying Chen; Guangdi Li; Daowei Zhou; Wei Zheng
**Journal:** The Scientific World Journal
(2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/101860
---
## Abstract
In the current study, a field experiment was conducted to examine the effects of litter on seedling emergence and early growth of four dominant weed species from the early successional stage of an old field ecosystem and two perennial grassland species from late successional stages. Our results showed that increased litter cover decreased soil temperature and its variability over time and improved soil moisture status. Surface soil electrical conductivity decreased as litter increased. Increased litter delayed seedling emergence and slowed the emergence rate. The emergence percentage of seedlings and the establishment success rate first increased and then decreased as litter cover increased. When litter biomass was below 600 g m−2, litter increased seedling emergence and establishment success in all species. With increasing litter, the basal diameter of seedlings decreased, but seedling height increased. Increasing amounts of litter tended to increase seedling dry weight and stem leaf ratio. Different species responded differently to the increase of litter. Puccinellia tenuiflora and Chloris virgata will acquire more emergence benefits under high litter amounts. It is predicted that Chloris virgata will further dominate this naturally successional old field ecosystem with litter accumulation. Artificial addition of P. tenuiflora seeds may be required to accelerate old field succession toward mature grassland.
---
## Body
## 1. Introduction
The emergence and early growth of seedlings are two crucial processes for the establishment and performance of plants [1, 2]. For species in natural communities, successful emergence not only implies that a seedling finally breaks through the soil surface but also emphasizes the importance of emergence time and rate. Previous studies suggested that small differences in the emergence order of plants could determine their final fates under ubiquitous competition [3, 4]. Generally, during seedling establishment in a community, earlier emergence can be a good trait for plants, because quicker emergence helps plants gain priority in resource utilization, including light, soil water, and nutrition [5]. Apart from emergence, early seedling growth is also of great importance for the establishment and performance of plants [2]. In a short time after emergence, seedlings with a relatively faster growth rate can reach a greater plant size, which helps them occupy a wider niche [6] and consequently acquire more advantage in competition for resources, particularly when resources are limited [7].

Seedling emergence and early growth can be impacted by various factors, such as seed characteristics [8, 9], seed position in the soil profile [10, 11], environmental conditions such as climate [12], soil physical and chemical properties [12], and biological interference [11]. Increasing evidence suggests that environmental conditions play an important role in regulating the emergence and early growth of seedlings, acting not only directly on the emergence and growth processes but also by modifying the effects of other factors on these two processes [13–16].

The amount of litter induced by land use change is an important environmental factor for plant systems globally [17, 18], which may control species recruitment and affect the structure and dynamics of plant communities [19, 20]. Effects of litter on seedling emergence and growth have been widely reported [17, 18, 21]. Litter can promote seed germination and seedling growth by moderating soil temperature and moisture under ground cover and by increasing soil nutrients through decomposition [21–23]. However, litter may also be disadvantageous for seedling emergence and growth by reducing the light radiation reaching the soil surface [24], forming a mechanical barrier [19, 25], or possibly releasing toxic secondary metabolites [26, 27]. The net effect of litter on seedling emergence and growth is the balance between facilitative and inhibitory actions. Previous studies showed that this "net effect" was controlled by ecosystem type, litter amount, seed and seedling characteristics, and experimental method (greenhouse versus field) [11, 17, 18, 28]; therefore, for a better understanding of the roles of litter in regulating the emergence and early growth of seedlings, it is necessary to run further experiments under different ecosystem types and experimental methods.

The abandonment of reclaimed grassland occurs globally due to degradation and declined yield, which causes old fields to form. These old fields will be converted back into grassland after going through long-term natural succession [29–31]. Once abandoned, the old fields are occupied by natural vegetation, which increases litter cover [29, 32] and subsequently induces changes in the surface soil environment [33], which may have important effects on plant establishment and species recruitment.
During the early successional stage, these old fields generally are dominated by pioneer species (normally volunteer weeds) which deposit a large number of seeds in the soil. The fate of these seeds, together with the increasing litter cover, will greatly influence further community assembly [34].

In the Songnen Plain of northeast China, large areas of cropland were abandoned and became old fields due to aridity and soil alkalization. The restoration of these old field ecosystems has important ecological and economic significance for this region. Soil moisture and alkalinity are two important factors regulating old field succession, and we expect that both can be improved by increasing litter amount. We designed this experiment to examine how varying litter cover impacts soil properties, emergence features including emergence time and rate, and early growth of seedlings of four dominant weed species from the early successional stage and two perennial species from the late successional stage in this old field ecosystem. We expected that our study would provide further understanding of the relationship between litter cover and plant establishment and also support decision making to restore this old field ecosystem toward grassland.
## 2. Materials and Methods
### 2.1. Study Site
This study was conducted at the Grassland Farming Research Station (E123°31′, N44°33′; elevation 145 m) of the Northeast Institute of Geography and Agroecology, Chinese Academy of Sciences, located in the Songnen Plain of northeast China. The study site has a semiarid continental climate. Mean annual temperature is 4.9°C; annual precipitation is approximately 410 mm, with 70% falling from June to September. The soil type is meadow saline-alkali soil, with high soil basic salt content. The typical vegetation at this study site is Leymus chinensis meadow. A large area of L. chinensis meadow was converted into cropland due to the demand for grain during the last few decades. However, because of the decline of soil fertility and crop yield after continuous tillage, some of the reclaimed croplands were abandoned as old fields and were expected to be restored to grassland. The current study was conducted in 2011 on a cropland abandoned two years earlier. At the start of the experiment, the site was dominated by a range of annual weed species. The soil bulk density, soil organic carbon, and total nitrogen concentration at the depth of 0–20 cm were 1.47 ± 0.04 g cm−3, 10.45 ± 0.58 g kg−1, and 0.98 ± 0.09 g kg−1, respectively.
### 2.2. Study Species
We selected four dominant weed species in this old field, including Abutilon theophrasti (Malvaceae), Chenopodium glaucum (Chenopodiaceae), Sonchus brachyotus (Compositae), and Chloris virgata (Gramineae), which together accounted for more than 90% of the aboveground biomass in the community. Among these four species, A. theophrasti, C. glaucum, and S. brachyotus were typical weed species of cropland, while C. virgata was a common species in both cropland and natural grassland. Two perennial species from natural grassland, Puccinellia tenuiflora (Gramineae) and Lespedeza davurica (Leguminosae), were also included; they are potentially recruited species in the later successional stage of the old field (Table 1).

Table 1: Habitat, life form, family, mass per seed (mg), and germinating capacity of each species included in the study.

| Habitat | Life form | Species | Start time of seed ripening and falling | Mass per seed (mg) | Germinating capacity (%) |
| --- | --- | --- | --- | --- | --- |
| Grassland | Perennial | P. tenuiflora | Early July | 0.562 ± 0.052 | 82 |
| Grassland | Perennial | L. davurica | Late August | 2.053 ± 0.044 | 86 |
| Old field | Perennial | S. brachyotus | Mid-August | 1.138 ± 0.017 | 94 |
| Old field | Annual | A. theophrasti | Mid-August | 8.799 ± 0.125 | 92 |
| Old field | Annual | C. glaucum | Mid-August | 0.475 ± 0.028 | 92 |
| Grassland, old field | Annual | C. virgata | Early July | 0.337 ± 0.012 | 84 |

The seeds of each species were collected in autumn 2010 from 10 different populations and at least 10 individuals of each population. Seeds were stored in darkness at room temperature (20°C) until sowing on 26 April 2011. An initial germinating capacity test on additional seed batches was conducted by examining germination rate under optimum light, temperature, and water conditions.
### 2.3. Experimental Design
The experiment was a completely randomised block design. There were 4 replicate blocks; 5 litter treatments (0, 200, 400, 600, and 800 g m−2) were randomly assigned within each block. In total, there were 20 plots, and the plot size was 4 m × 4 m with 0.5 m buffers between plots. Within each plot, two microplots (1 m × 1 m) were placed at the centre of the plot, side by side, with 0.5 m between them.

In mid-April 2011, all aboveground plant material was removed. The soil at the depth of 0–20 cm in all microplots was collected and then steam-sterilised prior to the experiments to kill any plant seeds potentially present in the substrate. The steam-sterilised soil was filled back into the original microplots.

On 26 April 2011, one of the microplots, randomly selected, was sown with 50 seeds of each species, and the other microplot did not receive any seed, serving as a control; any germination in the control plot came from external seeds. Prior to sowing, L. davurica seeds were soaked in 98% H2SO4 for 0.5 hour to break the hard seed coat. The seeds were spread on the soil surface and covered by a thin layer of soil. Immediately after sowing, the whole plot was covered by litter at the designated amount according to treatment. The litter was L. chinensis hay harvested in the adjacent meadow in autumn 2010. The designed litter addition levels represented the natural litter production from low to high productivity in the old field ecosystem.
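For illustration only, the within-block randomisation of the five litter treatments can be sketched in a few lines of Python. This is a minimal sketch, not the authors' code; the seed value and labels are hypothetical.

```python
import random

# Completely randomised block design: 4 blocks x 5 litter treatments (g m^-2).
TREATMENTS = [0, 200, 400, 600, 800]
N_BLOCKS = 4

random.seed(42)  # fixed seed so the example layout is reproducible

layout = {}
for block in range(1, N_BLOCKS + 1):
    order = TREATMENTS[:]     # copy, so each block is shuffled independently
    random.shuffle(order)     # randomise treatment positions within the block
    layout[block] = order

for block, order in layout.items():
    print(f"Block {block}: plots receive litter (g m^-2) in order {order}")
```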
### 2.4. Measurements
#### 2.4.1. Soil Properties
When the soil was filled back, a soil water probe and a soil temperature probe, connected to an FDS-100 Automatic Temperature and Moisture Recorder (Handan Electronic Technology Company, Handan, China), were installed at 5 cm depth in each microplot. Soil temperature and moisture at 5 cm were recorded automatically every hour during the experimental period from 27 April to 7 June 2011. To reflect soil salinity, soil electrical conductivity (EC) was measured every 6 or 7 days at 0–10 cm in each of the sown microplots using a Field Operated Meter (Easy Test, Poland).
#### 2.4.2. Seedling Emergence
Emerged seedlings were checked and marked every day in all microplots from sowing until 7 June 2011; in fact, no new germination was counted after 26 May. On each count day, all newly emerged seedlings of the 6 study species were marked using plastic labels with species name and emergence date. Seedling emergence was defined as a seedling successfully penetrating through the litter cover.
#### 2.4.3. Early Seedling Growth
On 8 June 2011, the marked seedlings of each species were counted, and seedling mortality was recorded in each sown plot. For each species, five seedlings were randomly selected from each microplot according to emergence sequence to measure height and basal diameter. Each seedling was divided into stem and leaf (with petiole) to determine dry weight after being oven-dried at 70°C for 48 h.
### 2.5. Data Analysis
Seedling emergence time was the number of days from seed sowing to seedling emergence [35]. Seedling emergence rate was represented using the Emergence Rate Index (ERI) as described by Erbach [36]. The ERI is calculated as follows:

$$\mathrm{ERI} = \sum_{\mathrm{FD}}^{\mathrm{LD}} \frac{P_{n} - P_{(n-1)}}{N}, \tag{1}$$

where N is the number of days since planting, P_n is the percentage of plants emerged on day n, P_{(n-1)} is the percentage of plants emerged on day (n-1), LD is the last day when emergence was complete, and FD is the first day counting began. In this study, FD was set at 1.

The emergence percentage was calculated as the ratio of the number of emerged seedlings to the number of germinable seeds. The seedling survival rate was calculated as the ratio of remaining marked seedlings to all marked seedlings. The establishment success rate for each species was calculated as the emergence percentage multiplied by the seedling survival rate [35].

Repeated-measures ANOVA tests were used to examine the effects of litter cover on soil electrical conductivity with time as a fixed factor. A simple linear regression was used to examine the correlation between soil EC and soil moisture. Two-way ANOVA was applied to determine the main and interactive effects of litter cover and species on seedling emergence, seedling establishment, and seedling growth. Mean comparison was conducted amongst treatments using Duncan's t-test after all data were checked for normality. Significant differences for all statistical tests were evaluated at P = 0.05. All data analyses were conducted with the SPSS 16.0 software (Chicago, IL, USA).
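To make equation (1) and the derived establishment metrics concrete, here is a minimal Python sketch of the ERI and of the emergence/survival/establishment calculations defined above. The daily counts are invented for illustration; following the stated definition, N (days since planting) is taken as the day number n of each daily increment, and FD is fixed at 1 as in the study.

```python
def emergence_rate_index(cum_pct_by_day, fd=1):
    """ERI (Erbach): sum over days FD..LD of (P_n - P_(n-1)) / N.

    cum_pct_by_day maps day-since-planting n -> cumulative percentage of
    plants emerged by day n. N is interpreted as the day number n of each
    increment, per the definition given in the text (an assumption).
    """
    days = sorted(d for d in cum_pct_by_day if d >= fd)
    eri, prev = 0.0, 0.0
    for n in days:
        eri += (cum_pct_by_day[n] - prev) / n
        prev = cum_pct_by_day[n]
    return eri

# Hypothetical cumulative emergence (%) for one species in one microplot.
cum_pct = {5: 10.0, 6: 30.0, 7: 55.0, 8: 70.0, 10: 78.0}
print(f"ERI = {emergence_rate_index(cum_pct):.2f}")

# Derived rates, as defined in the text (numbers are illustrative):
emerged, germinable = 39, 42   # emerged seedlings / germinable seeds sown
surviving, marked = 35, 39     # marked seedlings still alive / all marked
emergence_pct = emerged / germinable
survival_rate = surviving / marked
establishment_success = emergence_pct * survival_rate
print(f"emergence = {emergence_pct:.1%}, survival = {survival_rate:.1%}, "
      f"establishment = {establishment_success:.1%}")
```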
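The statistical workflow above (two-way ANOVA for litter cover × species effects, and a simple linear regression of soil EC on soil moisture) was run in SPSS; an equivalent open-source sketch in Python could look like the following. The data here are randomly generated placeholders, and statsmodels and scipy are assumed to be available.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical long-format data: 4 blocks x 5 litter levels x 6 species.
litter = [0, 200, 400, 600, 800]
species = ["Pt", "Ld", "Sb", "At", "Cg", "Cv"]
rows = [(b, l, s, rng.normal(50 + 0.01 * l, 5))
        for b in range(4) for l in litter for s in species]
df = pd.DataFrame(rows, columns=["block", "litter", "species", "emergence_pct"])

# Two-way ANOVA: main and interactive effects of litter cover and species.
model = ols("emergence_pct ~ C(litter) * C(species)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Simple linear regression of soil EC on soil moisture (hypothetical values).
moisture = rng.uniform(10, 30, size=40)
ec = 1.2 - 0.02 * moisture + rng.normal(0, 0.05, size=40)
res = stats.linregress(moisture, ec)
print(f"slope = {res.slope:.4f}, r = {res.rvalue:.2f}, P = {res.pvalue:.3g}")
```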
## 3. Results
### 3.1. Soil Properties
Mean daily soil temperature and moisture (5 cm belowground) varied greatly over the growing season (Figures 1(a) and 1(c)). Mean daily soil temperature increased over time for all treatments but fluctuated more under lower than under higher litter cover treatments. Soil temperature gradually decreased as litter cover increased: mean soil temperature under the 600 and 800 g m−2 litter covers was significantly lower than under the 0, 200, and 400 g m−2 litter cover treatments (Figures 1(a) and 1(b)). Soil moisture generally increased with increased litter cover, and the 800 g m−2 litter cover treatment had the highest soil moisture (Figures 1(c) and 1(d)). Soil EC showed a strong temporal pattern (P < 0.001), which was significantly influenced by litter cover (P < 0.001; Figure 2(a)). Soil EC decreased with increasing litter cover over the experimental period (Figure 2(b)). Regression analysis showed a significantly negative correlation between soil EC and soil moisture (Figure 2(c)).

Figure 1: Soil temperature ((a) and (b)) and moisture ((c) and (d)) under different litter cover treatments. The lines represent the temporal dynamics of soil temperature and moisture; the bars with mean + SE represent mean soil temperature and moisture from 27 April 2011 to 7 June 2011; different letters indicate significant difference between treatments at P < 0.05.

Figure 2: The temporal dynamics (a) and mean values (b) of soil electrical conductivity (EC), and the relationship between soil EC and soil moisture (c); the bars with mean ± SE represent means from 27 April 2011 to 7 June 2011; different letters indicate significant difference between treatments at P < 0.05.
### 3.2. Seedling Emergence Time, Rate, and Percentage
The seedling emergence time, emergence rate, and emergence percentage varied among litter covers and species, and there was a significant interaction between litter cover and species on the emergence rate of seedlings (Table 2; Figure 3). C. glaucum germinated at the earliest time and fastest rate, followed by P. tenuiflora and C. virgata, and L. davurica had the latest emergence time and slowest emergence rate (Figures 3(a) and 3(b)). Increasing litter cover tended to delay seedling emergence time and reduce emergence rate, although a slightly higher emergence rate of C. glaucum was found with 400 and 600 g m−2 than with 0 and 200 g m−2 litter cover (Figures 3(a) and 3(b)). The emergence time of L. davurica seedlings was delayed by two days from 0 to 800 g m−2 litter cover (Figure 3(a)). The emergence rate index of C. virgata seedlings decreased by 28% from 0 to 800 g m−2 litter cover (Figure 3(b)). Amongst species, C. virgata had the highest emergence percentage, followed by P. tenuiflora, and C. glaucum had the lowest emergence percentage under all litter covers (Figure 3(c)). The emergence percentage of seedlings first increased and then decreased as litter cover increased, and 400 g m−2 litter cover generally had the most positive effect on emergence percentage (Figure 3(c)). When litter cover was below 600 g m−2, the presence of litter increased seedling emergence percentage in all species, in particular P. tenuiflora and C. virgata, whose emergence percentage was still higher under 800 g m−2 litter cover than with no litter (Figure 3(c)).

Table 2: The F and P values of two-way ANOVA analysis for litter cover, species, and their interactions on seedling emergence, seedling establishment, and seedling growth.

| Variables | Litter cover F | Litter cover P | Species F | Species P | Litter cover × species F | Litter cover × species P |
| --- | --- | --- | --- | --- | --- | --- |
| Emergence time (day) | 26.25 | <0.001 | 885.28 | <0.001 | 1.34 | 0.117 |
| Emergence rate index | 7.11 | <0.001 | 325.33 | <0.001 | 3.08 | <0.001 |
| Emergence percentage (%) | 15.48 | <0.001 | 77.44 | <0.001 | 0.68 | 0.840 |
| Seedling survival rate (%) | 43.04 | <0.001 | 6.37 | <0.001 | 1.63 | 0.062 |
| Establishment success rate (%) | 19.31 | <0.001 | 79.59 | <0.001 | 0.62 | 0.892 |
| Basal diameter of seedling (mm) | 9.49 | <0.001 | 487.26 | <0.001 | 1.67 | 0.034 |
| Seedling height (cm) | 103.94 | <0.001 | 398.22 | <0.001 | 4.41 | <0.001 |
| Seedling dry weight (mg) | 6.23 | <0.001 | 220.66 | <0.001 | 6.10 | <0.001 |
| Stem leaf ratio | 176.85 | <0.001 | 48.12 | <0.001 | 5.94 | <0.001 |

Figure 3: The emergence time (a), emergence rate (b), and emergence percentage (c) of seedlings under different species and litter covers. Values are represented as mean + SE; different letters indicate significant difference between treatments at P < 0.05.
### 3.3. Seedling Survival Rate and Establishment Success Rate
Two-way ANOVA indicated that litter cover and species had significant impacts on the seedling survival rate and establishment success rate (Table 2). Litter cover enhanced the seedling survival rate (Figure 4(a)). P. tenuiflora and C. virgata had relatively higher seedling survival rates compared with the other species under all litter covers, while A. theophrasti had the lowest seedling survival rate under 0 litter cover. As litter cover increased, seedling survival rate increased greatly (Figure 4(a)). The establishment success rate showed a similar trend to the emergence percentage of seedlings (Figure 4(b)).

Figure 4: Seedling survival rate (a) and establishment success rate (b) under different species and litter covers. Values are represented as mean + SE; different letters indicate significant difference between treatments at P < 0.05.
### 3.4. Seedling Growth Characteristics
There were significant interactions between litter cover and species in the growth characteristics of seedlings (Table 2). Increasing litter cover resulted in a decrease in the basal diameter of seedlings but an increase in seedling height (Figures 5(a) and 5(b)). Across all species, A. theophrasti and S. brachyotus had the highest seedling dry weight, followed by C. glaucum and C. virgata, and L. davurica had the lowest under all litter covers. High litter cover tended to increase seedling dry weight in most species, except for S. brachyotus, which showed a significant decrease in seedling dry weight from 400 to 800 g m−2 litter cover, and L. davurica, which showed no change with litter cover (Figure 5(c)). The stem leaf ratio of all species generally increased as litter cover increased; the stem leaf ratio under 800 g m−2 was significantly higher than under the other litter cover treatments for all species (Figure 5(d)).

Figure 5: Seedling basal diameter (a), height (b), dry weight (c), and stem leaf ratio (d) under different species and litter covers. Values are represented as mean + SE; different letters indicate significant difference between treatments at P < 0.05.
## 4. Discussion
### 4.1. The Influence of Litter on Soil Environment
Litter intercepts incident light and rain and changes the surface structure, hence affecting the transfer of heat and water between the soil and the atmosphere [21], which can greatly influence soil temperature and moisture. Our results showed that increasing litter cover reduced soil temperature and its variability over time, which is consistent with previous research [21, 24]. As Facelli and Pickett explained [37], litter intercepts incoming solar radiation and outgoing longwave radiation, forming an insulating layer that shields the soil from direct solar heating and from heat absorption from the atmosphere. Litter cover can improve soil moisture status, as evidenced by the increased soil moisture with increased litter cover in the current study. Murphy et al. reported that increased litter cover improves water infiltration and reduces water evaporation, which can help maintain soil moisture [23].

Soil drought and alkalization are two primary factors limiting plant establishment and growth. At the study site, surface soil drought and alkalization generally occur simultaneously due to the rise of soluble salt from deep soil layers with water transpiration. Our results showed that surface soil EC was negatively correlated with soil moisture and that the increase of litter cover reduced surface soil EC, indicating that high litter cover can increase water infiltration and decrease water evaporation, hence reducing soil salinity simply by keeping salt in deeper soil layers. The changes in soil temperature, moisture, and salinity due to litter cover may facilitate plant emergence and establishment [38]. On the other hand, litter cover can reduce the quantity and quality of light (e.g., the red : far-red ratio) experienced by seeds and seedlings [37] or form a physical obstruction to seedling growth [25], which may act negatively on the emergence and establishment of plants [17, 21].
### 4.2. The Effects of Litter on Emergence and Early Growth of Seedling
The balance between the facilitative and inhibitory effects of litter on seedling emergence and growth depends on the amount of litter cover [39, 40]. Moderate litter cover may support seedling emergence and growth by improving soil microclimate conditions [24, 41, 42]. However, the facilitative effects are reduced when the amount of litter cover is too high [17, 18], because high litter cover reduces light quantity and quality, causing deep shade or darkness [24, 41], and may create an impenetrable physical barrier for seedlings [37]. Loydi et al. found that litter cover had positive effects on emergence in grassland ecosystems when litter was below 500 g m−2, and that seedling survival and biomass increased with <250 g m−2 litter cover [18]. Our results showed that the emergence percentage and establishment success rate of all species increased when litter cover was below 600 g m−2, with no significant differences between the 800 g m−2 and 0 litter treatments, in particular for the two grass species (Figures 3(c) and 4(b)). Nevertheless, the emergence percentage and establishment success rate of P. tenuiflora and C. virgata were still higher under 800 g m−2 litter cover than with no litter. Moreover, in our study, even under high litter cover, the seedling survival rate of all species was higher than in the 0 litter treatment, and increasing litter cover tended to increase the seedling biomass of all species except S. brachyotus (Figure 5(c)). In addition, more litter can greatly increase soil moisture and reduce soil salinity. These results indicate that the facilitative effects of increasing litter cover outweigh its inhibitory effects at our study site.

The emergence time and rate may determine the plant sequence in resource utilization, which can influence a plant's fate in the community, in particular when resources are limited [5]. Our study indicated that high litter cover delayed the emergence time of all species and reduced the emergence rate of most species except C. glaucum (Figure 3(b)), as seedlings needed more time to penetrate a thick litter layer.

The change of seedling morphology reflects plant adaptability to environmental conditions [43]. Our results showed that the basal diameter of seedlings decreased, but seedling height and stem leaf ratio increased, as litter cover increased. This was because increasing litter cover enhances the obstruction to seedling emergence [25] and decreases near-surface light availability [37], which causes seedlings to invest more energy in the stem for upward growth to penetrate litter and intercept light [44], consequently inducing more biomass allocation to the stem. The reduced seedling basal diameter is advantageous for seedlings passing through small gaps under dense litter, which may be an effective adaptive strategy for plants subjected to thick litter cover.
### 4.3. The Responses of Emergence and Early Growth for Different Species to Litter Cover Change
Different species respond differently to litter cover in terms of seedling emergence rate and early growth. Seed size has been considered a good predictor of the effect of litter [28, 45, 46]. Loydi et al. showed that litter had stronger negative effects on the emergence, survival, and biomass of seedlings from smaller seeds (<1 mg) but slight positive effects on species with bigger seeds (>1 mg) [18]. Relevant mechanisms have been proposed to explain these differences, including the light requirement of small-seeded species during germination [47], the reserve effect [48], the seedling size effect [49, 50], and the metabolic effect [18] related to seed size. In our study, the effect of litter on emergence rate and seedling growth varied between species, but no orderly species responses were found to be related to seed size. Regardless of seed size, we did not observe any trend in seedling emergence rate and growth related to life form or source habitat. Compared with previous studies, our results may differ either because there are more complicated interrelations between environments and plants controlling the effect of litter in this old field ecosystem or because the study species were too few to reflect a general rule.

Although no general relationships were found between seedling emergence, growth, and species properties, our results showed that P. tenuiflora and C. virgata had relatively higher emergence percentages, seedling survival rates, and establishment success rates compared with the other species (Figures 3(c) and 4). The emergence and seedling biomass of these two species showed a more positive response to high litter cover than most other species, which may be attributable to their subuliform morphology in the earlier seedling stage. As typical weed species, although A. theophrasti, C. glaucum, and S. brachyotus had high seedling biomass (Figure 5(c)), their emergence percentages and establishment success rates were relatively lower than those of the other species (Figures 3(c) and 4(b)) and were strongly influenced by high litter cover. The emergence rate and establishment success rate of L. davurica were higher than those of A. theophrasti and C. glaucum; however, its seedling performance was the worst among all species (Figure 5). These results indicate that C. virgata will further dominate this naturally successional old field ecosystem as litter accumulation increases. First, C. virgata will keep a higher emergence and establishment capacity under high litter cover than the other three weed species. Secondly, C. virgata can produce smaller but more numerous seeds. Lastly, the seeds of C. virgata mature and fall in early July, before new litter is formed (Table 1). As a result, C. virgata seeds can easily pass through the litter layer and form a soil seed bank [11, 28]. In contrast, the seeds of the other three weed species are bigger than those of C. virgata and commonly mature and fall after mid-August, when more litter has fallen, which may greatly reduce the opportunity for seeds to contact the soil, consequently leaving them fewer opportunities to germinate and establish compared with C. virgata.
## 5. Conclusions
The increase of litter cover can increase soil moisture and decrease surface soil salinity. Increased litter cover can delay seedling emergence time and reduce emergence rate. Litter cover below 600 g m−2 can promote the seedling emergence and establishment of all studied species owing to improved soil moisture conditions. Different species respond differently to increased litter cover. P. tenuiflora and C. virgata will acquire more emergence benefits under high litter cover. Therefore, this old field ecosystem is expected to become a C. virgata-dominated annual grassland under natural succession. Given its emergence performance and seed features similar to those of C. virgata, P. tenuiflora has the potential to invade and establish in this old field ecosystem and accelerate its succession toward mature grassland; however, seed dispersal limitation may impede the invasion of P. tenuiflora, so artificial seed addition may be necessary.
---
*Source: 101860-2014-07-01.xml*

# Facilitative and Inhibitory Effect of Litter on Seedling Emergence and Early Growth of Six Herbaceous Species in an Early Successional Old Field Ecosystem

**Authors:** Qiang Li; Pujia Yu; Xiaoying Chen; Guangdi Li; Daowei Zhou; Wei Zheng

**Journal:** The Scientific World Journal

(2014)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2014/101860

---
## Abstract
In the current study, a field experiment was conducted to examine the effects of litter on the seedling emergence and early growth of four dominant weed species from the early successional stage of an old field ecosystem and two perennial grassland species from late successional stages. Our results showed that increased litter cover decreased soil temperature and its variability over time and improved soil moisture status. Surface soil electrical conductivity decreased as litter cover increased. Increased litter cover delayed seedling emergence and reduced emergence rate. The emergence percentage of seedlings and the establishment success rate first increased and then decreased as litter cover increased. When litter biomass was below 600 g m−2, litter increased seedling emergence and establishment success in all species. As litter increased, the basal diameter of seedlings decreased, but seedling height increased. Increasing amounts of litter tended to increase seedling dry weight and stem leaf ratio. Different species responded differently to the increase of litter. Puccinellia tenuiflora and Chloris virgata will acquire more emergence benefits under high litter amounts. It is predicted that Chloris virgata will dominate further in this naturally successional old field ecosystem as litter accumulates. Artificial addition of P. tenuiflora seeds may be required to accelerate old field succession toward mature grassland.
---
## Body
## 1. Introduction
The emergence and early growth of seedlings are two crucial processes for the establishment and performance of plants [1, 2]. For species in natural communities, successful emergence not only implies that a seedling finally breaks through the soil surface but also emphasizes the importance of emergence time and rate. Previous studies suggested that small differences in the emergence order of plants can determine their final fates under ubiquitous competition [3, 4]. Generally, during seedling establishment in a community, earlier emergence can be a good trait for plants, because quicker emergence helps plants gain priority in resource utilization, including light, soil water, and nutrients [5]. Apart from emergence, early seedling growth is also of great importance for the establishment and performance of plants [2]. In a short time after emergence, seedlings with relatively faster growth rates can reach a greater plant size, which helps them occupy a wider niche [6] and consequently gain more advantage in competition for resources, particularly when resources are limited [7].

Seedling emergence and early growth can be affected by various factors, such as seed characteristics [8, 9], seed position in the soil profile [10, 11], environmental conditions such as climate [12], soil physical and chemical properties [12], and biological interference [11]. Increasing evidence suggests that environmental conditions play an important role in regulating the emergence and early growth of seedlings, not only acting directly on these two processes but also modifying the effects of other factors on them [13–16].

The amount of litter induced by land use change is an important environmental factor for plant systems worldwide [17, 18]; it may control species recruitment and affect the structure and dynamics of plant communities [19, 20]. The effects of litter on seedling emergence and growth have been widely reported [17, 18, 21]. Litter can promote seed germination and seedling growth by maintaining soil temperature and moisture under its cover and by increasing soil nutrients through decomposition [21–23]. However, litter may also be disadvantageous for seedling emergence and growth by reducing the light radiation reaching the soil surface [24], forming a mechanical barrier [19, 25], or possibly releasing toxic secondary metabolites [26, 27]. The net effect of litter on seedling emergence and growth is the balance between these facilitative and inhibitory actions. Previous studies showed that this net effect is controlled by ecosystem type, litter amount, seed and seedling characteristics, and experimental method (greenhouse versus field) [11, 17, 18, 28]; therefore, to better understand the roles of litter in regulating the emergence and early growth of seedlings, it is necessary to run further experiments under different ecosystem types and experimental methods.

The abandonment of reclaimed grassland occurs globally because of degradation and declining yield, which causes old fields to form. These old fields can revert to grassland after a long period of natural succession [29–31]. Once abandoned, old fields are occupied by natural vegetation, which increases litter cover [29, 32] and subsequently changes the surface soil environment [33], which may have important effects on plant establishment and species recruitment.
During the early successional stage, these old fields are generally dominated by pioneer species (normally volunteer weeds) that deposit a large number of seeds in the soil. The vegetation developing from these seeds increases litter cover and greatly influences further community assembly [34].

In the Songnen Plain of northeast China, large areas of cropland were abandoned and became old fields because of aridity and soil alkalization. The restoration of these old field ecosystems has important ecological and economic significance for this region. Soil moisture and alkalinity are two important factors regulating old field succession, and we expect that both can be improved by increasing litter amount. We designed this experiment to examine how varying litter cover affects soil properties, emergence features (including emergence time and rate), and the early growth of seedlings of four dominant weed species from the early successional stage and two perennial species from the late successional stage of this old field ecosystem. We expect that our study will further the understanding of the relationship between litter cover and plant establishment and also support decision making for restoring this old field ecosystem toward grassland.
## 2. Materials and Methods
### 2.1. Study Site
This study was conducted at the Grassland Farming Research Station (E123°31′, N44°33′; elevation 145 m) of the Northeast Institute of Geography and Agroecology, Chinese Academy of Sciences, located in the Songnen Plain of northeast China. The study site has a semiarid continental climate. Mean annual temperature is 4.9°C; annual precipitation is approximately 410 mm, with 70% falling from June to September. The soil type is meadow saline-alkali soil, with a high basic salt content. The typical vegetation at this study site is Leymus chinensis meadow. A large area of L. chinensis meadow was converted into cropland to meet the demand for grain during the last few decades. However, because of the decline in soil fertility and crop yield after continuous tillage, some of the reclaimed croplands were abandoned as old fields and are expected to be restored to grassland. The current study was conducted in 2011 on a cropland that had been abandoned two years earlier. At the start of the experiment, the site was dominated by a range of annual weed species. The soil bulk density, soil organic carbon, and total nitrogen concentration at 0–20 cm depth were 1.47 ± 0.04 g cm−3, 10.45 ± 0.58 g kg−1, and 0.98 ± 0.09 g kg−1, respectively.
### 2.2. Study Species
We selected four dominant weed species in this old field, Abutilon theophrasti (Malvaceae), Chenopodium glaucum (Chenopodiaceae), Sonchus brachyotus (Compositae), and Chloris virgata (Gramineae), which together accounted for more than 90% of the aboveground biomass in the community. Among these four species, A. theophrasti, C. glaucum, and S. brachyotus were typical cropland weeds, while C. virgata was common in both cropland and natural grassland. We also selected two perennial species from natural grassland, Puccinellia tenuiflora (Gramineae) and Lespedeza davurica (Leguminosae), which are potential recruits in the later successional stage of the old field (Table 1).
Table 1: Habitat, life form, start time of seed ripening and falling, mass per seed (mg), and germinating capacity of each species included in the study.

| Habitat | Life form | Species | Start of seed ripening and falling | Mass per seed (mg) | Germinating capacity (%) |
|---|---|---|---|---|---|
| Grassland | Perennial | P. tenuiflora | Early July | 0.562 ± 0.052 | 82 |
| Grassland | Perennial | L. davurica | Late August | 2.053 ± 0.044 | 86 |
| Old field | Perennial | S. brachyotus | Mid-August | 1.138 ± 0.017 | 94 |
| Old field | Annual | A. theophrasti | Mid-August | 8.799 ± 0.125 | 92 |
| Old field | Annual | C. glaucum | Mid-August | 0.475 ± 0.028 | 92 |
| Grassland, old field | Annual | C. virgata | Early July | 0.337 ± 0.012 | 84 |

The seeds of each species were collected in autumn 2010 from 10 different populations, with at least 10 individuals per population. Seeds were stored in darkness at room temperature (20°C) until sowing on 26 April 2011. An initial germinating-capacity test on additional seed batches was conducted by examining the germination rate under optimum light, temperature, and water conditions.
### 2.3. Experimental Design
The experiment was a completely randomised block design with 4 replicate blocks; 5 litter treatments (0, 200, 400, 600, and 800 g m−2) were randomly assigned within each block. In total, there were 20 plots; each plot was 4 m × 4 m with 0.5 m buffers between plots. Within each plot, two microplots (1 m × 1 m) were placed side by side at the centre of the plot, with 0.5 m between the two microplots.

In mid-April 2011, all aboveground plant material was removed. The soil at 0–20 cm depth in all microplots was collected and steam-sterilised prior to the experiment to kill any plant seeds potentially present in the substrate. The steam-sterilised soil was then returned to the original microplots.

On 26 April 2011, one randomly selected microplot in each plot was sown with 50 seeds of each species; the other microplot received no seed and served as a control, so any germination in the control microplot came from external seeds. Prior to sowing, L. davurica seeds were soaked in 98% H2SO4 for 0.5 h to break the hard seed coat. The seeds were spread on the soil surface and covered by a thin layer of soil. Immediately after sowing, the whole plot was covered with litter at the amount designated for its treatment. The litter was L. chinensis hay harvested in the adjacent meadow in autumn 2010. The designed litter addition levels represented natural litter production from low to high productivity in the old field ecosystem.
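As a concrete illustration of this layout, the following minimal Python sketch (not part of the original study, which laid out plots in the field; the function name and random seed are our own assumptions) generates a randomised complete block assignment of the five litter treatments within each of the four blocks.

```python
# Minimal sketch of the randomised complete block design described above:
# 4 blocks, each containing the 5 litter treatments (g m^-2) in random order.
import random

TREATMENTS = [0, 200, 400, 600, 800]  # litter addition levels, g m^-2
N_BLOCKS = 4

def assign_plots(seed: int = 42) -> dict[int, list[int]]:
    """Randomly order the five litter treatments within each block."""
    rng = random.Random(seed)
    layout = {}
    for block in range(1, N_BLOCKS + 1):
        order = TREATMENTS[:]  # copy so each block is shuffled independently
        rng.shuffle(order)
        layout[block] = order
    return layout

if __name__ == "__main__":
    for block, order in assign_plots().items():
        print(f"Block {block}: {order}")  # one 4 m x 4 m plot per treatment
```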
### 2.4. Measurements
#### 2.4.1. Soil Properties
When the soil was returned, a soil water probe and a soil temperature probe, connected to an FDS-100 Automatic Temperature and Moisture Recorder (Handan Electronic Technology Company, Handan, China), were installed at 5 cm depth in each microplot. Soil temperature and moisture at 5 cm were recorded automatically every hour during the experimental period from 27 April to 7 June 2011. To reflect soil salinity, soil electrical conductivity (EC) was measured every 6 or 7 days at 0–10 cm in each sown microplot using a Field Operated Meter (Easy Test, Poland).
#### 2.4.2. Seedling Emergence
Emerged seedlings were checked and marked every day in all microplots from sowing until 7 June 2011; in fact, no new germination was recorded after 26 May. On each count day, all newly emerged seedlings of the 6 study species were marked with a plastic label bearing the species name and emergence date. Seedling emergence was defined as a seedling successfully penetrating the litter cover.
#### 2.4.3. Early Seedling Growth
On 8 June 2011, the marked seedlings of each species were counted, and seedling mortality was recorded in each sown plot. For each species, five seedlings were randomly selected from each microplot according to emergence sequence to measure height and basal diameter. Each seedling was divided into stem and leaf (with petiole) to determine dry weight after oven-drying at 70°C for 48 h.
### 2.5. Data Analysis
Seedling emergence time was the number of days from seed sowing to seedling emergence [35]. Seedling emergence rate was represented by the Emergence Rate Index (ERI) described by Erbach [36], calculated as follows:
$$\mathrm{ERI} = \sum_{\mathrm{FD}}^{\mathrm{LD}} \frac{P_{n} - P_{(n-1)}}{N}, \qquad (1)$$
where N is the number of days since planting, P_n is the percentage of plants emerged on day n, P_(n−1) is the percentage of plants emerged on day (n − 1), LD is the last day when emergence was complete, and FD is the first day counting began. In this study, FD was set at 1.

The emergence percentage was calculated as the ratio of the number of emerged seedlings to the number of germinable seeds. The seedling survival rate was calculated as the ratio of remaining marked seedlings to all marked seedlings. The establishment success rate for each species was calculated as the emergence percentage multiplied by the seedling survival rate [35].

Repeated-measures ANOVA was used to examine the effects of litter cover on soil electrical conductivity, with time as a fixed factor. Simple linear regression was used to examine the relationship between soil EC and soil moisture. Two-way ANOVA was applied to determine the main and interactive effects of litter cover and species on seedling emergence, establishment, and growth. Mean comparisons among treatments were conducted using Duncan's t-test after all data were confirmed to be normally distributed. Significance for all statistical tests was evaluated at P = 0.05. All data analyses were conducted with SPSS 16.0 (Chicago, IL, USA).
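The authors ran these tests in SPSS 16.0; as an illustrative alternative only, the sketch below shows how the described two-way ANOVA and the EC–moisture regression could be reproduced in Python with pandas and statsmodels. The file name and column names (emergence_pct, litter, species, soil_ec, soil_moisture) are assumptions, not the study's actual data layout.

```python
# Illustrative re-implementation of the analyses described in Section 2.5,
# assuming a tidy table with one row per microplot x species observation.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("seedling_data.csv")  # hypothetical data file

# Two-way ANOVA: main and interactive effects of litter cover and species
# on an emergence/establishment/growth variable (cf. Table 2).
model = smf.ols("emergence_pct ~ C(litter) * C(species)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Simple linear regression of soil EC on soil moisture (cf. Figure 2(c)).
reg = smf.ols("soil_ec ~ soil_moisture", data=df).fit()
print(reg.params, reg.pvalues)
```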
## 3. Results
### 3.1. Soil Properties
Mean daily soil temperature and moisture (at 5 cm depth) varied greatly over the growing season (Figures 1(a) and 1(c)). Mean daily soil temperature increased over time in all treatments, with a more fluctuating pattern under low litter cover than under high litter cover. Soil temperature gradually decreased as litter cover increased; mean soil temperature under 600 and 800 g m−2 litter cover was significantly lower than under the 0, 200, and 400 g m−2 treatments (Figures 1(a) and 1(b)). Soil moisture generally increased with litter cover, and the 800 g m−2 treatment had the highest soil moisture (Figures 1(c) and 1(d)). Soil EC showed a strong temporal pattern (P < 0.001) and was significantly influenced by litter cover (P < 0.001; Figure 2(a)). Soil EC decreased with increasing litter cover over the experimental period (Figure 2(b)). Regression analysis showed a significantly negative correlation between soil EC and soil moisture (Figure 2(c)).

Figure 1: Soil temperature ((a) and (b)) and moisture ((c) and (d)) under different litter cover treatments. The lines represent the temporal dynamics of soil temperature and moisture; the bars (mean + SE) represent mean soil temperature and moisture from 27 April 2011 to 7 June 2011; different letters indicate significant differences between treatments at P < 0.05.

Figure 2: The temporal dynamics (a) and mean values (b) of soil electrical conductivity (EC), and the relationship between soil EC and soil moisture (c). Bars (mean ± SE) represent means from 27 April 2011 to 7 June 2011; different letters indicate significant differences between treatments at P < 0.05.
### 3.2. Seedling Emergence Time, Rate, and Percentage
Seedling emergence time, emergence rate, and emergence percentage varied among litter cover treatments and species, and there was a significant litter cover × species interaction for emergence rate (Table 2; Figure 3). C. glaucum germinated earliest and fastest, followed by P. tenuiflora and C. virgata, while L. davurica had the latest emergence time and slowest emergence rate (Figures 3(a) and 3(b)). Increasing litter cover tended to delay seedling emergence and reduce emergence rate, although C. glaucum showed a slightly higher emergence rate under 400 and 600 g m−2 than under 0 and 200 g m−2 litter cover (Figures 3(a) and 3(b)). The emergence time of L. davurica seedlings was delayed by two days from 0 to 800 g m−2 litter cover (Figure 3(a)), and the emergence rate index of C. virgata seedlings decreased by 28% over the same range (Figure 3(b)). Among species, C. virgata had the highest emergence percentage, followed by P. tenuiflora, and C. glaucum had the lowest emergence percentage under all litter covers (Figure 3(c)). The emergence percentage first increased and then decreased as litter cover increased, with 400 g m−2 litter cover generally having the most positive effect (Figure 3(c)). When litter cover was below 600 g m−2, the presence of litter increased the emergence percentage of all species, in particular P. tenuiflora and C. virgata, whose emergence percentages were still higher under 800 g m−2 litter cover than under no litter (Figure 3(c)).
Table 2: The F and P values of the two-way ANOVA for the effects of litter cover, species, and their interaction on seedling emergence, establishment, and growth.

| Variable | Litter cover F | P | Species F | P | Litter cover × species F | P |
|---|---|---|---|---|---|---|
| Emergence time (day) | 26.25 | <0.001 | 885.28 | <0.001 | 1.34 | 0.117 |
| Emergence rate index | 7.11 | <0.001 | 325.33 | <0.001 | 3.08 | <0.001 |
| Emergence percentage (%) | 15.48 | <0.001 | 77.44 | <0.001 | 0.68 | 0.840 |
| Seedling survival rate (%) | 43.04 | <0.001 | 6.37 | <0.001 | 1.63 | 0.062 |
| Establishment success rate (%) | 19.31 | <0.001 | 79.59 | <0.001 | 0.62 | 0.892 |
| Basal diameter of seedling (mm) | 9.49 | <0.001 | 487.26 | <0.001 | 1.67 | 0.034 |
| Seedling height (cm) | 103.94 | <0.001 | 398.22 | <0.001 | 4.41 | <0.001 |
| Seedling dry weight (mg) | 6.23 | <0.001 | 220.66 | <0.001 | 6.10 | <0.001 |
| Stem leaf ratio | 176.85 | <0.001 | 48.12 | <0.001 | 5.94 | <0.001 |

Figure 3: The emergence time (a), emergence rate (b), and emergence percentage (c) of seedlings for each species under different litter covers. Values are mean + SE; different letters indicate significant differences between treatments at P < 0.05.
### 3.3. Seedling Survival Rate and Establishment Success Rate
Two-way ANOVA indicated that litter cover and species had significant effects on the seedling survival rate and establishment success rate (Table 2). Litter cover enhanced the seedling survival rate, which increased greatly as litter cover increased (Figure 4(a)). P. tenuiflora and C. virgata had relatively higher seedling survival rates than the other species under all litter covers, while A. theophrasti had the lowest seedling survival rate under 0 litter cover. The establishment success rate showed a trend similar to that of the emergence percentage (Figure 4(b)).

Figure 4: Seedling survival rate (a) and establishment success rate (b) for each species under different litter covers. Values are mean + SE; different letters indicate significant differences between treatments at P < 0.05.
### 3.4. Seedling Growth Characteristics
There were significant litter cover × species interactions for seedling growth characteristics (Table 2). Increasing litter cover decreased seedling basal diameter but increased seedling height (Figures 5(a) and 5(b)). Among all species, A. theophrasti and S. brachyotus had the highest seedling dry weight, followed by C. glaucum and C. virgata, while L. davurica had the lowest under all litter covers. High litter cover tended to increase seedling dry weight in most species, except S. brachyotus, whose seedling dry weight decreased significantly from 400 to 800 g m−2 litter cover, and L. davurica, which showed no change with litter cover (Figure 5(c)). The stem leaf ratio of all species generally increased with litter cover and was significantly higher under 800 g m−2 than under the other treatments (Figure 5(d)).

Figure 5: Seedling basal diameter (a), height (b), dry weight (c), and stem leaf ratio (d) for each species under different litter covers. Values are mean + SE; different letters indicate significant differences between treatments at P < 0.05.
## 4. Discussion
### 4.1. The Influence of Litter on Soil Environment
Litter intercepts incident light and rain and changes the surface structure, thereby affecting the transfer of heat and water between the soil and the atmosphere [21], which can greatly influence soil temperature and moisture. Our results showed that increasing litter cover reduced soil temperature and its variability over time, consistent with previous research [21, 24]. As Facelli and Pickett explained [37], litter intercepts incoming solar radiation and outgoing longwave radiation, forming an insulating layer that shields the soil from direct solar heating and from heat exchange with the atmosphere. Litter cover can also improve soil moisture status, as evidenced by the increase in soil moisture with litter cover in the current study. Murphy et al. reported that increased litter cover improves water infiltration and reduces evaporation, which helps maintain soil moisture [23].

Soil drought and alkalization are two primary factors limiting plant establishment and growth. At the study site, surface soil drought and alkalization generally occur simultaneously, as soluble salts rise from deep soil layers with evaporating water. Our results showed that surface soil EC was negatively correlated with soil moisture and that increasing litter cover reduced surface soil EC, indicating that high litter cover can increase water infiltration and decrease evaporation, thereby reducing surface salinity by keeping salts in deeper soil layers. The changes in soil temperature, moisture, and salinity due to litter cover may facilitate plant emergence and establishment [38]. On the other hand, litter cover can reduce the quantity and quality of light (e.g., the red : far-red ratio) experienced by seeds and seedlings [37] or form a physical obstruction to seedling growth [25], which may act negatively on plant emergence and establishment [17, 21].
### 4.2. The Effects of Litter on Emergence and Early Growth of Seedling
The balance between facilitative and inhibitory effects of litter on seedling emergence and growth depends on the amount of litter cover [39, 40]. Moderate litter cover may support seedling emergence and growth by improving soil microclimate conditions [24, 41, 42]. However, the facilitative effects are reduced when the amount of litter is too high [17, 18], because a thick litter layer reduces light quantity and quality, causing deep shade or darkness [24, 41], and may create an impenetrable physical barrier for seedlings [37]. Loydi et al. found that litter cover had positive effects on emergence in grassland ecosystems when litter was below 500 g m−2 and that seedling survival and biomass increased under litter cover below 250 g m−2 [18]. Our results showed that the emergence percentage and establishment success rate of all species increased when litter cover was below 600 g m−2, and no significant differences were found between the 800 g m−2 and 0 litter treatments, in particular for the two grass species (Figures 3(c) and 4(b)). Nevertheless, the emergence percentage and establishment success rate of P. tenuiflora and C. virgata were still higher under 800 g m−2 litter cover than under no litter. Moreover, in our study, even under high litter cover, the seedling survival rate of all species was higher than in the 0 litter treatment, and increasing litter cover tended to increase seedling biomass in all species except S. brachyotus (Figure 5(c)). In addition, more litter greatly increased soil moisture and reduced soil salinity. These results indicate that the facilitative effects of increasing litter cover outweigh its inhibitory effects at our study site.

The emergence time and rate may determine a plant's place in the sequence of resource utilization, which can influence its fate in the community, particularly when resources are limited [5]. Our study indicated that high litter cover delayed the emergence time of all species and reduced the emergence rate of most species except C. glaucum (Figure 3(b)), as seedlings needed more time to penetrate a thick litter layer.

Changes in seedling morphology reflect plant adaptability to environmental conditions [43]. Our results showed that seedling basal diameter decreased, while seedling height and stem leaf ratio increased, as litter cover increased. This is because increasing litter cover enhances the obstruction to seedling emergence [25] and decreases near-surface light availability [37], causing seedlings to invest more energy in the stem for upward growth to penetrate the litter and intercept light [44], consequently inducing more biomass allocation to the stem. A reduced basal diameter helps seedlings pass through small gaps under dense litter, which may be an effective adaptive strategy for plants subjected to thick litter cover.
### 4.3. The Responses of Emergence and Early Growth for Different Species to Litter Cover Change
Different species respond to litter cover differently in terms of seedling emergence rate and early growth. Seed size has been considered a good predictor of the effect of litter [28, 45, 46]. Loydi et al. showed that litter had stronger negative effects on the emergence, survival, and biomass of seedlings from smaller seeds (<1 mg) but slight positive effects on species with bigger seeds (>1 mg) [18]. Several mechanisms have been proposed to explain these differences, including the light requirement of small-seeded species during germination [47], the reserve effect [48], the seedling size effect [49, 50], and the metabolic effect [18] related to seed size. In our study, the effect of litter on emergence rate and seedling growth varied between species, but no orderly species responses related to seed size were found. Likewise, we did not observe any trend in seedling emergence rate or growth related to life form or source habitat. The contrast with previous studies may arise because the interrelations between environment and plants that control the effect of litter are more complicated in this old field ecosystem, or because too few species were studied to reveal a general rule.

Although no general relationships were found between seedling emergence, growth, and species properties, our results showed that P. tenuiflora and C. virgata had relatively higher emergence percentages, seedling survival rates, and establishment success rates than the other species (Figures 3(c) and 4). The emergence and seedling biomass of these two species responded more positively to high litter cover than those of most other species, which may be attributed to their subuliform morphology in the early seedling stage. Although the typical weed species A. theophrasti, C. glaucum, and S. brachyotus had high seedling biomass (Figure 5(c)), their emergence percentages and establishment success rates were relatively lower than those of all other species (Figures 3(c) and 4(b)) and were strongly influenced by high litter cover. The emergence rate and establishment success rate of L. davurica were higher than those of A. theophrasti and C. glaucum; however, its seedling performance was the worst among all species (Figure 5). These results indicate that C. virgata will dominate further in this naturally successional old field ecosystem as litter accumulation increases. First, C. virgata will maintain higher emergence and establishment capacity under high litter cover than the other three weed species. Second, C. virgata can produce smaller but more numerous seeds. Lastly, the seeds of C. virgata mature and fall in early July, before new litter is formed (Table 1). As a result, C. virgata seeds can easily pass through the litter layer and enter the soil seed bank [11, 28]. In contrast, the seeds of the other three weed species are bigger than those of C. virgata and commonly mature and fall after mid-August, when more litter has fallen, which may greatly reduce the opportunity for seeds to contact the soil and, consequently, to germinate and establish.
## 5. Conclusions
The increase of litter cover can increase soil moisture and decrease surface soil salinity. Increased litter cover can delay seedling emergence time and reduce emergence rate. Litter cover below 600 g m−2 can promote the seedling emergence and establishment of all studied species owing to improved soil moisture conditions. Different species respond differently to increased litter cover. P. tenuiflora and C. virgata will acquire more emergence benefits under high litter cover. Therefore, this old field ecosystem is expected to become a C. virgata-dominated annual grassland under natural succession. Given its emergence performance and seed features similar to those of C. virgata, P. tenuiflora has the potential to invade and establish in this old field ecosystem and accelerate its succession toward mature grassland; however, seed dispersal limitation may impede the invasion of P. tenuiflora, so artificial seed addition may be necessary.
---
*Source: 101860-2014-07-01.xml*
# Combined Hepatocellular Carcinoma and Fibrolamellar Carcinoma Presenting as Two Adjacent Separate Lesions in a Young Boy: First Case Report from Asia
**Authors:** Pradyumn Singh; Banumathi Ramakrishna
**Journal:** Case Reports in Hepatology
(2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101862
---
## Abstract
We report a rare case of combined hepatocellular carcinoma and fibrolamellar carcinoma arising in a noncirrhotic liver, in a 14-year-old boy who underwent right hepatectomy. We discuss the clinicopathological and immunohistochemical features and the clinical outcome in this unusual tumor.
---
## Body
## 1. Introduction
Hepatocellular carcinoma of the adult type (HCC) and fibrolamellar carcinoma (FLC) are two distinct entities, each unique in its clinical, histological, and biological aspects [1]. There have been occasional reports of the combined occurrence of FLC and HCC [2–5]. However, as the criteria for diagnosing combined occurrence are not yet established in the literature, some reported cases may not represent a true combined occurrence of FLC-HCC but rather an admixture of FLC-like areas within the usual HCC, thus confusing the literature. Here, we report a rare case of separate FLC and HCC presenting synchronously in a 14-year-old boy, with well-characterized morphology and treatment outcome, and discuss the distinctive features that would help in the correct identification of such unusual lesions.
## 2. Case Report
A 14-year-old boy presented with history of right sided abdominal pain for 2 months. On examination there was an irregular, firm, swelling palpable in the liver extending up to 7 cm below the right costal margin. Computed tomography (CT) of the abdomen showed a large 9.6 × 10.2 × 10.5 cm well-defined, heterogeneously enhancing mass in the right lobe of liver in segments 6 and 5 and another lobulated 9.8 × 6.7 cm mass with homogenous enhancement in segments 8 and 7 suggestive of malignant liver masses with intrahepatic metastases. Investigations showed no elevation of serum alphafetoprotein (AFP), beta human chorionic gonadotrophin, or carcinoembryonic antigen. Hepatitis B surface antigen, hepatitis C antibody, and human immunodeficiency virus antibody were negative. Liver function tests at the time of admission showed total serum bilirubin level of 0.4 mg/dL, direct 0.2 mg/dL, total protein 8.2 g/dL, albumin 3.8 g/dL, aspartate aminotransferase (AST) 289 U/L, alanine aminotransferase (ALT) 99 U/L and alkaline phosphatase (ALP) 143 U/L, and INR of 0.96. There was no history of parasitic infestation or known exposure to environmental toxins. A wedge biopsy from the liver mass revealed a moderately differentiated hepatocellular carcinoma following which the patient underwent right hepatectomy.Histopathological findings: grossly, the right lobe of the liver weighed 1000 g. The external surface was focally nodular and on sectioning revealed a circumscribed tumor measuring 10 × 8 × 12 cm with solid brown cut surface almost reaching the capsular surface. Adjoining this tumor was another nodular lesion measuring 8 × 9 × 9 cm with lobulated grey white cut surface (Figure1). Also seen were few smaller nodules measuring 2–4 cm in maximum dimension with brown to grey-brown cut surface. The adjacent liver parenchyma grossly appeared normal.Figure 1
Figure 1: A case of combined HCC (brown cut surface) and FLC (lobulated grey-white cut surface); right hepatectomy specimen.

Microscopic examination of the larger brown tumor and the small nodules showed large polygonal cells arranged in pseudoglandular and trabecular patterns (Figure 2), separated by sinusoidal vascular channels. The tumor cells had abundant eosinophilic to amphophilic cytoplasm and pleomorphic vesicular nuclei displaying prominent nucleoli, increased mitotic activity (8-9/10 hpf) including atypical forms, and nuclear pseudoinclusions. A rim of fibrous tissue was noted around the large tumor, separating it from the adjacent liver in areas. Sections from the grey-white mass showed a tumor with a fibrolamellar pattern, composed of trabeculae of large polygonal cells separated by lamellar bands of hyalinized collagen (Figure 3). These tumor cells had abundant, brightly eosinophilic, granular cytoplasm with moderately pleomorphic, hyperchromatic nuclei, some with prominent nucleoli. Eosinophilic cytoplasmic globules were present in some cells. Mitotic activity was inconspicuous. Occasional lymphovascular tumor emboli were seen in the adjacent fibrous tissue. The adjacent liver parenchyma showed preserved architecture, focal mild steatosis, and no evidence of fibrosis or cirrhosis.

Figure 2: HCC showing tumor cells arranged in pseudoglandular and trabecular pattern, hematoxylin-eosin 200x.

Figure 3: Fibrolamellar carcinoma showing tumor cells with abundant eosinophilic cytoplasm separated by lamellar bands of hyalinised collagen, hematoxylin-eosin 100x.

Immunohistochemistry revealed tumor cells of HCC with diffuse positivity for Hep Par 1 (Figure 4) and focal positivity for CK7 (Figure 5), whereas the tumor cells of fibrolamellar type showed diffuse strong positivity for CK7 (Figure 6). Ep-CAM was positive in both HCC and FLC, but the percentage of positive cells and the intensity of staining were greater in the HCC component. Immunostaining for beta catenin showed diffuse membranous but no nuclear staining in tumor cells of both types. A final diagnosis of multinodular combined adult-type moderately differentiated HCC and FLC was made.

Figure 4: Tumor cells showing diffuse granular cytoplasmic positivity for Hep Par 1 in HCC, immunohistochemistry 200x.

Figure 5: Tumor cells of HCC showing focal positivity for CK7, immunohistochemistry 100x.

Figure 6: Tumor cells of FLC showing diffuse positivity for CK7, immunohistochemistry 100x.

The patient was given chemotherapy comprising cisplatin, carboplatin, and adriamycin, which he tolerated well; he completed the course and was kept on followup. Four months after surgery, serum total bilirubin was 0.5 mg/dL, serum protein 7.5 g/dL, serum albumin 4.4 g/dL, AST 22 U/L, ALT 11 U/L, and ALP 290 U/L. Two years after surgery, new lesions were detected in the liver on ultrasonography. A subsequent CT scan showed a fairly well-defined heterogeneous mass in the porta, between the hepatic artery and the inferior vena cava, measuring approximately 9.7 × 4.6 × 4.6 cm, with a lobulated contour and calcification on its inferior aspect, likely representing a nodal mass. Considering disease recurrence and inoperability, the patient was started on sorafenib in March 2012. At the time of writing, he remains alive and is on palliative care.
## 3. Discussion
FLC is an uncommon tumor which occurs in adolescents and young adults and is rare in the Asian population [1]. It is quite distinct from the usual adult-type HCC, as it occurs without predisposing chronic liver disease and differs significantly in morphology and immunohistochemical staining patterns from typical HCC [1, 6, 7]. FLC has therefore been classified as a separate tumor in the latest WHO classification and hence should be analyzed separately from typical HCC [1], whereas earlier it was regarded as a unique histological pattern of HCC.

There are occasional reports of FLC coexistent with or admixed with HCC [2, 4, 5]. Transformation of FLC to HCC in recurrent lesions [8, 9] has also been reported in adults, in the literature from other countries. We report the first case of synchronous FLC and HCC presenting as two adjacent, but separate, well-defined gross and microscopic lesions, in a 14-year-old boy from India.

Typical HCC with fibrous stroma can resemble FLC, but such cases should not be mistaken for mixed HCC and FLC. Immunohistochemistry is useful in resolving these cases. The tumor cells of FLC show strong positivity for CK7, which is focal in HCC, as seen in our case. A recent study has shown strong positivity for CD68 in FLC, with 96% sensitivity, 80% specificity, and a 98% negative predictive value [10]. These authors suggest that, in addition to the typical histological features, all cases of FLC should be immunostained for CK7 and CD68, and in the absence of CK7 and/or CD68 positivity the diagnosis of FLC should be questioned [10, 11]. Positive staining for Ep-CAM has been shown in FLC [7], which was also seen in our case.

Mutations in the beta catenin gene have been reported in all categories of hepatocellular neoplasm but not in FLC, where there was no nuclear staining for beta catenin by immunohistochemistry [11]. We did not find nuclear staining for beta catenin in the tumor cells of either HCC or FLC in the present case. Our previous study [12] has also shown that involvement of the beta-catenin pathway appears to be infrequent in hepatitis B related HCC in India.

Most of the earlier case reports [2–4, 8, 9] did not perform these IHC stains and thus cannot be included for definitive comparison in retrospect. Hyaline bodies (eosinophilic cytoplasmic inclusions that tend to be smaller than pale bodies) are also present in nearly half of FLC cases [13]. But pale bodies and hyaline bodies are not unique to FLC, as they can also be found in ordinary HCC, so a diagnosis of FLC should not be made on the presence of these inclusions alone [11]. In our case, on immunostaining, the tumor cells of the HCC type showed only focal positivity for CK7, whereas those of the fibrolamellar type showed diffuse strong positivity. CD68 staining was not carried out, since the literature on it appeared only much later.

The presence of CK7 and/or CK19 has been suggested to represent evidence of hepatic progenitor cell origin [14]. The cell of origin of FLC is not certain, but the combined occurrence of FLC-HCC may suggest derivation from hepatic progenitor cells, with transdifferentiation in a subset to the typical HCC phenotype and loss of CK7 positivity.

AFP levels are typically normal in FLC [7]. Raised serum AFP levels, older age at diagnosis, and the presence of chronic liver disease should therefore deter one from making a diagnosis of FLC-HCC, and such cases are best regarded as HCC for management.
We found only one case of combined FLC-HCC in the indexed English literature, from Europe [5], that was comparable to our case. Experience with combined FLC-HCC remains very limited in the literature as regards pathogenesis, morphology, disease course, and survival outcome. Ours is perhaps the first clinicopathologically well-characterized case from Asia with treatment and follow-up details. Pure FLC is generally accepted to have a better overall prognosis than HCC [11, 13]. Clinically, our patient responded well to treatment by partial hepatectomy and chemotherapy and has been doing well on followup. Two years after the initial diagnosis and treatment, he developed an inoperable recurrence in the liver at the porta. He has therefore been put on sorafenib treatment and is undergoing palliative care.

The occurrence of two morphologically distinct types of liver tumor in the same patient raises the possibility that they arose from a prehepatocytic cancer stem cell. This may be worth investigating further in the future.
---
*Source: 101862-2013-03-03.xml* | 2013 |
# Optical Spectroscopy for Noninvasive Monitoring of Stem Cell Differentiation
**Authors:** Andrew Downes; Rabah Mouras; Alistair Elfick
**Journal:** Journal of Biomedicine and Biotechnology
(2010)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2010/101864
---
## Abstract
There is a requirement for a noninvasive technique to monitor stem cell differentiation. Several candidates based on optical spectroscopy are discussed in this review: Fourier transform infrared (FTIR) spectroscopy, Raman spectroscopy, and coherent anti-Stokes Raman scattering (CARS) microscopy. These techniques are briefly described, and the ability of each to distinguish undifferentiated from differentiated cells is discussed. FTIR spectroscopy has demonstrated its ability to distinguish between stem cells and their derivatives. Raman spectroscopy shows a clear reduction in DNA and RNA concentrations during embryonic stem cell differentiation (agreeing with the well-known reduction in the nucleus-to-cytoplasm ratio) and also shows clear increases in mineral content during differentiation of mesenchymal stem cells. CARS microscopy can map these DNA, RNA, and mineral concentrations at high speed, and multiplex CARS spectroscopy/microscopy is highlighted as the technique with the most promise for future applications.
---
## Body
## 1. Introduction
### 1.1. Challenges in Stem Cell Science
In current stem cell biology and regenerative medicine, two of the greatest challenges [1, 2] are to control the differentiation of stem cells and to ensure the purity of isolated cells. Both can be addressed by careful monitoring and characterization of cells. The process of stem cell differentiation is at present monitored by biological assays, namely, immunocytochemistry [3, 4]. However, this process is time consuming as well as requiring biomarkers or labels. There is a clear need for a truly noninvasive technique which can monitor the degree of differentiation rapidly. Such a technique will most likely involve a form of optical imaging or spectroscopy but must not involve the addition of any kind of biomarker. Biomarkers are used to sort embryonic stem cells, in conjunction with fluorescent [5, 6] or magnetic [7] labels. These techniques are lengthy and time-consuming, but careful monitoring of stem cell differentiation is essential: in clinical applications, a population of fully differentiated cells is often implanted, but teratomas can result if any stem cells remain undifferentiated [8].

There are a number of issues with the use of biomarkers for the characterization and sorting of stem cells and their derivatives. Firstly, only a limited number of biomarkers exists, each being cell-specific. Many cell types lack biomarkers, for example, cardiomyocytes [9], gastrointestinal stem cells [10], and corneal stem cells [11]. Secondly, the use of biomarkers raises issues with both biological researchers and clinicians, who would strongly prefer a label-free technique. Finally, these biomarkers cannot easily be translated; for example, embryonic stem cell biomarkers are not always applicable to adult stem cells.

There are further issues with the use of fluorescent and magnetic markers. Fluorescent biomarkers [5, 6] have been employed in cell sorting and characterization, but fluorescent techniques have a number of drawbacks. Firstly, photobleaching means that signal levels drop over time, so long-term studies of differentiation are prohibited. Secondly, the process of photobleaching produces free-radical singlet oxygen species which damage live cells. Finally, the use of biomarkers modifies the cells' surface chemistry, and stem cells are highly sensitive to small changes in their surface chemistry. Magnetic beads cannot easily be visualised in microscopy and must all be removed from the cells; their large mass could impose mechanical stresses that affect the cells' behaviour.

There is thus a requirement from the stem cell community for a rapid, easy, sensitive, nondestructive, noninvasive, label-free technique which can be applied both at the single cell level and to monitoring or sorting large populations of cells. This review will concentrate on label-free optical spectroscopy techniques, which are noninvasive and have sufficiently high resolution to be applied at the single cell level.

White light imaging, either phase contrast or differential interference contrast (DIC), can reveal the approximate level of differentiation in situ to those who are expert in stem cell culture. However, it is only really suitable for monolayers of cells. As white light imaging is usually only qualitative, it would benefit from being replaced by a more advanced optical technique capable of quantitative measurement on individual cells. Such a technique should therefore be capable of high-speed characterization, to enable large numbers of cells to be studied in monolayer cultures, embryos, and scaffolds.
### 1.2. Infrared Absorption Spectroscopy
The first optical technique suitable for noninvasive characterization of cells is infrared absorption spectroscopy. Infrared light is absorbed by the wide variety of chemical bonds within molecules, which all have well-defined vibrational frequencies. Hence, an absorption spectrum of a cell should give a characteristic snapshot of its chemistry, and an undifferentiated cell's spectrum could differ from that of a differentiated cell enough to characterize them. Simple infrared spectrometers use a broadband light source containing a wide range of wavelengths, typically passed through a cuvette of solution and then through a dispersing spectrometer onto a single detector. This technique is slow, as the spectrum is built up from around 1000 sequential data points. In order to collect a full spectrum without losing the vast majority of the signal, Fourier transform infrared (FTIR) spectroscopy [12] uses interferometry together with a Fourier transform of the signal from the time domain to the frequency domain. A typical FTIR setup is illustrated in Figure 1(a); it requires scanning a mirror in one arm of the interferometer over a distance of a few millimetres. A full spectrum is typically acquired in around a second on live cells [13]. Synchrotron sources have promised vastly improved spectral acquisition times—up to 1000 times faster [14] than benchtop FTIR—but it is not clear whether this is applicable to live cells, as heating by absorption may prevent any increase in speed.

Figure 1: Schematic experimental arrangements for (a) Fourier transform infrared (FTIR) spectroscopy, (b) Raman spectroscopy, and (c) coherent anti-Stokes Raman scattering (CARS) microscopy and spectroscopy.
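To make the interferogram-to-spectrum step concrete, here is a minimal Python sketch that simulates a two-band interferogram as a sum of cosines and recovers the spectrum with a fast Fourier transform. All band positions, amplitudes, and sampling values are illustrative assumptions, not instrument parameters.

```python
import numpy as np

# Minimal sketch of the FTIR principle: an interferogram recorded as a
# function of optical path difference is Fourier-transformed into a spectrum.
n_points = 4096
dx = 0.5e-4                            # path-difference step (cm)
x = np.arange(n_points) * dx           # optical path difference axis (cm)

# Two absorption-like bands: amide I (~1650 cm^-1) and phosphate (~1080 cm^-1).
# Each wavenumber contributes a cosine to the interferogram.
bands = {1650.0: 1.0, 1080.0: 0.6}     # wavenumber (cm^-1): amplitude
interferogram = sum(a * np.cos(2 * np.pi * nu * x) for nu, a in bands.items())

# Apodize, then FFT; the frequency axis comes out directly in cm^-1.
window = np.hanning(n_points)
spectrum = np.abs(np.fft.rfft(interferogram * window))
wavenumbers = np.fft.rfftfreq(n_points, d=dx)   # cycles per cm = cm^-1

print(f"strongest recovered band: {wavenumbers[np.argmax(spectrum)]:.0f} cm^-1")
# -> ~1650 cm^-1, the stronger of the two simulated bands
```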
The lateral resolution of optical techniques is normally approximated by 0.6λ/N.A., where λ is the wavelength of the illuminating light and N.A. is the numerical aperture of illumination. Although an N.A. of 1.4 is achievable with objective lenses using visible light, infrared light has very low transmission through standard glass objectives, so a parabolic mirror (known as a Cassegrain objective) is normally used to focus the light. These objectives have a typical N.A. of 0.4. The bonds in molecules are typically excited with infrared light of wavelengths between 2.8 and 16 μm, corresponding to a lateral resolution of 4.2 to 24 μm. This is small enough to be applied to individual isolated cells or to average over groups of cells, but it will not usually be cell-specific when applied to an embryo or a group of tightly bound cells. FTIR microscopy has been achieved on fixed adherent mesenchymal stem cells (MSCs) [15] with a diameter of around 50 μm, but only for high-frequency (short-wavelength) vibrations. These vibrational frequencies are described by spectroscopists as inverse wavelengths in units of cm−1 ("wavenumbers"). The lowest-frequency vibrations in cells occur around 600 cm−1 (λ = 16.7 μm), and the highest frequencies relate to the C−H stretch (2800–3000 cm−1, λ = 3.3–3.6 μm) and the O−H stretch (~3500 cm−1, λ = 2.8 μm).

One of the major issues with infrared radiation is its extremely low penetration depth in water, which limits the depth that can be probed in solution to the range 10–100 μm. To combat absorption of infrared light through a thick cuvette, attenuated total reflection (ATR-) FTIR [16] probes only the first micrometre above the substrate. An array of spectra can be acquired rapidly enough to perform imaging within a few minutes on live cells [17]. Results from FTIR on stem cells will be discussed later.
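As a quick check on the figures quoted above, the short script below evaluates the 0.6λ/N.A. estimate at the Cassegrain N.A. of 0.4 and converts wavenumbers to wavelengths. The function names are ours; the formulas are the standard approximations used in the text.

```python
def lateral_resolution_um(wavelength_um, na):
    """Diffraction-limited lateral resolution estimate, 0.6 * lambda / N.A."""
    return 0.6 * wavelength_um / na

def wavenumber_to_um(nu_cm):
    """Convert a vibrational wavenumber (cm^-1) to a wavelength (micrometres)."""
    return 1e4 / nu_cm

na_cassegrain = 0.4
for lam in (2.8, 16.0):                # IR excitation range quoted above (um)
    print(f"lambda = {lam:4.1f} um -> resolution ~ "
          f"{lateral_resolution_um(lam, na_cassegrain):.1f} um")
# -> 4.2 um and 24.0 um, matching the 4.2-24 um range quoted above

print(f"600 cm^-1  -> {wavenumber_to_um(600):.1f} um")    # ~16.7 um
print(f"3500 cm^-1 -> {wavenumber_to_um(3500):.1f} um")   # ~2.9 um
```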
### 1.3. Raman Spectroscopy
The second optical technique suitable for stem cell characterization is Raman spectroscopy [18]. In the most widespread form—Stokes scattering—visible or near-infrared light loses energy (frequency) by exciting molecules into their excited state, as depicted in Figure 2. This means that some of the laser light is red-shifted after interacting with the sample. A typical setup is illustrated in Figure 1(b): after filtering out the laser, the remaining red-shifted light is passed through a spectrometer onto a cooled CCD camera. A full Raman spectrum is normally acquired in 1 to 10 seconds, so acquisition is typically slower than FTIR. As silicon CCDs have a response which dies away rapidly at wavelengths around 1000 nm, the longest laser wavelength which can be used is 785 nm (AlGaAs diode), which is the preferred wavelength for Raman spectroscopy in biology. The strong C−H stretch frequencies are the highest measurable with such a laser wavelength. The other popular laser wavelength for biological samples is 633 nm (HeNe laser), which has lower power than 785 nm lasers; heating in cells and tissue is lowest using near-IR illumination (700–1100 nm). Heating has been measured directly in cells: using 100 mW of 1064 nm radiation in an optical trap, a temperature rise of <1°C was observed [19]. Prolonged exposure to 300 mW illumination at constant power ("continuous wave", CW) caused photodamage—a light-induced reduction in cell viability—attributed to two-photon absorption. Visible light (λ < 700 nm) is believed to cause photodamage at far lower thresholds than this, due to increased absorption by proteins, but shorter wavelengths produce more Raman signal [20].

Figure 2: Schematic energy level diagrams for Raman, CARS, and FTIR processes. In Raman scattering, one illuminating laser photon is absorbed (at pump frequency νP) and another is radiated (at Stokes frequency νS), the difference in frequencies being equal to a vibrational frequency, νVIB. In CARS, three photons are absorbed—two at frequency νP and one at νS—and one is emitted at the anti-Stokes frequency νAS. FTIR relies solely on the absorption of infrared radiation at νVIB.

The reduction in wavelengths for Raman spectroscopy, compared to FTIR, equates to a greatly improved lateral resolution of 350 nm and axial (depth) resolution of 1150 nm. This is sufficient to allow spectroscopy on individual cells, and even at the subcellular level. Furthermore, Raman microspectroscopy—otherwise known as Raman mapping or Raman microscopy—can be performed [21]. Maps can be produced containing the total signal under a given peak, or more subtle differences between spectra can be exploited with principal component analysis (PCA) [22] or cluster analysis. PCA reduces the large number of peaks in the Raman spectrum to a smaller number of independent variables, and cluster analysis produces colour-coded maps with regions of similar chemistry deduced from the set of spectra. Due to the long acquisition times required, only Raman microscopy of fixed cells had been performed until recently, as live cells move far more quickly than the set of spectra could be recorded. Recently, however, Raman microscopy has been performed on live cells with 100 mW power at 647 nm [23] and an acquisition time of 0.5 second, albeit with only 32 × 32 pixels. An alternative approach is to illuminate a line rather than a spot; a series of spectra can then be recorded across a single CCD, each relating to a pixel along the line. This technique has been applied to live cells [20] with 5 seconds per line (3 minutes per image) at 532 nm and 3.5 mW/μm2 intensity, again with a small number of imaging pixels. It remains to be seen whether photodamage was occurring in both of these live cell imaging publications.

PCA can be used to extract the most significant variations between groups of spectra acquired on large numbers of cells. Thus, determining an unknown cell type from two possibilities—notably, stem cell and differentiated cell—can be accomplished using large numbers of spectra from known cell types. The most important differences are highlighted, rather than any uncorrelated and unimportant variations, improving the sensitivity of the technique. No knowledge of the chemistry is required with this unsupervised technique. Two improved variants of PCA are discussed in relation to FTIR, and both show marked improvements in their ability to distinguish cell types [24]. Before analysis can be performed, spectra require processing to remove unwanted autofluorescence from tissue (normally by baseline subtraction), as well as the removal of cosmic rays and of substrate and media contributions to the spectrum. All this processing and analysis requires significant computing power when applied to large datasets.

Raman spectra consist of a large number of peaks at well-defined frequencies, as demonstrated in Figures 3 and 4. The relative intensities of the peaks change between cells, but the frequencies themselves do not shift. Peaks are usually around 5–10 cm−1 in width, except for the single peak at 1002 cm−1. This frequency relates to the in-plane vibration of the aromatic ring of the phenylalanine molecule, which is highly symmetric; the peak is narrower than the others because of the lack of out-of-plane vibration and the lack of variation in the other atoms attached to the benzene ring. Frequencies in FTIR can be slightly shifted, by a few cm−1, and tend to be broader. The two techniques excite these vibrations to different extents, so the relative peak intensities differ between Raman and FTIR spectra.

Figure 3: IR spectral analysis of a small intestinal crypt using synchrotron FTIR microspectroscopy. (a) Ten IR spectra of the entire biochemical-cell fingerprint region (900 to 1,800 cm−1) acquired from the assigned transit-amplifying location (locations 1–3; black lines, top), the putative stem cell location (locations 4–6; red lines, middle), and the differentiated location (locations 7–10; blue lines, bottom). (b) PC analysis of a small intestinal crypt's IR spectra using the entire biochemical-cell fingerprint region. Reprinted with permission from [39] (Copyright AlphaMed Press 2008).

Figure 4: (a) First principal component, describing the major differences between two groups: undifferentiated murine embryonic stem cells and cells differentiated via the formation of embryoid bodies. (b) Raman spectrum of reference RNA, revealing a good deal of similarity with (a)—hence, the major change to the spectrum during differentiation is related to a reduction in RNA levels. Note the strong peaks around 785 cm−1 (cytosine and uracil ring stretching), 811 cm−1 (phosphodiester bond stretching), and 1096 cm−1 (phosphodioxy group stretching). All spectra were acquired in 2 minutes. (Reprinted with permission from [40]; Copyright 2004 American Chemical Society.)
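The workflow described above (baseline subtraction followed by PCA, whose first component separates the two cell populations) can be sketched with scikit-learn on synthetic spectra. The peak positions, group sizes, and crude linear baseline model are illustrative assumptions only; real pipelines also remove cosmic rays and substrate/media contributions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
wn = np.linspace(600, 1800, 600)                 # wavenumber axis (cm^-1)

def gauss(center, width=8.0):
    return np.exp(-0.5 * ((wn - center) / width) ** 2)

def synth_spectrum(rna_level):
    """Synthetic Raman spectrum: protein peaks plus an RNA band (813 cm^-1)
    whose intensity differs between the two groups, on a sloping background."""
    background = 0.002 * (wn - 600)              # stand-in for autofluorescence
    peaks = gauss(1002, 4) + 0.7 * gauss(1449) + rna_level * gauss(813)
    return background + peaks + rng.normal(0, 0.02, wn.size)

# Two groups of 30 spectra: high RNA (stem-like) vs low RNA (differentiated-like)
X = np.vstack([[synth_spectrum(0.8) for _ in range(30)],
               [synth_spectrum(0.2) for _ in range(30)]])

def subtract_baseline(s):
    """Crude baseline subtraction: remove a linear fit from each spectrum."""
    return s - np.polyval(np.polyfit(wn, s, 1), wn)

scores = PCA(n_components=2).fit_transform(
    np.apply_along_axis(subtract_baseline, 1, X))

# PC1 captures the dominant RNA-band variation and separates the two groups,
# mirroring the role of the first principal component in Figure 4(a).
print("mean PC1, stem-like:          ", round(float(scores[:30, 0].mean()), 2))
print("mean PC1, differentiated-like:", round(float(scores[30:, 0].mean()), 2))
```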
### 1.4. CARS Microscopy and Spectroscopy
Raman scattering is a weak process: typically only 1 in ~10^10 incident photons gives rise to a Raman-shifted photon. This is largely because the bond vibration is excited far above its resonant frequency. To increase this efficiency, coherent anti-Stokes Raman scattering (CARS) [25–28] excites the vibration with two laser frequencies whose difference (or beat) frequency is matched to the vibrational frequency of interest. This gives between 4 and 6 orders of magnitude more signal than standard Raman, which means that CARS images are acquired in seconds, whereas a similar-quality Raman map would require days to complete. The process is best excited with pulsed lasers of around 6 ps duration, which should mean that no photodamage occurs with extensive use of powers of at least 12 mW [29]. A CARS microscope designed for biological imaging, with a lateral resolution of 350 nm and an axial resolution of 1100 nm, is described in detail elsewhere [28]. Live cell imaging is slightly slower than that of fixed cells, but high-quality images are acquired within 1 minute.

Picosecond CARS excitation pulses have an estimated spectral width of around 3 cm−1, so they are ideal for biological molecules, but this means that only one vibration (spectral peak) may be excited during an image. Images can be acquired sequentially at different wavenumbers by retuning one laser, but this is not ideal given the motion occurring during live cell imaging. Some CARS systems are able to acquire images at two different vibrational frequencies simultaneously [28]. Multiplex CARS [30–34] uses normal (narrowband) pulses for one laser, νP, and broadband pulses for νS. The broadband supercontinuum excitation pulse is several hundred nanometres wide and is generated in a photonic crystal fibre by a femtosecond laser. A full spectrum is acquired in around 100 milliseconds on live yeast [30], but an estimate of photodamage [29] suggests that 1 second per pixel would be more appropriate for eukaryotic cells. Further improvements to excitation sources could see this fall to the millisecond range, enabling high-quality, noninvasive full spectral mapping of live cells in minutes.

A different approach to increasing the speed of Raman microscopy, termed Stimulated Raman Scattering, has recently been published [35]. In a similar way to how stimulated emission depopulates the excited state in lasers, the excited state in a simple Raman excitation (pumped with νP) can be rapidly depopulated by a second laser (at νS) modulated in the MHz range. In this way, the signal at νS is increased slightly (Stimulated Raman Gain) and the pump power at νP is decreased slightly (Stimulated Raman Loss, SRL). Monitoring either signal, filtered by a lock-in amplifier, produces images which are background-free and directly proportional to concentration. The standard CARS signal, by contrast, has a quadratic dependence on concentration and can have a large unwanted background. Heterodyne CARS [36] is another technique able to circumvent both of these problems encountered in standard CARS imaging. SRL is a linear optical technique and is suitable for extension into optical coherence tomography deep into tissue, using low numerical aperture lenses [37].
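The beat-frequency matching at the heart of CARS reduces to simple wavenumber arithmetic. The helpers below (hypothetical names, standard formulas) compute the Stokes wavelength needed to target a given vibration and the anti-Stokes wavelength at which the signal is detected; the last line also shows why, for spontaneous Raman with a 785 nm laser, the C−H stretch sits at the edge of a silicon CCD's response.

```python
# CARS targets a vibration by matching the pump-Stokes beat frequency:
#   nu_vib = 1/lambda_P - 1/lambda_S, with the signal emitted at the
#   anti-Stokes frequency nu_AS = 1/lambda_P + nu_vib.

def stokes_nm(pump_nm, vib_cm):
    """Stokes wavelength (nm) whose beat with the pump equals vib_cm (cm^-1)."""
    return 1.0 / (1.0 / pump_nm - vib_cm * 1e-7)    # 1 cm^-1 = 1e-7 nm^-1

def anti_stokes_nm(pump_nm, vib_cm):
    """Anti-Stokes detection wavelength (nm) for the same vibration."""
    return 1.0 / (1.0 / pump_nm + vib_cm * 1e-7)

# Example: targeting a CH2 stretch at 2845 cm^-1 with an 817 nm pump
print(f"Stokes:      {stokes_nm(817, 2845):.0f} nm")       # ~1064 nm
print(f"anti-Stokes: {anti_stokes_nm(817, 2845):.0f} nm")  # ~663 nm

# Spontaneous Raman with a 785 nm laser: the ~3000 cm^-1 C-H stretch is
# Stokes-shifted to ~1027 nm, at the edge of a silicon CCD's response.
print(f"785 nm, 3000 cm^-1 shift -> {stokes_nm(785, 3000):.0f} nm")
```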
## 2. Results and Discussion
### 2.1. Fourier-Transform Infrared (FTIR) Spectroscopy
FTIR spectroscopy was used to study murine embryonic stem cells by Ami et al. [38]. After 4–7 days of differentiation, changes to the absorption spectrum of fixed cells were noticed: features in the amide I band (1600–1700 cm−1) were enhanced, and those in the nucleic acid region (850–1050 cm−1) diminished. This means that the overall levels of DNA and RNA decrease, and the alpha-helix content of proteins increases, over time. Furthermore, new DNA/RNA hybrid bands at 899 cm−1 and 954 cm−1 start to appear around days 4–7, suggesting that mRNA translation is occurring at this time.

German et al. [11] employed high-intensity synchrotron radiation to probe 10 μm thick cryosections of bovine cornea. They used PCA to clearly distinguish the three cell types of interest: stem cells, transit-amplifying cells, and terminally differentiated cells. No biomarkers of corneal stem cells exist, so spectroscopic techniques offer the only viable method of cell characterization here.

From the same group, Walsh et al. [39] again used synchrotron FTIR, this time on paraffin-embedded human intestinal crypts, which were dewaxed. The position of cells along the crypt marks the change from stem cell location to transit-amplifying region to differentiated location. PCA was used to compare spectral features and was able to separate cell types from three positions along the crypt, as shown in Figure 3. This method of characterization was compared with tissue stained with two different immunophenotypical markers: rabbit polyclonal anti-CD133 and β-catenin antibodies. The authors state that the dominant FTIR absorption peak at 1080 cm−1, relating to the symmetric (PO2)− stretch, is a more robust marker than the two biomarkers. As gastrointestinal stem cells lack specific biomarkers, they went on to compare FTIR data against a number of chemical differences, which are discussed at length in a further publication [10].

Salasznyk et al. [41] used FTIR to study osteoblasts derived from human mesenchymal stem cells after 28 days of cell culture. Samples were dried, ground into a powder, and pressed into a pellet. The spectrally derived mineral-to-matrix ratio was calculated as the ratio of the integrated areas of the phosphate absorbance (900–1200 cm−1) and the protein amide I band (1585–1720 cm−1). They observed a significant decrease in the mineral-to-matrix ratio of the extracellular matrix produced by focal adhesion kinase- (FAK-) knockdown cells when compared to untreated (control) cells. These FTIR results compared favourably with biochemical assays.

Krafft et al. [15] also used FTIR to study human mesenchymal stem cells differentiating into osteoblasts. Their samples were fixed in methanol and then dried, and the authors were able to distinguish cells stimulated in osteogenic medium for 7 days from nonstimulated cells. FTIR microscopy on isolated adherent cells (of size ~50 μm) showed that some of the nonstimulated cells had high levels of glycogen accumulation, and some stimulated cells had a high expression of calcium phosphate. Stimulated cells had reduced levels of amide I (at 1631 cm−1), meaning that lower concentrations of beta-sheet proteins were present. This compares well with Ami et al. [38], who measured a higher alpha-helix proportion during differentiation. Nucleic acids were hard to detect in this study.

Bentley et al. [42] measured FTIR spectra of human corneal stem cells and their derivatives, transit-amplifying cells.
They made 10 μm thick cryosections of cornea, which were left to dry, and observed differences in synchrotron FTIR spectra which were attributed to nucleic acids. They were able to distinguish between the two cell types with spectra acquired from two different regions in the tissue, using PCA, albeit with an overlap of 16% between the two populations.

In summary, FTIR is able to distinguish between stem cells and their derivatives. Using synchrotron sources, the speed of data acquisition and the quality of comparison have improved greatly. Even microscopy has been performed, requiring a spectral acquisition at each imaging pixel. Distinguishing between individual cells, rather than populations, requires subconfluent adherent cells, because the spatial resolution (governed by the wavelength of infrared radiation) is of the order of the cell size. The technique has until now been limited to dried samples due to the high absorption coefficient of water; so it has not been used to noninvasively monitor stem cell differentiation in vitro, as the required drying is clearly destructive. However, ATR-FTIR can be used on live cell cultures; hence it could potentially be employed as a real-time noninvasive technique for monitoring stem cell differentiation on adherent cells (though no results have yet been published). Such a technique would be highly preferable to the use of biomarkers in clinical as well as research applications. One issue with ATR-FTIR is that it only probes the first 1–2 μm above a substrate; so the nucleus would only give a small contribution to the overall signal. Given that nucleic acids play a large part in the reported changes in spectral signatures during differentiation, and the nucleus may remain above this penetration depth, it remains to be seen whether ATR-FTIR can indeed be used as a noninvasive biomarker-free analytical technique for live cell studies.
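As a concrete illustration of the band-area calculation described above, the following minimal Python sketch (our own construction, not the procedure of [41]) integrates a spectrum over the quoted phosphate and amide I windows; the synthetic Gaussian bands are arbitrary stand-ins for measured absorbance data.

```python
import numpy as np

def band_area(wn: np.ndarray, absorbance: np.ndarray, lo: float, hi: float) -> float:
    """Integrate absorbance over the band [lo, hi] (cm^-1) by the trapezoid rule."""
    mask = (wn >= lo) & (wn <= hi)
    return np.trapz(absorbance[mask], wn[mask])

def mineral_to_matrix(wn: np.ndarray, absorbance: np.ndarray) -> float:
    """Ratio of phosphate (900-1200 cm^-1) to amide I (1585-1720 cm^-1) band areas."""
    return (band_area(wn, absorbance, 900.0, 1200.0)
            / band_area(wn, absorbance, 1585.0, 1720.0))

# Synthetic demonstration spectrum: two Gaussian bands on a zero baseline.
wn = np.linspace(800.0, 1800.0, 1000)
spectrum = (1.5 * np.exp(-((wn - 1030.0) / 60.0) ** 2)    # phosphate band
            + 1.0 * np.exp(-((wn - 1650.0) / 30.0) ** 2))  # amide I band
print(f"mineral-to-matrix ratio: {mineral_to_matrix(wn, spectrum):.2f}")
```

On real data the same ratio would be computed after baseline correction, with the spectrum arrays replaced by measured wavenumber and absorbance vectors.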
### 2.2. Raman Spectroscopy
Notingher et al. [43] used Raman spectroscopy to investigate live murine embryonic stem cells. 100 mW of 785 nm laser light was focussed onto a spot of size 10 μm × 5 μm × 25 μm. The group grew cells on a gelatine-coated quartz substrate, as plastic contains a number of vibrational bonds which are also present in cells. Glass gives a strong fluorescence background; so quartz or magnesium fluoride is preferred as a substrate. However, stem cells do not adhere well to glass and crystals; so a thin coating on the substrate is required, such as the gelatine used in this study. Great care must also be taken to ensure that the stem cells do not differentiate spontaneously on a given substrate or coating. Over 16 days of differentiation, they observed a decrease in the RNA peak (at 813 cm−1, O−P−O stretch) by 75% [40] and a drop of 50% in the DNA peak (at 788 cm−1, cytosine ring vibration). Peaks were normalized to the total Raman signal. They also extracted the first principal component spectrum, reproduced in Figure 4, which reveals the spectrum responsible for most of the differences between spectra of stem cells and differentiated cells (a minimal sketch of this kind of analysis is given at the end of this subsection). Note the high coincidence of many peaks of this principal component spectrum with the spectrum of RNA. This confirms that a reduction in RNA levels dominates the chemical changes during differentiation.

Chan et al. [9] acquired Raman spectra on live human embryonic stem cells (hESCs) as they differentiated into cardiomyocytes and were able to distinguish between hESCs and hESC-derived cardiomyocytes with an accuracy of 66%. They found that the RNA peak (811 cm−1) and DNA peaks (e.g., 785 cm−1, 1090 cm−1) were all reduced in intensity during differentiation.

We present previously unpublished data: Raman spectra of fixed mesenchymal stem cells grown on gelatine-coated quartz, recorded with a diffraction-limited laser spot instead of averaging over a large area. This meant that spectra could be acquired separately from within the nucleus and within the cytoplasm; these spectra are shown in Figure 5. The spectrum from the substrate has been subtracted, but no baseline subtraction was performed, to highlight the requirement for automated subtraction in quantitative spectroscopic analysis. Small variations in laser power and focus position are thought to be responsible for the offset spectra at low wavenumbers. The spectra in Figure 5, as expected, clearly demonstrate that the nucleus contains far more DNA and RNA than does the cytoplasm, and that the cytoplasm contains far more proteins and lipids than the nucleus. These cells were fixed rather than live, but a study of fixation methods on Raman spectra [44] indicates that the effect of aldehyde cross-linking fixation on spectra is minimal. This study, together with the data from Notingher et al. [40, 43], implies that it is the nucleus size (or nucleic acid density therein) which shrinks during differentiation. Hence, monitoring the nucleus size in a quantitative manner, using image processing techniques, would seem to be a good potential approach to monitoring the state of differentiation.

Figure 5
Individual Raman spectra within a single mesenchymal stem cell (red: nucleus, black: cytoplasm). The following peaks are specific to DNA and RNA: A (785 cm−1, uracil/cytosine/thymine ring breathing, O−P−O stretch), B (813 cm−1, O−P−O stretch), C (828 cm−1, O−P−O antisymmetric stretch), F (1093 cm−1, O−P−O stretch and C−C stretch), and H (1580 cm−1, adenine/guanine C−N stretch). Peaks specific to proteins are D (854 cm−1, tyrosine ring breathing), E (1004 cm−1, phenylalanine ring breathing), and I (1660 cm−1, amide I alpha helix). Peaks dominated by lipids are G (1448 cm−1, CH2 deformation) and J (2800–3000 cm−1, C−H stretch).

Mesenchymal stem cells were monitored by Raman spectroscopy during differentiation into osteoblasts [45]. Mineralization was monitored at two frequencies: 960 cm−1 (P–O stretch) and 1070 cm−1 (PO4³⁻). The 960 cm−1 peak relates to the mineral hydroxyapatite—Ca5(PO4)3(OH)—and was by far the strongest signal; this peak height rose linearly from zero to the dominant peak in the spectrum over the 21 days of differentiation. The carbonate (CO3²⁻) peak at 1030 cm−1 remained constant throughout.

Pelled et al. [46] used Raman spectroscopy to compare tissue-engineered bone derived from mesenchymal stem cells with femoral bone. They found a very good similarity in phosphate (960 cm−1) and carbonate (595 cm−1) levels, with only minor spectral differences such as a larger amount of protein. Liu [47] also observed a major peak at 960 cm−1 due to hydroxyapatite, in mineral extracted from odontoblast nodules—which were formed by the differentiation of dental pulp stem cells.

Azrad et al. [48] used Raman spectroscopy to characterize the production of mineral content from mesenchymal stem cells under the influence of two osteogenic agents: quality elk velvet antler (QEVA) extract and dexamethasone. They measured no mineralization from the control group, some mineralization from the dexamethasone-fed cells, but most mineralization from cells supplemented with the elk velvet antler. Peaks indicated phosphate derivatives, which for QEVA were mostly related to hydroxyapatite (at 960 cm−1) and its precursors (amorphous calcium phosphate, Ca9(PO4)6·H2O, at 952 cm−1 and octacalcium phosphate, Ca8(PO4)6·5H2O, at 957 cm−1). For dexamethasone-fed cells, the dominant peak was of octacalcium phosphate, and lower amounts of hydroxyapatite and amorphous calcium phosphate were measured.

Gentleman et al. [49] compared mineralized nodules produced in vitro from 3 sources: embryonic stem cells, neonatal calvarial osteoblasts, and adult bone-marrow-derived mesenchymal stem cells. After 28 days in osteogenic medium, sets of Raman spectra were acquired of the mineralized nodules, and PCA was performed on the spectra to extract their chemical components. The osteoblasts and mesenchymal stem cells both produced nodules which were chemically similar to native bone: a combination, in descending order of concentration, of (a) carbonate-substituted hydroxyapatite, (b) crystalline nonsubstituted hydroxyapatite, and (c) amorphous phosphate species. However, embryonic stem cells produced nodules which were dominated only by the first mineral component, which was similar to synthetic carbonated hydroxyapatite.
Nanoindentation tests showed that these nodules derived from embryonic stem cells were more than an order of magnitude less stiff than those derived from osteoblasts and mesenchymal stem cells.

These Raman studies all show a similar ability to distinguish stem cells from their derivatives, that is, a similar sensitivity to that of the FTIR technique. In addition, mineralization can be monitored with great subtlety. The major advantage of FTIR over Raman is its speed; hence mapping signals (derived from a full spectral acquisition at each imaging pixel) into images is easier with FTIR. The major advantage of Raman over FTIR is that it has been used on live cells, so is truly noninvasive.
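The PCA analyses cited throughout this subsection reduce each spectrum to a handful of scores that separate cell types. The sketch below illustrates the idea on synthetic Raman-like spectra in which only an RNA-like 811 cm−1 band differs between two groups; the band shapes, noise level, and group sizes are illustrative assumptions, not values from the cited studies (scikit-learn is assumed to be available).

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
wn = np.linspace(600.0, 1800.0, 600)  # wavenumber axis (cm^-1)

def cell_spectrum(rna_level: float) -> np.ndarray:
    """Toy spectrum: an RNA-like 811 cm^-1 band plus a fixed phenylalanine band."""
    rna = rna_level * np.exp(-((wn - 811.0) / 8.0) ** 2)
    protein = np.exp(-((wn - 1004.0) / 5.0) ** 2)
    return rna + protein + 0.02 * rng.standard_normal(wn.size)

# 20 "stem cell" spectra (high RNA) and 20 "differentiated" spectra (low RNA).
spectra = np.array([cell_spectrum(1.0) for _ in range(20)]
                   + [cell_spectrum(0.25) for _ in range(20)])

pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)

# PC1 should load on the RNA band and its scores should separate the groups,
# mirroring the first-principal-component analysis described above.
pc1 = pca.components_[0]
print("PC1 loads most strongly near", round(wn[np.argmax(np.abs(pc1))]), "cm^-1")
print("mean PC1 score, stem vs differentiated:",
      round(scores[:20, 0].mean(), 3), round(scores[20:, 0].mean(), 3))
```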
### 2.3. CARS Microscopy and Spectroscopy
CARS microscopy has been performed on live murine embryonic stem cells by Konorov et al. [50], but the laser setup only permitted pixel dwell times of 300 milliseconds, compared to microseconds in standard CARS microscopy [25–28]. Hence the image quality was poor: the group was unable to distinguish any individual cells or features when imaging at the DNA and RNA frequencies. However, CARS spectroscopy showed a large reduction in the RNA peak intensity (at 811 cm−1) in differentiated cells.

Figure 6 shows our CARS microscopy image, acquired on MCF-7 human breast cancer cells, which have a similarly large nucleus-to-cytoplasm ratio to stem cells. Two sequentially acquired images are overlaid: in green, DNA/RNA is mapped at the phosphate backbone O−P−O stretch frequency (1095 cm−1), and in red, lipids and the cytoskeleton are mapped at the CH2 deformation frequency (1448 cm−1). The O−P−O stretch frequency is also present in Figure 4(a)—the principal component spectrum which describes most of the changes during differentiation—and in Figure 4(b)—the RNA spectrum. This method can be easily extended to monitoring the nucleus size in live stem cells (a minimal segmentation sketch is given at the end of this subsection), with a reduced frame rate of at least 1 image per minute (we find that CARS imaging in live cells is slower than in dried, fixed cells).

Figure 6
CARS microscopy image of fixed MCF-7 breast cancer cells, of size 100 × 100 μm. The green channel is specific to the O−P−O stretch frequency (1095 cm−1) and originates from the DNA backbone, so highlights the nucleus (whose shape may have been distorted in some cells by the fixation process). The red channel is specific to the CH2 deformation (1448 cm−1) and is dominated by lipids and the cytoskeleton. The image plane is restricted to a 1 μm slice, acquired several micrometres above the glass substrate, and the lateral resolution is around 350 nm. Images of both channels were acquired in 1 second and averaged 5 times; the two channels were acquired sequentially after retuning one laser source. Pulse widths of 6 ps correspond to a spectral resolution of ~3 cm−1.

CARS microscopy is limited to monitoring one vibrational mode at a time, rather than comparing the full spectral signature. However, the RNA and DNA peaks have been shown to drop considerably, and RNA is the dominant change to spectra. So it is possible that CARS imaging will be able to measure the stage of differentiation purely by monitoring the size of each nucleus. White light imaging is less exact at measuring the nucleus size than either fluorescence or CARS microscopy and could not be extended beyond monolayers; so CARS is preferred as a noninvasive technique to monitor nuclear size. We expect that CARS should also be able to map mineralization from mesenchymal stem cells, by mapping the peak at 960 cm−1. The clear advantage of CARS is its high speed compared to the other spectroscopic techniques.

One of the most promising techniques over the coming years should be Multiplex CARS, which acquires a full vibrational spectrum at higher speed than Raman and is also applicable at the single cell level. This is bound to be more sensitive than monitoring just one peak in standard CARS. If the spectral acquisition speed is improved to the millisecond scale, the technology could be applied to both flow cytometry and microscopy.
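A minimal sketch of the image processing suggested above, assuming a DNA/RNA-resonant channel such as the green channel of Figure 6, is given below (scikit-image is assumed to be available). Otsu thresholding and connected-component labelling yield one area per nucleus; the 0.35 μm pixel size matches the quoted lateral resolution, while the synthetic test image and the minimum-size filter are illustrative assumptions.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def nucleus_areas_um2(dna_channel: np.ndarray, pixel_um: float = 0.35) -> list:
    """Segment nuclei in a DNA/RNA-resonant CARS channel; return areas in um^2."""
    mask = dna_channel > threshold_otsu(dna_channel)   # bright nuclei vs background
    regions = regionprops(label(mask))                 # one region per nucleus
    return [r.area * pixel_um ** 2 for r in regions if r.area > 50]  # drop specks

# Synthetic 256 x 256 "CARS image": two bright nucleus-like discs plus noise.
rng = np.random.default_rng(1)
image = 0.1 * rng.random((256, 256))
yy, xx = np.mgrid[:256, :256]
for cy, cx, radius in [(80, 90, 20), (170, 160, 25)]:
    image[(yy - cy) ** 2 + (xx - cx) ** 2 < radius ** 2] += 1.0

print("nucleus areas (um^2):", [round(a) for a in nucleus_areas_um2(image)])
```

Tracking these per-cell areas frame by frame over days of culture would give the quantitative nucleus-size readout proposed above.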
## 3. Conclusions
The major noninvasive optical spectroscopy techniques suitable for analysis of stem cell differentiation have been outlined: namely, FTIR, Raman, and CARS. FTIR spectroscopy has only been performed on fixed or dried stem cells, but ATR-FTIR can be used to investigate live cells in future. Raman spectroscopy has demonstrated the ability to distinguish between live stem cells and differentiated cells: both murine and human embryonic stem cells display a large reduction in peak intensities of both RNA and DNA. When mesenchymal stem cells differentiate into osteoblasts, they display a clear peak (or peaks) relating solely to mineral composition.

It is well known that the nucleus to cytoplasm (volume) ratio in embryonic stem cells is elevated [51–54]. Hence, the reduction in RNA and DNA levels during differentiation is not entirely surprising. The assumption is therefore that the nucleus volume shrinks by around 50% during differentiation [53], rather than the concentration of nucleic acids reducing. CARS microscopy is well suited to monitoring the nucleus size or the mineralization content (around 960 cm−1) of each individual cell and can be applied to large numbers of cells in cultures and to engineered tissue scaffolds. Monitoring the CARS signal of the RNA or DNA peak could also be applied to high-speed cell flow cytometry. White light imaging (normally DIC or phase contrast) could offer a low-cost solution for monitoring the nucleus/cytoplasm ratio of cell monolayers, in conjunction with automated image recognition cytometry. CARS only monitors one (or possibly two) spectral peaks; so it will be less sensitive than Raman and FTIR, which both acquire a full vibrational spectrum. Multiplex CARS can acquire a full spectrum and promises to replace Raman spectroscopy in time, due to its improved speed.

Each spectroscopic technique has its own benefits and drawbacks; so each is more suited to characterization of stem cells in different ways and on different “platforms.” Raman and FTIR spectroscopy are both most suited to monitoring cultures averaged over a large number of cells. FTIR does not have the required resolution to address single cells in confluent monolayer cultures, or a sufficient penetration depth for flow cytometry. Raman spectroscopy is too slow to characterize enough individual cells to be a worthwhile clinical technique but could be used extensively in biomedical research. Both FTIR and Raman techniques are more sensitive than normal CARS, which relies on excitation of just one spectral peak. However, CARS may be sufficiently sensitive to apply to characterization of differentiation in microscopy and flow cytometry. Raman and CARS have been integrated into one instrument, to combine the benefits of both techniques [55]. In future, Multiplex CARS promises to be the technique of choice for all platforms, due to its combined attributes of speed, full spectral analysis, and applicability to individual live cells.
---
*Source: 101864-2010-02-16.xml* | 101864-2010-02-16_101864-2010-02-16.md | 72,141 | Optical Spectroscopy for Noninvasive Monitoring of Stem Cell Differentiation | Andrew Downes; Rabah Mouras; Alistair Elfick | Journal of Biomedicine and Biotechnology
(2010) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2010/101864 | 101864-2010-02-16.xml | ---
## Abstract
There is a requirement for a noninvasive technique to monitor stem cell differentiation. Several candidates based on optical spectroscopy are discussed in this review: Fourier transform infrared (FTIR) spectroscopy, Raman spectroscopy, and coherent anti-Stokes Raman scattering (CARS) microscopy. These techniques are briefly described, and the ability of each to distinguish undifferentiated from differentiated cells is discussed. FTIR spectroscopy has demonstrated its ability to distinguish between stem cells and their derivatives. Raman spectroscopy shows a clear reduction in DNA and RNA concentrations during embryonic stem cell differentiation (agreeing with the well-known reduction in the nucleus to cytoplasm ratio) and also shows clear increases in mineral content during differentiation of mesenchymal stem cells. CARS microscopy can map these DNA, RNA, and mineral concentrations at high speed, and Multiplex CARS spectroscopy/microscopy is highlighted as the technique with most promise for future applications.
---
## Body
## 1. Introduction
### 1.1. Challenges in Stem Cell Science
In current stem cell biology and regenerative medicine, two of the greatest challenges [1, 2] are to control the differentiation of stem cells and to ensure the purity of isolated cells. These can both be addressed by careful monitoring and characterization of cells. The process of stem cell differentiation is at present monitored by biological assays, namely, immunocytochemistry [3, 4]. However, this process is time consuming as well as requiring biomarkers or labels. There is a clear need for a truly noninvasive technique which can monitor the degree of differentiation rapidly. Such a technique will most likely involve a form of optical imaging or spectroscopy but must not involve the addition of any kind of biomarker. Biomarkers are used to sort embryonic stem cells, in conjunction with fluorescent [5, 6] or magnetic [7] labels. These techniques are lengthy and time-consuming, but careful monitoring of stem cell differentiation is essential: in clinical applications, a population of fully differentiated cells is often implanted, but teratomas can result if any stem cells remain undifferentiated [8].

There are a number of issues with the use of biomarkers for the characterization and sorting of stem cells and their derivatives. Firstly, only a limited number of biomarkers exists, each one being cell-specific. Many cell types lack biomarkers, for example, cardiomyocytes [9], gastrointestinal stem cells [10], and corneal stem cells [11]. Secondly, the use of biomarkers raises issues with both biological researchers and clinicians, who would strongly prefer a label-free technique. Finally, these biomarkers cannot easily be translated; for example, embryonic stem cell biomarkers are not always applicable to adult stem cells.

There are further issues with the use of fluorescent and magnetic markers. Fluorescent biomarkers [5, 6] have been employed in cell sorting and characterization, but fluorescent techniques have a number of drawbacks. Firstly, photobleaching means that signal levels drop over time; so long-term studies of differentiation are prohibited. Secondly, this process of photobleaching produces free radical singlet oxygen species which will damage live cells. Finally, the use of biomarkers causes modification to cells’ surface chemistry, and stem cells are highly sensitive to small changes in their surface chemistry. Magnetic beads cannot easily be visualised in microscopy; they must all be removed from the cells; and a large mass could cause large mechanical stresses to the cells, which can affect the cells’ behaviour.

There is thus a requirement from the stem cell community for a rapid, easy, sensitive, nondestructive, noninvasive, label-free technique which can be applied both on the single cell level and to monitoring or sorting large populations of cells. This review will concentrate on label-free optical spectroscopy techniques, which are noninvasive and have sufficiently high resolution to be applied at the single cell level.

White light imaging—either phase contrast or differential interference contrast (DIC)—can reveal the approximate level of differentiation in situ, to those who are expert in stem cell culture. However, it is only really suitable for monolayers of cells. As white light imaging is usually only qualitative, it would benefit from being replaced by a more advanced optical technique capable of a quantitative measurement on individual cells.
Such a technique should therefore be capable of high-speed characterization, to enable large numbers of cells to be studied—in monolayer cultures, embryos, and scaffolds.
### 1.2. Infrared Absorption Spectroscopy
The first optical technique suitable for noninvasive characterization of cells is infrared absorption spectroscopy. Infrared light is absorbed by the wide variety of chemical bonds within molecules, which all have well-defined vibrational frequencies. Hence, an absorption spectrum of a cell should give a characteristic snapshot of its chemistry, and an undifferentiated cell’s spectrum could differ from that of a differentiated cell enough to characterize them. Simple infrared spectrometers use a broadband light source containing a wide range of wavelengths, which is typically passed through a cuvette of solution, then through a dispersing spectrometer onto a single detector. This technique is slow, as the spectrum is built up from around 1000 sequential data points. In order to collect a full spectrum without losing the vast majority of the signal, Fourier transform infrared (FTIR) spectroscopy [12] uses both interferometry and a Fourier transform of the signal: from the time domain to the frequency domain. A typical FTIR setup is illustrated in Figure 1(a), which requires a mirror to scan one half of the interferometer arm over a distance of a few millimetres. A full spectrum is typically acquired in around a second on live cells [13]. Synchrotron sources have promised vastly improved spectral acquisition times—up to 1000 times faster [14] than benchtop FTIR—but it is not clear whether this is applicable to live cells, as heating by absorption may prevent any increase in speed.

Figure 1

Schematic experimental arrangements for (a) Fourier transform infrared (FTIR) spectroscopy, (b) Raman spectroscopy, and (c) coherent anti-Stokes Raman scattering (CARS) microscopy and spectroscopy.
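The time-to-frequency step can be illustrated numerically. In the toy Python sketch below (a schematic of the principle, not a model of any real instrument), an interferogram recorded against optical path difference is built as a sum of cosines, one per absorbing band, and a fast Fourier transform recovers a spectrum peaked at those band positions; the band centres are arbitrary assumptions.

```python
import numpy as np

n = 4096
x_cm = np.arange(n) * (0.2 / n)           # optical path difference (cm)
bands_cm1 = [1080.0, 1650.0, 2900.0]      # assumed band centres (cm^-1)

# The interferogram is a sum of cosines, one per vibrational band.
interferogram = sum(np.cos(2.0 * np.pi * nu * x_cm) for nu in bands_cm1)

# Fourier transform from the path-difference domain to the wavenumber domain.
spectrum = np.abs(np.fft.rfft(interferogram))
freqs_cm1 = np.fft.rfftfreq(n, d=x_cm[1] - x_cm[0])

# The three strongest spectral bins sit at the band centres we put in.
recovered = np.sort(freqs_cm1[np.argsort(spectrum)[-3:]])
print("recovered bands (cm^-1):", recovered)  # [1080. 1650. 2900.]
```

The 0.2 cm (2 mm) mirror travel sets the spectral resolution, here 1/0.2 cm = 5 cm−1, which is why the mirror in Figure 1(a) must scan over a few millimetres.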
The lateral resolution of optical techniques is normally approximated by 0.6λ/N.A., where λ is the wavelength of illuminating light and N.A. is the numerical aperture of illumination. Although an N.A. of 1.4 is achievable with objective lenses using visible light, infrared light has very low transmission through standard glass objectives; so a parabolic mirror (known as a Cassegrain objective) is normally used to focus light. These objectives have a typical N.A. of 0.4. The bonds in molecules are typically excited with infrared light of wavelengths between 2.8 and 16 μm, which corresponds to a lateral resolution of 4.2 to 24 μm: small enough to be applied to individual isolated cells or to average over groups of cells, but not usually cell specific when applied to an embryo or group of cells tightly bound together. FTIR microscopy has been achieved on fixed adherent mesenchymal stem cells (MSCs) [15] with a diameter of around 50 μm, but only for high-frequency (short wavelength) vibrations. These vibrational frequencies are described by spectroscopists as inverse wavelengths in units of cm−1 (“wavenumbers”). The lowest frequency vibrations in cells occur around 600 cm−1 (λ = 16.7 μm) and the highest frequencies relate to the C−H stretch (2800–3000 cm−1, λ = 3.3–3.6 μm) and O−H stretch (~3500 cm−1, λ = 2.8 μm).

One of the major issues with infrared radiation is its extremely low penetration depth in water, which limits the depth which can be probed in solution to the range 10–100 μm. To combat absorption of infrared light through a thick cuvette, attenuated total reflection- (ATR-) FTIR [16] probes only the first micrometre above the substrate. An array of spectra can be acquired rapidly enough to perform imaging within a few minutes on live cells [17]. Results from FTIR on stem cells will be discussed later.
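The resolution figures quoted above follow directly from the 0.6λ/N.A. approximation, as the short sketch below verifies; the 785 nm, N.A. 1.4 Raman case is our added comparison, and it lands close to the 350 nm value quoted later for Raman and CARS.

```python
def lateral_resolution_um(wavelength_um: float, na: float) -> float:
    """Diffraction-limited lateral resolution, approximated as 0.6 * lambda / NA."""
    return 0.6 * wavelength_um / na

print(f"FTIR,  2.8 um, NA 0.4:  {lateral_resolution_um(2.8, 0.4):.1f} um")   # 4.2 um
print(f"FTIR, 16.0 um, NA 0.4:  {lateral_resolution_um(16.0, 0.4):.1f} um")  # 24.0 um
print(f"Raman, 785 nm, NA 1.4:  {lateral_resolution_um(0.785, 1.4) * 1e3:.0f} nm")  # ~336 nm
```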
### 1.3. Raman Spectroscopy
The second optical technique suitable for stem cell characterization is Raman spectroscopy [18]. In the most widespread form—Stokes scattering—visible or near-infrared light loses energy (frequency) by exciting molecules into an excited vibrational state, as depicted in Figure 2. This means that some of the laser light is red-shifted after interacting with the sample. A typical setup is illustrated in Figure 1(b): after filtering out the laser, the remaining red-shifted light is passed through a spectrometer onto a cooled CCD camera. A full Raman spectrum is normally acquired in 1 to 10 seconds, so acquisition is typically slower than FTIR. As silicon CCDs have a response which dies away rapidly at a wavelength of around 1000 nm, the longest laser wavelength which can be used is 785 nm (AlGaAs diode), which is the preferred wavelength for Raman spectroscopy in biology. The strong C−H stretch frequencies are the highest measurable with such a laser wavelength. The other popular laser wavelength for biological samples is 633 nm (HeNe laser), which has lower power than 785 nm lasers; heating in cells and tissue is lowest using near-IR illumination (700–1100 nm). Heating has been measured directly in cells: using 100 mW of 1064 nm radiation in an optical trap, a temperature rise of <1°C was observed [19]. Prolonged exposure to 300 mW illumination at constant power (“continuous wave”, CW) caused photodamage—a light-induced reduction in cell viability—attributed to two-photon absorption. Visible light (λ < 700 nm) is believed to cause photodamage at far lower thresholds than this, due to increased absorption by proteins, but shorter wavelengths produce more Raman signal [20].

Figure 2
Schematic energy level diagrams for Raman, CARS, and FTIR processes. In Raman scattering one illuminating laser photon is absorbed (at pump frequency νP), and another is radiated (at Stokes frequency νS)—the difference in frequencies being equal to a vibrational frequency, νVIB. In CARS three photons are absorbed—two at frequency νP and one at νS—and one is emitted at the anti-Stokes frequency νAS. FTIR relies solely on the absorption of infrared radiation at νVIB.

The reduction in wavelengths for Raman spectroscopy, compared to FTIR, equates to a greatly improved lateral resolution of 350 nm and axial (depth) resolution of 1150 nm. This is sufficient to allow spectroscopy on individual cells, and even at the subcellular level. Furthermore, Raman microspectroscopy—otherwise known as Raman mapping or Raman microscopy—can be performed [21]. Maps can be produced containing the total signal under a given peak, or more subtle differences between spectra can be exploited with principal component analysis (PCA) [22] or cluster analysis. This reduces the large number of peaks in the Raman spectrum to a smaller number of independent variables, and cluster analysis produces colour-coded maps with regions of similar chemistry deduced from the set of spectra. Due to the long acquisition time required, only Raman microscopy of fixed cells had been performed until recently, as live cells move far more quickly than the set of spectra could be recorded. However, recently Raman microscopy has been performed on live cells with 100 mW power at 647 nm [23] and an acquisition time of 0.5 second, albeit with only 32 × 32 pixels. An alternative approach is to illuminate a line rather than a spot—then a series of spectra can be recorded across a single CCD, each relating to a pixel along the line. This technique has been applied to live cells [20] with 5 seconds per line (3 minutes per image) with 532 nm at 3.5 mW/μm2 intensity, again with a small number of imaging pixels. It remains to be seen whether photodamage was occurring in both of these live cell imaging publications.

PCA can be used to extract the most significant variations between groups of spectra acquired on large numbers of cells. Thus, determining an unknown cell type from two possibilities—notably, stem cell and differentiated cell—can be accomplished using large numbers of spectra from known cell types. The most important differences are highlighted, rather than any uncorrelated and unimportant variations, to improve the sensitivity of the technique. No knowledge of the chemistry is required with this unsupervised technique. Two improved variants of PCA are discussed in relation to FTIR, and both show marked improvements in their ability to distinguish cell types [24]. Before analysis can be performed, spectra require processing to remove unwanted autofluorescence from tissue—normally by baseline subtraction (a minimal example is sketched at the end of this subsection)—as well as the removal of cosmic rays and of substrate and media contributions to the spectrum. All this processing and analysis of spectra requires significant computing power when applied to large datasets.

Raman spectra consist of a large number of peaks at well-defined frequencies, as demonstrated in Figures 3 and 4. The relative intensities of the peaks change between cells, but the frequencies themselves do not shift. Peaks are usually around 5–10 cm−1 in width, except for the single peak at 1002 cm−1.
This frequency relates to the in-plane vibration of the aromatic ring of the phenylalanine molecule, which is highly symmetric; the peak is narrower than the others because of the lack of vibration out of the plane and the lack of variation in other atoms attached to the benzene ring. Frequencies in FTIR can be slightly shifted by a few cm−1, and the peaks tend to be broader. Both techniques excite these vibrations to different extents; so the relative peak intensities will be different in Raman and FTIR spectra.

Figure 3

IR spectral analysis of a small intestinal crypt using synchrotron FTIR microspectroscopy. (a) Ten IR spectra of the entire biochemical-cell fingerprint region (900 to 1,800 cm−1) acquired from the assigned transit-amplifying location (locations 1–3; black lines, top), the putative stem cell location (locations 4–6; red lines, middle), and the differentiated location (locations 7–10; blue lines, bottom). (b) PC analysis of a small intestinal crypt’s IR spectra using the entire biochemical-cell fingerprint region. Reprinted with permission from [39] (Copyright AlphaMed press 2008).
Figure 4
(a) First principal component, describing the major differences between two groups: undifferentiated murine embryonic stem cells and the differentiated cells via formation of embryoid bodies. (b) Raman spectrum of reference RNA, revealing a good deal of similarity with (a)—hence, the major change to the spectrum during differentiation is related to a reduction in RNA levels. Note the strong peaks around 785 cm−1 (cytosine and uracil ring stretching), 811 cm−1 (phosphodiester bond stretching), and 1096 cm−1 (phosphodioxy group stretching). All spectra were acquired in 2 minutes (Reprinted with permission from [40]; Copyright 2004 American Chemical Society).
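The baseline subtraction mentioned above can be automated. A minimal sketch follows, using iterative polynomial fitting with clipping, one common approach to removing broad autofluorescence; the polynomial degree, iteration count, and synthetic test spectrum are illustrative assumptions rather than the procedure of any cited study.

```python
import numpy as np

def polynomial_baseline(wn, spectrum, degree=5, iterations=50):
    """Estimate a broad fluorescence baseline by repeatedly fitting a polynomial
    and clipping the working spectrum down to the fit, so that sharp Raman peaks
    stop pulling the fit upward while the smooth background is followed."""
    x = (wn - wn.mean()) / wn.std()   # rescale for a well-conditioned fit
    work = spectrum.astype(float)
    baseline = work
    for _ in range(iterations):
        baseline = np.polyval(np.polyfit(x, work, degree), x)
        work = np.minimum(work, baseline)
    return baseline

# Toy spectrum: two sharp peaks on a broad, curved fluorescence background.
wn = np.linspace(600.0, 1800.0, 1200)
background = 1e-6 * (wn - 500.0) ** 2
peaks = (np.exp(-((wn - 1004.0) / 5.0) ** 2)
         + 0.8 * np.exp(-((wn - 1450.0) / 10.0) ** 2))
raw = background + peaks
corrected = raw - polynomial_baseline(wn, raw)

# Away from the peaks the corrected spectrum should now sit near zero.
print(f"residual at 800 cm^-1: {corrected[np.argmin(np.abs(wn - 800.0))]:.3f}")
```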
### 1.4. CARS Microscopy and Spectroscopy
Raman scattering is a weak process—typically only 1 in ~10¹⁰ incident photons gives rise to a Raman-shifted photon. This is largely due to the excitation of the bond vibration far above its resonant frequency. In order to increase this efficiency, in coherent anti-Stokes Raman scattering (CARS) [25–28] the vibration is excited with two laser frequencies—the difference (or beat) frequency between them is matched to the vibrational frequency of interest. This gives between 4 and 6 orders of magnitude more signal than standard Raman, so CARS images are acquired in seconds, whereas a Raman map of similar quality would require days to complete. The process is best excited with pulsed lasers with durations of around 6 ps, which should mean that no photodamage occurs with extensive use of powers of at least 12 mW [29]. A CARS microscope designed for biological imaging, with a lateral resolution of 350 nm and an axial resolution of 1100 nm, is described in detail elsewhere [28]. Live cell imaging is slightly slower than that of fixed cells, but high-quality images are acquired within 1 minute.

Picosecond CARS excitation pulses have an estimated spectral width of around 3 cm−1, so are ideal for biological molecules, but this means that only one vibration (spectral peak) may be excited during an image. Images can be acquired sequentially at different wavenumbers by retuning one laser, but this is not ideal given the motion occurring during live cell imaging. Some CARS systems are able to acquire images at two different vibrational frequencies simultaneously [28]. Multiplex CARS [30–34] uses normal (narrowband) pulses for one laser, νP, and broadband pulses for νS. The broadband supercontinuum excitation pulse is several hundred nanometres wide and is generated in a photonic crystal fibre by a femtosecond laser. A full spectrum is acquired in around 100 milliseconds on live yeast [30], but an estimate of photodamage [29] suggests that 1 second per pixel would be more appropriate for eukaryotic cells. Further improvements to excitation sources could see this fall to the millisecond range—enabling high-quality, noninvasive full spectral mapping of live cells in minutes.

A different approach to increasing the speed of Raman microscopy, termed Stimulated Raman Scattering, has recently been published [35]. In a similar way to how stimulated emission depopulates the excited state in lasers, the excited state in a simple Raman excitation (pumped with νP) can be rapidly depopulated by a second laser (at νS) modulated in the MHz range. In this way, the signal at νS is increased slightly (Stimulated Raman Gain) and the pump power at νP is decreased slightly (Stimulated Raman Loss, SRL). Monitoring either signal, filtered by a lock-in amplifier, produces images which are background-free and directly proportional to concentration. The standard CARS signal has a quadratic dependence on concentration and can have a large unwanted background. Heterodyne CARS [36] is another technique which is able to circumvent both of these problems encountered in standard CARS imaging. SRL is a linear optical technique and is suitable for extension into optical coherence tomography deep into tissue, using low numerical aperture lenses [37].
## 1.1. Challenges in Stem Cell Science
In current stem cell biology and regenerative medicine, two of the greatest challenges [1, 2] are to control the differentiation of stem cells and to ensure the purity of isolated cells. These can both be addressed by careful monitoring and characterization of cells. The process of stem cell differentiation is at present monitored by biological assays, namely, immunocytochemistry [3, 4]. However, this process is time consuming as well as requiring biomarkers or labels. There is a clear need for a truly noninvasive technique which can monitor the degree of differentiation rapidly. Such a technique will most likely involve a form of optical imaging or spectroscopy but must not involve the addition of any kind of biomarker. Biomarkers are used to sort embryonic stem cells, in conjunction with fluorescent [5, 6] or magnetic [7] labels. These techniques are lengthy and time-consuming, but careful monitoring of stem cell differentiation is essential: in clinical applications, a population of fully differentiated cells is often implanted, but teratomas can result if any stem cells remain undifferentiated [8].There are a number of issue with the use of biomarkers for the characterization and sorting of stem cells and their derivatives. Firstly, only a limited number of biomarkers exists each one being cell-specific. Many cell types lack biomarkers, for example, cardiomyocytes [9], gastrointestinal stem cells [10], and corneal stem cells [11]. Secondly, the use of biomarkers raises issues with both biological researchers and clinicians, who would strongly prefer a label-free technique. Finally, these biomarkers cannot easily be translated; for example, embryonic stem cell biomarkers are not always applicable to adult stem cells.There are further issues with the use of fluorescent and magnetic markers. Fluorescent biomarkers [5, 6] have been employed in cell sorting and characterization, but fluorescent techniques have a number of drawbacks. Firstly, photobleaching means that signal levels drop over time; so long-term studies of differentiation are prohibited. Secondly, this process of photobleaching produces free radical singlet oxygen species which will damage live cells. Finally, the use of biomarkers causes modification to cells’ surface chemistry, and stem cells are highly sensitive to small changes in their surface chemistry. Magnetic beads cannot easily be visualised in microscopy; they must all be removed from the cells; a large mass could cause large mechanical stresses to the cells, which can affect the cells’ behaviour.There is thus a requirement from the stem cell community for a rapid, easy, sensitive, nondestructive, noninvasive, label-free technique which can be applied both on the single cell level and to monitoring or sorting large populations of cells. This review will concentrate on label-free optical spectroscopy techniques, which are noninvasive and have sufficiently high resolution to be applied at the single cell level.White light imaging—either phase contrast or differential interference contrast (DIC)—can reveal the approximate level of differentiation in situ, to those who are expert in stem cell culture. However, it is only really suitable for monolayers of cells. As white light imaging is usually only qualitative, it would benefit by being replaced by a more advanced optical technique capable of a quantitative measurement on individual cells. 
Such a technique should therefore be capable of high speed characterization, to enable large numbers of cells to be studied—in monolayer cultures, embryos, and scaffolds.
## 1.2. Infrared Absorption Spectroscopy
The first optical technique suitable for noninvasive characterization of cells is infrared absorption spectroscopy. Infrared light is absorbed by the wide variety of chemical bonds within molecules, which all have well-defined vibrational frequencies. Hence, an absorption spectrum of a cell should give a characteristic snapshot of the chemistry, and an undifferentiated cell’s spectrum could differ enough from that of a differentiated cell enough to characterize them. Simple infrared spectrometers use a broadband light source containing a wide range of wavelengths, which is typically passed through a cuvette of solution, through a dispersing spectrometer onto a single detector. This technique is slow, as the spectrum is built up from around 1000 sequential data points. In order to collect a full spectrum without losing the vast majority of the signal, Fourier transform infrared (FTIR) spectroscopy [12] uses both interferometry and a Fourier transform of the signal: from the time to frequency domain. A typical FTIR setup is illustrated in Figure 1(a), which requires a mirror to scan one half of the interferometer arm over a distance of a few millimetres. A full spectrum is typically acquired in around a second on live cells [13]. Synchrotron sources have promised vastly improved spectral acquisition times—up to 1000 times faster [14] than benchtop FTIR—but it is not clear whether this is applicable to live cells, as heating by absorption may prevent any increase in speed.Schematic experimental arrangements for (a) Fourier transform infrared (FTIR) spectroscopy, (b) Raman spectroscopy, and (c) coherent anti-Stokes Raman scattering (CARS) microscopy and spectroscopy.
(a)
FTIR(b)
Raman(c)
CARSThe lateral resolution of optical techniques is normally approximated by 0.6λ/N.A. where λ is the wavelength of illuminating light, and N.A. is the numerical aperture of illumination. Although an N.A. of 1.4 is achievable with objective lenses using visible light, infrared light has very low transmission through standard glass objectives; so a parabolic mirror (known as a Cassegrain objective) is normally used to focus light. These objectives have a typical N.A. of 0.4. The bonds in molecules are typically excited with infrared light of wavelengths between 2.8 and 16 μm, which corresponds to a lateral resolution of 4.2 to 24 μm, which is small enough to be applied to individual isolated cells or to average over groups of cells, but will not usually be cell specific when applied to an embryo or group of cells tightly bound together. FTIR microscopy has been achieved on fixed adherent mesenchymal stem cells (MSCs) [15] with a diameter of around 50 μm, but only for high-frequency (short wavelength) vibrations. These vibrational frequencies are described by spectroscopists as inverse wavelengths in units of cm−1 [“wavenumbers”]. The lowest frequency vibrations occur in cells around 600 cm−1 (λ = 16.7 μm) and the highest frequencies relate to the C−H stretch (2800–3000 cm−1, λ = 3.3–3.6 μm) and O−H stretch (~3500 cm−1, λ = 2.8 μm).One of the major issues with infrared radiation is its extremely low penetration depth in water, which limits the depth which can be probed in solution to the range 10–100μm. To combat absorption of infrared light through a thick cuvette, attenuated total reflection- (ATR-) FTIR [16] probes only the first micrometre above the substrate. An array of spectra can be acquired rapidly enough to perform imaging within a few minutes on live cells [17]. Results from FTIR on stem cells will be discussed later.
## 1.3. Raman Spectroscopy
The second optical technique suitable for stem cell characterization is Raman spectroscopy [18]. In the most widespread form—Stokes scattering—visible or near-infrared light loses energy (frequency) by exciting molecules into an excited vibrational state, as depicted in Figure 2. This means that some of the laser light is red-shifted after interacting with the sample. A typical setup is illustrated in Figure 1(b): after filtering out the laser, the remaining red-shifted light is passed through a spectrometer onto a cooled CCD camera. A full Raman spectrum is normally acquired in 1 to 10 seconds, so acquisition is typically slower than FTIR. As silicon CCDs have a response which dies away rapidly at a wavelength of around 1000 nm, the longest laser wavelength which can be used is 785 nm (AlGaAs diode), which is the preferred wavelength for Raman spectroscopy in biology. The strong C−H stretch frequencies are the highest measurable with such a laser wavelength. The other popular laser wavelength for biological samples is 633 nm (HeNe laser), which has lower power than 785 nm lasers; heating in cells and tissue is lowest using near-IR illumination (700–1100 nm). Heating has been measured directly in cells—using 100 mW of 1064 nm radiation in an optical trap, a temperature rise of <1°C was observed [19]. Prolonged exposure to 300 mW illumination at constant power (“continuous wave”, CW) caused photodamage—a light-induced reduction in cell viability—attributed to two-photon absorption. Visible light (λ < 700 nm) is believed to cause photodamage at far lower thresholds than this, due to increased absorption by proteins, but shorter wavelengths produce more Raman signal [20].

Figure 2
Schematic energy level diagrams for Raman, CARS, and FTIR processes. In Raman scattering one illuminating laser photon is absorbed (at pump frequency νP), and another is radiated (at Stokes frequency νS)—the difference in frequencies being equal to a vibrational frequency, νVIB. In CARS three photons are absorbed—two at frequency νP and one at νS—and one is emitted at the anti-Stokes frequency νAS. FTIR relies solely on the absorption of infrared radiation at νVIB.

The reduction in wavelengths for Raman spectroscopy, compared to FTIR, equates to a greatly improved lateral resolution of 350 nm and axial (depth) resolution of 1150 nm. This is sufficient to allow spectroscopy on individual cells, and even at the subcellular level. Furthermore, Raman microspectroscopy—otherwise known as Raman mapping or Raman microscopy—can be performed [21]. Maps can be produced containing the total signal under a given peak, or more subtle differences between spectra can be exploited with principal component analysis (PCA) [22] or cluster analysis. PCA reduces the large number of peaks in the Raman spectrum to a smaller number of independent variables, and cluster analysis produces colour-coded maps with regions of similar chemistry deduced from the set of spectra. Due to the long acquisition time required, only Raman microscopy of fixed cells had been performed until recently, as live cells move too quickly for the full set of spectra to be recorded. However, Raman microscopy has recently been performed on live cells with 100 mW power at 647 nm [23] and an acquisition time of 0.5 seconds, albeit with only 32 × 32 pixels. An alternative approach is to illuminate a line rather than a spot—then a series of spectra can be recorded across a single CCD, each relating to a pixel along the line. This technique has been applied to live cells [20] with 5 seconds per line (3 minutes per image) with 532 nm light at 3.5 mW/μm2 intensity, again with a small number of imaging pixels. It remains to be seen whether photodamage was occurring in both of these live cell imaging publications.

PCA can be used to extract the most significant variations between groups of spectra acquired on large numbers of cells. Thus, determining an unknown cell type from two possibilities—notably, stem cell and differentiated cell—can be accomplished using large numbers of spectra from known cell types. The most important differences are highlighted, rather than any uncorrelated and unimportant variations, to improve the sensitivity of the technique. No knowledge of the chemistry is required with this unsupervised technique. Two improved variants of PCA are discussed in relation to FTIR, and both show marked improvements in their ability to distinguish cell types [24]. Before analysis can be performed, spectra require processing to remove unwanted autofluorescence from tissue—normally by baseline subtraction—as well as the removal of cosmic rays and of substrate and media contributions to the spectrum. All this processing and analysis of spectra requires significant computing power when applied to large datasets.

Raman spectra consist of a large number of peaks at well-defined frequencies, as demonstrated in Figures 3 and 4. The relative intensities of the peaks change between cells, but the frequencies themselves do not shift. Peaks are usually around 5–10 cm−1 in width, except for the single peak at 1002 cm−1. This frequency relates to the in-plane breathing vibration of the aromatic ring of phenylalanine; the ring is highly symmetric, and the peak is narrower than the others because there is no out-of-plane component to the vibration and little variation in the other atoms attached to the benzene ring. Frequencies in FTIR can be slightly shifted by a few cm−1, and tend to be broader. Both techniques excite these vibrations to different extents; so the relative peak intensities will be different in Raman and FTIR spectra.

Figure 3
IR spectral analysis of a small intestinal crypt using synchrotron FTIR microspectroscopy. (a) Ten IR spectra of the entire biochemical-cell fingerprint region (900 to 1,800 cm−1) acquired from the assigned transit-amplifying location (locations 1–3; black lines, top), the putative stem cell location (locations 4–6; red lines, middle), and the differentiated location (locations 7–10; blue lines, bottom). (b) PC analysis of a small intestinal crypt’s IR spectra using the entire biochemical-cell fingerprint region. Reprinted with permission from [39] (Copyright AlphaMed Press 2008).
Figure 4
(a) First principal component, describing the major differences between two groups: undifferentiated murine embryonic stem cells and the differentiated cells via formation of embryoid bodies. (b) Raman spectrum of reference RNA, revealing a good deal of similarity with (a)—hence, the major change to the spectrum during differentiation is related to a reduction in RNA levels. Note the strong peaks around 785 cm−1 (cytosine and uracil ring stretching), 811 cm−1 (phosphodiester bond stretching), and 1096 cm−1 (phosphodioxy group stretching). All spectra were acquired in 2 minutes (Reprinted with permission from [40]; Copyright 2004 American Chemical Society).
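Since baseline subtraction followed by PCA recurs in most of the studies reviewed below, a minimal sketch of that workflow is given here, using numpy and scikit-learn; the polynomial baseline order and the number of components are arbitrary illustrative choices, not settings from any cited study.

```python
import numpy as np
from sklearn.decomposition import PCA

def subtract_baseline(spectrum, wavenumbers, order=3):
    # Crude autofluorescence removal: fit and subtract a low-order polynomial
    coeffs = np.polyfit(wavenumbers, spectrum, order)
    return spectrum - np.polyval(coeffs, wavenumbers)

def principal_components(spectra, wavenumbers, n_components=3):
    # spectra: (n_cells, n_wavenumbers) array of Raman or FTIR intensities
    cleaned = np.array([subtract_baseline(s, wavenumbers) for s in spectra])
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(cleaned)  # per-cell coordinates in PC space
    loadings = pca.components_           # PC "spectra", cf. Figure 4(a)
    return scores, loadings
```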
## 1.4. CARS Microscopy and Spectroscopy
Raman scattering is a weak process—typically only 1 in ~10^10 incident photons gives rise to a Raman-shifted photon. This is largely because the optical frequency is far above the resonant frequency of the bond vibration it excites. In order to increase this efficiency, in coherent anti-Stokes Raman scattering (CARS) [25–28] the vibration is excited with two laser frequencies—the difference (or beat) frequency between them is matched to the vibrational frequency of interest. This gives between 4 and 6 orders of magnitude more signal than standard Raman. This means that CARS images are acquired in seconds, whereas a similar quality Raman map would require days to complete. The process is best excited with pulsed lasers with durations of around 6 ps, which should mean that no photodamage occurs even with extensive use at powers of at least 12 mW [29]. A CARS microscope designed for biological imaging is described in detail elsewhere [28]; it has a lateral resolution of 350 nm and an axial resolution of 1100 nm. Live cell imaging is slightly slower than that of fixed cells, but high-quality images are acquired within 1 minute.

Picosecond CARS excitation pulses have an estimated spectral width of around 3 cm−1, so are ideal for biological molecules, but this means that only one vibration (spectral peak) may be excited during an image. Images can be acquired sequentially at different wavenumbers, by retuning one laser, but this is not ideal given the motion occurring during live cell imaging. Some CARS systems are able to acquire images at two different vibrational frequencies simultaneously [28]. Multiplex CARS [30–34] uses normal (narrowband) pulses for one laser, νP, and broadband pulses for νS. The broadband supercontinuum excitation pulse is several hundred nanometres wide and is excited in a photonic crystal fibre by a femtosecond laser. A full spectrum is acquired in around 100 milliseconds on live yeast [30], but an estimate of photodamage [29] suggests that 1 second per pixel would be more appropriate for eukaryotic cells. Further improvements to excitation sources could see this fall to the millisecond range—enabling high-quality, noninvasive full spectral mapping of live cells in minutes.

A different approach to increasing the speed of Raman microscopy, termed Stimulated Raman Scattering, has recently been published [35]. In a similar way to how stimulated emission depopulates the excited state in lasers, the excited state in a simple Raman excitation (pumped with νP) can be rapidly depopulated by a second laser (at νS) modulated in the MHz range. In this way, the signal at νS is increased slightly (Stimulated Raman Gain) and the pump power at νP is decreased slightly (Stimulated Raman Loss, SRL). Monitoring either signal, filtered by a lock-in amplifier, produces images which are background-free and directly proportional to concentration. The standard CARS signal has a quadratic dependence on concentration and can have a large unwanted background. Heterodyne CARS [36] is another technique which is able to circumvent both of these problems encountered in standard CARS imaging. SRL is a linear optical technique and is suitable for extension into optical coherence tomography deep into tissue, using low numerical aperture lenses [37].
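Keeping track of the three frequencies in Figure 2 is easiest in wavenumber terms (νAS = 2νP − νS, with the beat νP − νS tuned to νVIB); the sketch below does the wavelength bookkeeping, with the pump wavelength chosen purely as an example.

```python
def stokes_wavelength_nm(pump_nm, shift_cm1):
    # Wavelength whose optical frequency lies shift_cm1 below the pump's
    return 1.0 / (1.0 / pump_nm - shift_cm1 * 1e-7)

def anti_stokes_wavelength_nm(pump_nm, stokes_nm):
    # CARS emission: nu_AS = 2*nu_P - nu_S, working in 1/wavelength units
    return 1.0 / (2.0 / pump_nm - 1.0 / stokes_nm)

pump = 711.0                                    # example pump wavelength, nm
stokes = stokes_wavelength_nm(pump, 1448.0)     # CH2 deformation at 1448 cm^-1
print(stokes)                                   # ~793 nm
print(anti_stokes_wavelength_nm(pump, stokes))  # ~645 nm, blue of the pump
```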
## 2. Results and Discussion
### 2.1. Fourier-Transform Infrared (FTIR) Spectroscopy
FTIR spectroscopy was used to study murine embryonic stem cells by Ami et al. [38]. After 4–7 days of differentiation, changes to the absorption spectrum of fixed cells were noticed: features in the amide I band (1600–1700 cm−1) were enhanced, and those in the nucleic acid region (850–1050 cm−1) diminished. This means that the overall levels of DNA and RNA decrease, and the alpha helix content of proteins increases over time. Furthermore, new DNA/RNA hybrid bands at 899 cm−1 and 954 cm−1 start to occur around days 4–7, suggesting that mRNA translation is occurring at this time.

German et al. [11] employed high-intensity synchrotron radiation to probe 10 μm thick cryosections of bovine cornea. They used PCA to clearly distinguish the three cell types of interest: stem cells, transit-amplifying cells, and terminally differentiated cells. No biomarkers of corneal stem cells exist; so spectroscopic techniques offer the only viable method of cell characterization here.

From the same group, Walsh et al. [39] again used synchrotron FTIR, this time on paraffin-embedded human intestinal crypts, which were dewaxed. The position of cells along the crypt denotes the change from stem cell location to transit-amplifying region to differentiated location. PCA was used to compare spectral features and was able to separate cell types from three positions along the crypt, as shown in Figure 3. This method of characterization was compared with tissue stained with two different immunophenotypical markers: rabbit polyclonal anti-CD133 and β-catenin antibodies. The authors state that the dominant FTIR absorption peak at 1080 cm−1, relating to the symmetric (PO2)− stretch, is a more robust marker than the two biomarkers. As gastrointestinal stem cells lack specific biomarkers, they went on to compare FTIR data against a number of chemical differences, which are discussed at length in a further publication [10].

Salasznyk et al. [41] used FTIR to study osteoblasts derived from human mesenchymal stem cells after 28 days of cell culture. Samples were dried and ground into a powder, then pressed into a pellet. The spectrally derived mineral-to-matrix ratio was calculated as the ratio of the integrated areas of the phosphate absorbance (900–1200 cm−1) and protein amide I band (1585–1720 cm−1). They observed a significant decrease in the mineral-to-matrix ratio in the extracellular matrix produced by focal adhesion kinase- (FAK-) knockdown cells when compared to untreated (control) cells. These FTIR results compared favourably with biochemical assays.

Krafft et al. [15] also used FTIR to study human mesenchymal stem cells differentiating into osteoblasts. Their samples were fixed in methanol then dried, and they were able to distinguish cells stimulated in osteogenic medium for 7 days from nonstimulated cells. FTIR microscopy on isolated adherent cells (of size ~50 μm) showed that some of the nonstimulated cells had high levels of glycogen accumulation, and some stimulated cells had a high expression of calcium phosphate. Stimulated cells had reduced levels of amide I (at 1631 cm−1), meaning that lower concentrations of beta-sheet proteins were present. This compares well with Ami et al. [38], who measured a higher alpha helix proportion during differentiation. Nucleic acids were hard to detect in this study.

Bentley et al. [42] measured FTIR spectra of human corneal stem cells and their derivatives: transit-amplifying cells.
They made 10 μm thick cryosections of cornea, which were left to dry, and observed differences in synchrotron FTIR spectra which were attributed to nucleic acids. They were able to distinguish between the two cell types with spectra acquired from two different regions in the tissue, using PCA, albeit with an overlap of 16% between the two populations.

In summary, FTIR is able to distinguish between stem cells and their derivatives. Using synchrotron sources, the speed of data acquisition and the quality of comparison have improved greatly. Even microscopy has been performed, requiring a spectral acquisition at each imaging pixel. Distinguishing between individual cells, rather than populations, requires subconfluent adherent cells, because the spatial resolution (governed by the wavelength of infrared radiation) is of the order of the cell size. The technique has until now been limited to dried samples due to the high absorption coefficient of water; so it has not been used to noninvasively monitor stem cell differentiation in vitro, as the required drying is clearly destructive. However, ATR-FTIR can be used on live cell cultures; hence it could potentially be employed as a real-time noninvasive technique for monitoring stem cell differentiation in adherent cells (though no results have yet been published). Such a technique would be highly preferable to the use of biomarkers in clinical as well as research applications. One issue with ATR-FTIR is that it only probes the first 1–2 μm above a substrate; so the nucleus would only give a small contribution to the overall signal. Given that nucleic acids play a large part in the reported changes in spectral signatures during differentiation, and the nucleus may remain above this penetration depth, it remains to be seen whether ATR-FTIR can indeed be used as a noninvasive biomarker-free analytical technique for live cell studies.
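For context on the quoted 1–2 μm sampling depth, the evanescent-wave penetration depth of an ATR element follows a standard formula, sketched below; the germanium crystal index, the aqueous sample index, and the 45° incidence angle are assumed example values.

```python
import math

def atr_penetration_depth_um(wavelength_um, n_crystal, n_sample, angle_deg):
    # d_p = lambda / (2*pi*n1*sqrt(sin^2(theta) - (n2/n1)^2))
    theta = math.radians(angle_deg)
    root = math.sqrt(math.sin(theta) ** 2 - (n_sample / n_crystal) ** 2)
    return wavelength_um / (2 * math.pi * n_crystal * root)

# Germanium crystal (n ~ 4.0) against an aqueous sample (n ~ 1.3) at 45 deg
print(atr_penetration_depth_um(10.0, 4.0, 1.3, 45.0))  # ~0.63 um at 1000 cm^-1
```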
### 2.2. Raman Spectroscopy
Notingher et al. [43] used Raman spectroscopy to investigate live murine embryonic stem cells. 100 mW of 785 nm laser light was focussed onto a spot of size 10 μm × 5 μm × 25 μm. The group grew cells on a gelatine-coated quartz substrate, as plastic contains a number of vibrational bonds which are also present in cells. Glass gives a strong fluorescence background; so quartz or magnesium fluoride is preferred as a substrate. However, stem cells do not adhere well to glass and crystals; so a thin coating on the substrate is required, such as the gelatine used in this study. Great care must also be taken to ensure that the stem cells do not differentiate spontaneously on a given substrate or coating. Over 16 days of differentiation, they observed a decrease in the RNA peak (at 813 cm−1, O−P−O stretch) by 75% [40] and a drop of 50% in the DNA peak (at 788 cm−1, cytosine ring vibration). Peaks were normalized to the total Raman signal. They also extracted the first principal component spectrum, reproduced in Figure 4, which reveals the spectrum responsible for most of the differences between spectra of stem cells and differentiated cells. Note the high coincidence of many peaks of this principal component spectrum with the spectrum of RNA. This confirms that a reduction in RNA levels dominates the chemical changes during differentiation.

Chan et al. [9] acquired Raman spectra on live human embryonic stem cells (hESCs) as they differentiated into cardiomyocytes and were able to distinguish between hESCs and hESC-derived cardiomyocytes with an accuracy of 66%. They found that the RNA peak (811 cm−1) and DNA peaks (e.g., 785 cm−1, 1090 cm−1) were all reduced in intensity during differentiation.

We acquired previously unpublished data—Raman spectra of fixed mesenchymal stem cells grown on gelatine-coated quartz—using a diffraction-limited laser spot instead of averaging over a large area. This meant that spectra could be acquired separately from within the nucleus and within the cytoplasm; these spectra are shown in Figure 5. The spectrum from the substrate has been subtracted, but no baseline subtraction was performed, to highlight the requirement for automated subtraction in quantitative spectroscopic analysis. Small variations in laser power and focus position are thought to be responsible for the offset between spectra at low wavenumbers. As expected, the spectra in Figure 5 clearly demonstrate that the nucleus contains far more DNA and RNA than the cytoplasm, and that the cytoplasm contains far more proteins and lipids than the nucleus. These cells were fixed rather than live, but a study of fixation methods on Raman spectra [44] indicates that the effect of aldehyde cross-linking fixation on spectra is minimal. This study, together with the data from Notingher et al. [40, 43], implies that it is the nucleus size (or the nucleic acid density therein) which shrinks during differentiation. Hence, monitoring the nucleus size in a quantitative manner—using image processing techniques—would seem to be a good potential approach to monitoring the state of differentiation.

Figure 5
Individual Raman spectra within a single mesenchymal stem cell (red: nucleus, black: cytoplasm). The following peaks are specific to DNA and RNA: A (785 cm−1, uracil/cytosine/thymine ring breathing, O−P−O stretch), B (813 cm−1, O−P−O stretch), C (828 cm−1, O−P−O antisymmetric stretch), F (1093 cm−1, O−P−O stretch and C−C stretch), and H (1580 cm−1, adenine/guanine C−N stretch). Peaks specific to proteins are D (854 cm−1, tyrosine ring breathing), E (1004 cm−1, phenylalanine ring breathing), and I (1660 cm−1, amide I alpha helix). Peaks dominated by lipids are G (1448 cm−1, CH2 deformation) and J (2800–3000 cm−1, C−H stretch).

Mesenchymal stem cells were monitored by Raman spectroscopy during differentiation into osteoblasts [45]. Mineralization was monitored at two frequencies: 960 cm−1 (P−O stretch) and 1070 cm−1 (PO4^3−). The 960 cm−1 peak relates to the mineral hydroxyapatite—Ca5(PO4)3(OH)—and was by far the strongest signal; this peak height rose linearly from zero to the dominant peak in the spectrum over the 21 days of differentiation. The peak at 1030 cm−1 for CO3^2− remained constant throughout.

Pelled et al. [46] used Raman spectroscopy to compare tissue-engineered bone derived from mesenchymal stem cells with femoral bone. They found a very good similarity in phosphate (960 cm−1) and carbonate (595 cm−1) levels, with only minor spectral differences such as a larger amount of protein. Liu [47] also observed a major peak at 960 cm−1 due to hydroxyapatite, in mineral extracted from odontoblast nodules, which were formed by the differentiation of dental pulp stem cells.

Azrad et al. [48] used Raman spectroscopy to characterize the production of mineral content from mesenchymal stem cells under the influence of two osteogenic agents: quality elk velvet antler (QEVA) extract and dexamethasone. They measured no mineralization from the control group, some mineralization from the dexamethasone-fed cells, but most mineralization from cells supplemented with the elk velvet antler. Peaks indicated phosphate derivatives, which for QEVA were mostly related to hydroxyapatite (at 960 cm−1) and its precursors (amorphous calcium phosphate, Ca9(PO4)6·H2O, at 952 cm−1 and octacalcium phosphate, Ca8(PO4)6·5H2O, at 957 cm−1). For dexamethasone-fed cells, the dominant peak was of octacalcium phosphate, and lower amounts of hydroxyapatite and amorphous calcium phosphate were measured.

Gentleman et al. [49] compared mineralized nodules produced in vitro from three sources: embryonic stem cells, neonatal calvarial osteoblasts, and adult bone-marrow-derived mesenchymal stem cells. After 28 days in osteogenic medium, sets of Raman spectra were acquired of the mineralized nodules, and PCA was performed on the spectra to extract their chemical components. The osteoblasts and mesenchymal stem cells both produced nodules which were chemically similar to native bone: a combination, in descending order of concentration, of (a) carbonate-substituted hydroxyapatite, (b) crystalline nonsubstituted hydroxyapatite, and (c) amorphous phosphate species. However, embryonic stem cells produced nodules which were dominated only by the first mineral component, which was similar to synthetic carbonated hydroxyapatite.
Nanoindentation tests showed that the nodules derived from embryonic stem cells were more than an order of magnitude less stiff than those derived from osteoblasts and mesenchymal stem cells.

These Raman studies all show a similar ability to distinguish stem cells from their derivatives—that is, a sensitivity similar to that of the FTIR technique. In addition, mineralization can be monitored with great subtlety. The major advantage of FTIR over Raman is its speed; hence mapping signals (derived from a full spectral acquisition at each imaging pixel) into images is easier with FTIR. The major advantage of Raman over FTIR is that it has been used on live cells, so it is truly noninvasive.
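Several of the studies above report peak intensities normalized to the total Raman signal (e.g., the 813 cm−1 RNA and 788 cm−1 DNA peaks tracked over 16 days). A minimal sketch of that bookkeeping is shown below; the integration half-width is an arbitrary illustrative choice.

```python
import numpy as np

def normalized_peak_intensity(wavenumbers, spectrum, center_cm1, half_width=8.0):
    # Integrated intensity in a narrow band around the peak, divided by the
    # total integrated Raman signal, as used when comparing across days
    band = np.abs(wavenumbers - center_cm1) <= half_width
    return np.trapz(spectrum[band], wavenumbers[band]) / np.trapz(spectrum, wavenumbers)

# Example: rna = normalized_peak_intensity(wn, spec, 813.0)
#          dna = normalized_peak_intensity(wn, spec, 788.0)
```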
### 2.3. CARS Microscopy and Spectroscopy
CARS microscopy has been performed on live murine embryonic stem cells by Konorov et al. [50], but the laser setup only permitted pixel dwell times of 300 milliseconds, compared to microseconds in standard CARS microscopy [25–28]. Hence the image quality was poor—the group was unable to distinguish any individual cells or features when imaging at the DNA and RNA frequencies. However, CARS spectroscopy showed a large reduction in the RNA peak intensity (at 811 cm−1) in differentiated cells.

Figure 6 shows our CARS microscopy image, acquired on MCF-7 human breast cancer cells, which have a large nucleus-to-cytoplasm ratio similar to that of stem cells. Two sequentially acquired images are overlaid: in green, DNA/RNA is mapped at the phosphate backbone O−P−O stretch frequency (1095 cm−1), and in red, lipids and the cytoskeleton are mapped at the CH2 deformation frequency (1448 cm−1). The O−P−O stretch frequency is also present in Figure 4(a)—the principal component spectrum which describes most of the changes during differentiation—and in Figure 4(b)—the RNA spectrum. This method can easily be extended to monitoring the nucleus size in live stem cells, at a reduced frame rate of around 1 image per minute (we find that CARS imaging in live cells is slower than in dried, fixed cells).

Figure 6
CARS microscopy image of fixed MCF-7 breast cancer cells, of size 100 × 100 μm. The green channel is specific to the O−P−O stretch frequency (1095 cm−1) and originates from the DNA backbone, so highlights the nucleus (whose shape may have been distorted in some cells by the fixation process). The red channel is specific to the CH2 deformation (1448 cm−1) and is dominated by lipids and the cytoskeleton. The image plane is restricted to a 1 μm slice, acquired several micrometres above the glass substrate, and the lateral resolution is around 350 nm. Images of both channels were acquired in 1 second and averaged 5 times; the two channels were acquired sequentially after retuning one laser source. Pulse widths of 6 ps correspond to a spectral resolution of ~3 cm−1.

CARS microscopy is limited to monitoring one vibrational mode at a time, rather than comparing the full spectral signature. However, the RNA and DNA peaks have been shown to drop considerably during differentiation, and RNA is the dominant change to the spectra. So it is possible that CARS imaging will be able to measure the stage of differentiation purely by monitoring the size of each nucleus. White light imaging is less exact at measuring the nucleus size than either fluorescence or CARS microscopy and could not be extended beyond monolayers; so CARS is preferred as a noninvasive technique to monitor nuclear size. We expect that CARS should also be able to map mineralization from mesenchymal stem cells, by mapping the peak at 960 cm−1. The clear advantage of CARS is its high speed compared to the other spectroscopic techniques.

One of the most promising techniques over the coming years should be Multiplex CARS, which acquires a full vibrational spectrum at higher speed than Raman and is also applicable at the single cell level. This is bound to be more sensitive than monitoring just one peak in standard CARS. If the spectral acquisition speed is improved to the millisecond scale, the technology could be applied to both flow cytometry and microscopy.
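Because the green channel of such an image highlights the nucleus, nucleus size can be estimated from a CARS image by simple thresholding and pixel counting; the sketch below uses scipy for connected-component labelling, and the threshold rule and pixel size are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def nucleus_areas_um2(dna_channel, pixel_size_um=0.35, k=2.0):
    # Pixels brighter than mean + k*std in the O-P-O (1095 cm^-1) channel
    # are taken as nuclear; connected regions are then counted and sized
    mask = dna_channel > dna_channel.mean() + k * dna_channel.std()
    labels, n_regions = ndimage.label(mask)
    pixel_counts = np.bincount(labels.ravel())[1:]  # skip the background label
    return pixel_counts * pixel_size_um ** 2        # area of each nucleus

# areas = nucleus_areas_um2(image_green)  # track mean area over days
```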
## 3. Conclusions
The major noninvasive optical spectroscopy techniques suitable for analysis of stem cell differentiation have been outlined: namely, FTIR, Raman, and CARS. FTIR spectroscopy has only been performed on fixed or dried stem cells, but ATR-FTIR could be used to investigate live cells in future. Raman spectroscopy has demonstrated the ability to distinguish between live stem cells and differentiated cells: both murine and human embryonic stem cells display a large reduction in peak intensities of both RNA and DNA. When mesenchymal stem cells differentiate into osteoblasts, they display a clear peak (or peaks) relating solely to mineral composition.

It is well known that the nucleus to cytoplasm (volume) ratio in embryonic stem cells is elevated [51–54]. Hence, the reduction in RNA and DNA levels during differentiation is not entirely surprising. The assumption is therefore that the nucleus volume shrinks by around 50% during differentiation [53], rather than the concentration of nucleic acids reducing. CARS microscopy is well suited to monitoring the nucleus size or the mineralization content of each individual cell (around 960 cm−1) and can be applied to large numbers of cells in cultures and to engineered tissue scaffolds. Monitoring the CARS signal of the RNA or DNA peak could also be applied to high-speed cell flow cytometry. White light imaging (normally DIC or phase contrast) could offer a low-cost solution for monitoring the nucleus/cytoplasm ratio of cell monolayers, in conjunction with automated image recognition cytometry. CARS only monitors one (or possibly two) spectral peaks; so it will be less sensitive than Raman and FTIR, which both acquire a full vibrational spectrum. Multiplex CARS can acquire a full spectrum and promises to replace Raman spectroscopy in time, due to its improved speed.

Each spectroscopic technique has its own benefits and drawbacks; so each is suited to characterizing stem cells in different ways and on different “platforms.” Raman and FTIR spectroscopy are both most suited to monitoring cultures averaged over a large number of cells. FTIR does not have the required resolution to address single cells in confluent monolayer cultures, or a sufficient penetration depth for flow cytometry. Raman spectroscopy is too slow to characterize enough individual cells to be a worthwhile clinical technique but could be used extensively in biomedical research. Both FTIR and Raman techniques are more sensitive than normal CARS, which relies on excitation of just one spectral peak. However, CARS may be sufficiently sensitive to apply to characterization of differentiation in microscopy and flow cytometry. Raman and CARS have been integrated into one instrument, to combine the benefits of both techniques [55]. In future, Multiplex CARS promises to be the technique of choice for all platforms, due to its combined attributes of speed, full spectral analysis, and applicability to individual live cells.
---
*Source: 101864-2010-02-16.xml* | 2010 |
# Rolling Bearing Fault Diagnosis Based on CEEMD and Time Series Modeling
**Authors:** Liye Zhao; Wei Yu; Ruqiang Yan
**Journal:** Mathematical Problems in Engineering
(2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/101867
---
## Abstract
Accurately identifying faults in rolling bearing systems by analyzing vibration signals, which are often nonstationary, is challenging. To address this issue, a new approach based on complementary ensemble empirical mode decomposition (CEEMD) and time series modeling is proposed in this paper. This approach seeks to identify faults appearing in a rolling bearing system using a proper autoregressive (AR) model established from the nonstationary vibration signal. First, vibration signals measured from a rolling bearing test system with different defect conditions are decomposed into a set of intrinsic mode functions (IMFs) by means of the CEEMD method. Second, the vibration signals are filtered with calculated filtering parameters. Third, the IMF which is most closely correlated with the filtered signal is selected according to the correlation coefficient between the filtered signal and each IMF, and then the AR model of the selected IMF is established. Subsequently, the AR model parameters are taken as the input feature vectors, and a hidden Markov model (HMM) is used to identify the fault pattern of the rolling bearing. An experimental study performed on a bearing test system has shown that the presented approach can accurately identify faults in rolling bearings.
---
## Body
## 1. Introduction
Rolling element bearing failure is one of the foremost causes of failure in rotating machinery, and such failure may result in costly production loss and catastrophic accidents. Early detection and diagnosis of bearing faults while the machine is still in operation can help to avoid abnormal event progression and to reduce productivity loss [1]. Since structural defects cause changes in the bearing dynamic characteristics as manifested in vibrations, vibration-based analysis has long been established as a commonly used technique for diagnosing bearing faults [2]. However, nonlinear factors such as clearance, friction, and stiffness affect the complexity of the vibration signals; thus it is difficult to evaluate the working condition of rolling bearings accurately through time- or frequency-domain analysis alone, as is done traditionally [3].

In order to overcome limitations of the traditional techniques, the autoregressive (AR) model has been successfully applied to extracting features from vibration signals for fault diagnosis in recent years [4–6]. This is because the AR model is a time series analysis method whose parameters comprise important information about the system condition, and an accurate AR model can reflect the characteristics of a dynamic system [7]. For example, an AR model was combined with a fuzzy classifier for fault diagnosis in vehicle transmission gears [8]. Three distinct techniques of autoregressive modeling were compared for their performance and reliability under various bearing signal lengths [9]. A diagnosis method based on the AR model and continuous HMM has also been used to monitor and diagnose rolling bearing working conditions [10]. However, when the AR model is applied directly to nonstationary bearing vibration signals, the analysis results are imperfect, since the estimation method for the autoregression parameters of the AR model is no longer applicable. Because the vibration signal is nonstationary, whereas the AR model is suited to stationary signal processing, it is necessary to preprocess the vibration signals before the AR model is generated.

Empirical mode decomposition (EMD) is an adaptive time-frequency signal processing method [11]. With EMD, a signal is decomposed into a series of intrinsic mode functions (IMFs) according to its own characteristics [12]. For example, a fault feature extraction approach based on the EMD method and the AR model was used to process vibration signals of roller bearings [3]. However, when the EMD method is applied to nonstationary signals containing intermittent components, the original signal cannot be decomposed accurately because of the problem of mode mixing [13]. To alleviate mode mixing, Wu and Huang developed ensemble empirical mode decomposition (EEMD) to improve EMD. By adding noise to the original signal and calculating the means of the IMFs repeatedly, EEMD is more accurate and effective for signal decomposition [13]. Although the EEMD method effectively resolves the mode-mixing problem, implementing a large enough ensemble mean is time consuming; that is to say, the algorithm's efficiency is greatly reduced. To solve this problem, the complementary ensemble EMD (CEEMD) method was proposed [14]. In this approach, the residue of added white noises can be extracted from the mixtures of data and white noises via pairs of complementary ensemble IMFs with positive and negative added white noises.
The CEEMD method has the same performance as EEMD, but its computational efficiency is greatly improved.

In this paper, we combine the advantages of CEEMD and time series modeling and propose a new method based on CEEMD and the AR model for rolling bearing fault diagnosis. CEEMD is used as a pretreatment to filter the signal and extract the IMF which is most closely correlated with the filtered signal, and then the AR model of the selected IMF is established. The AR model parameters are used as the feature vectors for a classifier, where the hidden Markov model (HMM) is used to identify the fault pattern of a rolling bearing. The rest of this paper is organized as follows. In Section 2, the fault diagnosis method based on the AR model is reviewed, and the proposed method for rolling bearing fault diagnosis is discussed. The evaluations and experiments are presented in Section 3. Finally, concluding remarks are drawn in Section 4.
## 2. Theoretical Framework
### 2.1. Time Series Modeling
Autoregressive moving average (ARMA) model is the representative time series model, which can be expressed in linear difference equation form as

$$x_t + \varphi_1 x_{t-1} + \cdots + \varphi_n x_{t-n} = a_t + \theta_1 a_{t-1} + \cdots + \theta_m a_{t-m}, \tag{1}$$

where $n$ and $m$ are the orders of the ARMA($n$, $m$) model, $x_t$ is a zero-mean stationary random sequence, $a_t$ is a white noise sequence, and $\varphi_i$ and $\theta_j$ are the model parameters to be estimated. Estimating $\varphi_i$ and $\theta_j$ from the time sequence $x_t$ ($t = 1, 2, 3, \ldots$) is called time series modeling. If $\varphi_i = 0$, the ARMA($n$, $m$) model degrades to the $m$th-order MA($m$) model, and if $\theta_j = 0$, it degrades to the $n$th-order AR($n$) model in (1). The AR model is stable and its structure is simpler than that of the ARMA model. Therefore, provided its precision is sufficient for expressing the system, the AR model is established for characterizing the rolling bearing vibration signal; it is expressed as

$$x_t = \varphi_1 x_{t-1} + \cdots + \varphi_n x_{t-n} + a_t, \tag{2}$$

where $t = 1, 2, \ldots, N$, $N$ is the length of the time series $x_t$, $n$ is the model order, and $a_t \sim \mathrm{NID}(0, \sigma_a^2)$. The noise variance $\sigma_a^2$ is expressed as

$$\sigma_a^2 = \frac{1}{N-n} \sum_{t=n+1}^{N} \left( x_t - \sum_{i=1}^{n} \varphi_i x_{t-i} \right)^2. \tag{3}$$

It is critical to determine the order of the AR model, because the choice of order affects not only the accuracy of system identification but also the stability of the system. To estimate the order of the AR model correctly, the FPE, AIC, and BIC criteria are usually used [15]; they are expressed as

FPE criterion:
$$\mathrm{FPE}(n) = \frac{N+n}{N-n}\, \sigma_a^2, \tag{4}$$

AIC criterion:
$$\mathrm{AIC}(n) = N \ln \sigma_a^2 + 2n, \tag{5}$$

BIC criterion:
$$\mathrm{BIC}(n) = N \ln \sigma_a^2 + n \ln N. \tag{6}$$

After the model order is determined, the least squares method can be used to estimate the model parameters, and the AR model with specific parameters is then established.
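To make (2)–(6) concrete, the following Python sketch fits an AR($n$) model by least squares and scans candidate orders with the three criteria; the function names and the maximum order are our own illustrative choices, not part of the original method.

```python
import numpy as np

def fit_ar(x, n):
    # Fit x_t = phi_1*x_{t-1} + ... + phi_n*x_{t-n} + a_t by least squares;
    # returns the coefficient vector and the residual variance of equation (3)
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Column i holds the lag-(i+1) samples x_{t-i-1} for t = n .. N-1
    X = np.column_stack([x[n - i - 1 : N - i - 1] for i in range(n)])
    y = x[n:]
    phi, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma_a2 = np.sum((y - X @ phi) ** 2) / (N - n)   # equation (3)
    return phi, sigma_a2

def select_order(x, n_max=30):
    # Scan orders 1..n_max and keep the order minimizing the AIC of (5);
    # the FPE of (4) and BIC of (6) are computed alongside for comparison
    N = len(x)
    best = None
    for n in range(1, n_max + 1):
        _, s2 = fit_ar(x, n)
        fpe = (N + n) / (N - n) * s2           # equation (4)
        aic = N * np.log(s2) + 2 * n           # equation (5)
        bic = N * np.log(s2) + n * np.log(N)   # equation (6)
        if best is None or aic < best[1]:
            best = (n, aic, fpe, bic)
    return best  # (order, AIC, FPE, BIC)
```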
### 2.2. Complementary Ensemble Empirical Mode Decomposition
Complementary ensemble empirical mode decomposition (CEEMD) is an improved algorithm based on empirical mode decomposition (EMD). Through EMD process, any complex time series can be decomposed into finite numbers of intrinsic mode functions (IMFs), and each IMF reflects the dynamic characteristic of the original signal. The IMF component must satisfy two conditions: (a) the number of poles and zeros is either equal to each other or differs at most by one; (b) the upper and lower envelopes must be locally symmetric about the timeline. The basic principle of EMD method is to decompose the original signalx
(
t
) into the form as shown in (7) by continuously eliminating the mean of the upper and lower envelope connected with the minimum and maximum of the signal [16]. Consider
(7)
x
t
=
∑
i
=
1
n
im
f
i
(
t
)
+
r
n
(
t
)
,
where x
t is the vibration signal, im
f
i
(
t
) is the IMF component including different frequency bands ranging from high to low, and r
n
(
t
) is the residue of the decomposition process, which is the mean trend of x
t.The EMD method is a kind of adaptive local analysis method, with each IMF highlighting the local features of the data. However, EMD decomposition results often suffer from mode mixing, which is defined as either a single IMF consisting of widely disparate scales or a signal residing in different IMF components [17]. To make it clear, a simulated signal s
(
t
) consists of a Gaussian-type impulse interference s
1
(
t
) and a cosine component with 500 Hz frequency s
2
(
t
), and a trend term s
3
(
t
) is used as an example. The equation of the simulated signal is expressed as
(8)
s
(
t
)
=
sin
(
2
π
α
t
)
e
-
(
(
t
-
t
0
)
2
/
σ
)
+
cos
(
2
π
β
t
)
+
50
t
,
where α
=
3000, β
=
500, and σ
=
10
6.The waveform of the simulated signal is shown in Figure1, and the corresponding EMD results for the signal s
(
t
) are shown in Figure 2, where the mode mixing happens.Figure 1
Signal waveforms.Figure 2
The decomposition result by EMD.To overcome the problem of mode mixing, the ensemble empirical mode decomposition (EEMD) was proposed [18], where Gaussian white noises with finite amplitude are added to the original signal during the entire decomposition process. Due to the uniform distribution statistical characteristics of the white noise, the signal with white noise becomes continuous in different time scales, and no missing scales are present. As a result, mode mixing is effectively eliminated by the EEMD process [18]. The EEMD decomposition result of signal s
(
t
) is shown in Figure 3, where the added white noise amplitude is 0.25 times the original signal standard deviation, and the number of decompositions is 200 times.Figure 3
It should be noted that, during the EEMD process, each individual trial may produce noisy results, but the effect of the added noise can be suppressed by averaging over a large number of ensemble trials; this, however, is time consuming to implement. An improved algorithm, named complementary ensemble empirical mode decomposition (CEEMD), has been suggested to improve the computational efficiency. In this algorithm, the residue of the added white noises is extracted from the mixtures of data and white noises via pairs of complementary ensemble IMFs with positive and negative added white noises. Although this approach yields IMFs with an RMS noise similar to EEMD, it eliminates residual noise in the IMFs and overcomes the problem of mode mixing with much greater efficiency [14]. The procedure for implementing CEEMD is as follows:

(a) $x_1$ and $x_2$ are constructed by adding a pair of opposite-phase Gaussian white noises $x_n$ with the same amplitude, so that $x_1 = x + x_n$ and $x_2 = x - x_n$;

(b) $x_1$ and $x_2$ are decomposed by EMD only a few times, and $\mathrm{IMF}_{x1}$ and $\mathrm{IMF}_{x2}$ are the ensemble means of the corresponding IMFs generated from each trial;

(c) the average of the corresponding components in $\mathrm{IMF}_{x1}$ and $\mathrm{IMF}_{x2}$ is taken as the CEEMD decomposition result; that is,

$$\mathrm{IMF}=\frac{\mathrm{IMF}_{x1}+\mathrm{IMF}_{x2}}{2}. \tag{9}$$

The flow chart of CEEMD is shown in Figure 4, where $n$ is the number of decomposition trials.

Figure 4: Decomposition flow chart of CEEMD.

Figure 5 shows the decomposition result by CEEMD for the signal $s(t)$. Compared with the result shown in Figure 3, the decomposition accuracies of EEMD and CEEMD are consistent, while EEMD takes 1.62 s and CEEMD needs only 0.13 s.

Figure 5: The decomposition result by CEEMD.
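A minimal sketch of steps (a)-(c) is given below. The routine is written against any EMD implementation that maps a 1-D signal to an array of IMFs (for example, `EMD().emd` from the PyEMD package); that callable, the trial count, and the noise amplitude are assumptions of this sketch, not values fixed by the text.

```python
import numpy as np

def ceemd(x, emd, n_trials=50, noise_std=0.25, seed=0):
    """CEEMD sketch: average complementary +/- noise EMD trials, cf. (9)."""
    rng = np.random.default_rng(seed)
    total, n_kept = None, None
    for _ in range(n_trials):
        noise = noise_std * np.std(x) * rng.standard_normal(len(x))
        imfs_pos = np.asarray(emd(x + noise))     # trial with +noise, steps (a)/(b)
        imfs_neg = np.asarray(emd(x - noise))     # complementary trial with -noise
        k = min(len(imfs_pos), len(imfs_neg))     # trials may yield different IMF counts
        pair = (imfs_pos[:k] + imfs_neg[:k]) / 2  # complementary average, eq. (9)
        if total is None:
            total, n_kept = pair, k
        else:
            n_kept = min(n_kept, k)
            total = total[:n_kept] + pair[:n_kept]
    return total[:n_kept] / n_trials              # ensemble mean over all trials
```

For instance, `imfs = ceemd(s, EMD().emd)` would decompose the simulated signal above, with mode mixing suppressed by the complementary noise pairs.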
### 2.3. Fault Diagnosis Based on CEEMD and Time Series Model
Based on CEEMD and the time series model, a hybrid fault diagnosis approach can be designed. The hybrid approach combines the advantages of the CEEMD method in nonstationary signal decomposition with the ability of time series modeling in feature extraction. The flow chart of the developed approach is shown in Figure 6.

Figure 6: The flow chart of the proposed method.

The main steps are as follows.

Step 1. The rolling bearing vibration signal is sampled and then decomposed by CEEMD following the process shown in Figure 4.

Step 2. The product of the energy density and the average period of the IMFs, which according to [19] is a constant, is calculated using (10), and the parameter $RP_j$ is calculated using (11). The signal is then filtered by comparing $RP_j$ against a given threshold: when $RP_j \geq 1$, the first $j-1$ IMFs together with the trend term are removed as noise, and the remaining IMFs are summed to rebuild the filtered signal [19, 20]:
$$P_j = E_j \times T_j, \tag{10}$$

$$RP_j=\frac{\left|\,P_j-\dfrac{1}{j-1}\displaystyle\sum_{i=1}^{j-1}P_i\,\right|}{\dfrac{1}{j-1}\displaystyle\sum_{i=1}^{j-1}P_i},\quad j\ge 2, \tag{11}$$

where $E_j=(1/N)\sum_{i=1}^{N}[A_j(i)]^2$ is the energy density of the $j$th IMF, $T_j=2N/O_j$ is the average period of the $j$th IMF, $N$ is the length of each IMF, $A_j$ is the amplitude of the $j$th IMF, and $O_j$ is the total number of extreme points of the $j$th IMF.
Step 3. Equation (12) is used to calculate the correlation coefficient between the filtered signal and each IMF, and the IMF most closely correlated with the filtered signal is selected for AR modeling [21]:

$$\rho_{xy}=\frac{\displaystyle\sum_{k=1}^{N}x(k)\,y(k)}{\left[\displaystyle\sum_{k=1}^{N}x(k)^2\sum_{k=1}^{N}y(k)^2\right]^{1/2}}. \tag{12}$$

Step 4. The least squares method is used to estimate the parameter vector of the AR model established in Step 3, and this parameter vector is taken as the model feature vector.
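The following sketch covers Steps 3 and 4: the IMF most correlated with the filtered signal is selected by (12) (which, as written, omits mean removal), and the AR($n$) coefficients of (2) are then estimated by least squares. The function names are ours.

```python
import numpy as np

def rho(x, y):
    """Correlation coefficient of (12)."""
    return np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2))

def select_imf(filtered, imfs):
    """Step 3: pick the IMF most closely correlated with the filtered signal."""
    return max(imfs, key=lambda c: rho(filtered, c))

def fit_ar(x, n):
    """Step 4: least-squares AR(n) fit, x_t = phi_1 x_{t-1} + ... + phi_n x_{t-n} + a_t."""
    X = np.column_stack([x[n - i: len(x) - i] for i in range(1, n + 1)])
    phi, *_ = np.linalg.lstsq(X, x[n:], rcond=None)
    return phi        # feature vector (phi_1, ..., phi_n)
```

With the experimental data of Section 3.2, `fit_ar(select_imf(filtered, imfs), 6)` would yield the six-coefficient feature vector used there.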
Step 5. After scalar quantization using the index rule of Lloyd's algorithm in (13) [22], the feature vector is used to train the HMM of each bearing working condition:

$$\operatorname{indx}(x)=\begin{cases}1, & x\le \operatorname{partition}(1),\\[2pt] i+1, & \operatorname{partition}(i)<x\le \operatorname{partition}(i+1),\\[2pt] N, & \operatorname{partition}(N-1)<x,\end{cases} \tag{13}$$

where $N$ is the length of the codebook vector, $\operatorname{partition}(i)$ is the partition vector of length $N-1$, and $x$ is the feature vector to be quantized.

Step 6. A test vibration signal can then be acquired for diagnosis, and its model feature vector is first extracted. After scalar quantization, the feature vector is fed into the well-trained HMMs, and the HMM yielding the maximum probability is taken as the classification result [23].
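Steps 5 and 6 reduce to a table lookup plus a maximum-likelihood decision. The sketch below implements the index rule of (13) with `numpy.digitize`; the HMM side assumes a trained model per condition exposing a log-likelihood `score` method (as in the hmmlearn package, one possible implementation; the paper does not name a library).

```python
import numpy as np

def quantize(x, partition):
    """Index rule of (13): returns codes in 1..N for N-1 partition points."""
    # right=True matches partition(i) < x <= partition(i+1)
    return np.digitize(x, partition, right=True) + 1

def classify(feature_vector, partition, models):
    """Step 6: pick the condition whose HMM scores the quantized features highest."""
    obs = (quantize(feature_vector, partition) - 1).reshape(-1, 1)  # 0-based symbols
    return max(models, key=lambda name: models[name].score(obs))
```

Here `models` would map condition names ("no defect", "inner ring defect", and so on) to HMMs trained on quantized feature vectors of that condition.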
## 3. Evaluation of the Method Based on CEEMD and AR Model
### 3.1. Evaluation Using Simulated Signals
To demonstrate the validity of the method proposed in this study, three signals $x_1(t)$, $x_2(t)$, and $x_3(t)$ are simulated as shown in Figure 7. The signal $x_1(t)$ consists of a Gaussian-type impulse interference, a cosine component with 10 Hz frequency, a trend term, and white noise. The signal $x_2(t)$ consists of a Gaussian-type impulse interference, a square wave with 65% duty ratio, a trend term, and white noise. The signal $x_3(t)$ consists of a Gaussian-type impulse interference, a sawtooth wave with 15 Hz frequency, a trend term, and white noise.

Figure 7: Signal waveforms of $x_1(t)$, $x_2(t)$, and $x_3(t)$.

Figure 8 shows the results of the CEEMD of signals $x_1(t)$, $x_2(t)$, and $x_3(t)$. Correlation coefficients between the filtered signal and each IMF are listed in Table 1.
Table 1: Correlation coefficients between filtered signal and each IMF.

| Signal | IMF1 | IMF2 | IMF3 | IMF4 | IMF5 | IMF6 | IMF7 | IMF8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $x_1(t)$ | −0.0031 | −0.0009 | 0.0371 | 0.4096 | 0.9668 | 0.2428 | 0.1273 | −0.0448 |
| $x_2(t)$ | 0.0051 | 0.0004 | 0.0435 | 0.2111 | 0.4695 | 0.8887 | 0.7214 | −0.0201 |
| $x_3(t)$ | −0.0234 | −0.0154 | 0.0286 | 0.5900 | 0.8953 | 0.1649 | 0.1887 | −0.0210 |

Figure 8: The decomposition results by CEEMD. (a) $x_1(t)$; (b) $x_2(t)$; (c) $x_3(t)$.

It can be seen in Table 1 that the IMF most closely correlated with the filtered signal is IMF5 for signals $x_1(t)$ and $x_3(t)$ and IMF6 for signal $x_2(t)$. These IMFs are used to construct the AR models, and the corresponding feature vectors are estimated as shown in Table 2. After scalar quantization, the feature vectors are used to train the HMM for signal classification.
Table 2: Model parameter estimation results.

| Signal | $\varphi_1$ | $\varphi_2$ | $\varphi_3$ | $\varphi_4$ | $\varphi_5$ | $\varphi_6$ |
| --- | --- | --- | --- | --- | --- | --- |
| $x_1(t)$ | 4.7183 | −9.1103 | 9.2034 | −5.1408 | 1.5207 | −0.1914 |
| $x_2(t)$ | 4.8894 | −9.8269 | 10.3945 | −6.1194 | 1.9153 | −0.2531 |
| $x_3(t)$ | 4.8718 | −9.9955 | 11.1616 | −7.2529 | 2.6430 | −0.4282 |

A total of 90 feature vectors were collected from the three groups of signals using the proposed approach. One-third of the feature vectors in each condition were used for training the classifier, and the others were used for testing. The results of the signal classification are listed in Table 3.
Table 3: Signal classification results.

| Signal type | Test samples | Classified as $x_1(t)$ | Classified as $x_2(t)$ | Classified as $x_3(t)$ | Classification rate [%] | Overall classification rate [%] |
| --- | --- | --- | --- | --- | --- | --- |
| $x_1(t)$ | 20 | 19 | 1 | 0 | 95 | 96.7 |
| $x_2(t)$ | 20 | 0 | 19 | 1 | 95 | |
| $x_3(t)$ | 20 | 0 | 0 | 20 | 100 | |

The results in Table 3 indicate that the presented method based on CEEMD and time series modeling can effectively distinguish the different signals, with an overall classification rate of 96.7%. For comparison, the classification rates of the method based on time series modeling alone and of the method based on EMD and time series modeling were also calculated, giving 88.3% and 93.3%, respectively. The signal classification method proposed in this paper thus improves the classification rate to a noticeable extent.
### 3.2. Evaluation Using Experimental Data
In order to illustrate the practicability and effectiveness of the proposed method, a bearing fault data set from the electrical engineering laboratory of Case Western Reserve University is analyzed [24]. The data set was acquired from the test stand shown in Figure 9, which consists of a 2 hp motor, a torque transducer, a dynamometer, and control electronics. The test bearings, which support the motor shaft, are deep groove ball bearings of type 6205-2RS JEM SKF. Vibration data were collected at 12,000 samples per second using accelerometers attached to the housing with magnetic bases. The motor load level was controlled by the fan on the right side of Figure 9.

Figure 9: Bearing test stand.

Figure 10 illustrates representative waveforms of the vibration signals measured from the test bearings under four conditions: (a) signal from a healthy bearing, (b) signal from a bearing with an inner ring defect, (c) signal from a bearing with a rolling element defect, and (d) signal from a bearing with an outer ring defect. These signals were measured under 0 hp motor load at a motor speed of 1797 rpm. The decomposed IMFs of these signals are shown in Figure 11.

Figure 10: Vibration signal waveforms of the different conditions, panels (a)-(d).

Figure 11: The decomposition results by CEEMD under different conditions. (a) No defect; (b) inner ring defect; (c) rolling element defect; (d) outer ring defect.

Correlation coefficients calculated between the filtered signal and each IMF are shown in Table 4.
Table 4: Correlation coefficients between filtered signals and each IMF.

| Signal | IMF1 | IMF2 | IMF3 | IMF4 | IMF5 | IMF6 | IMF7 | IMF8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (a) | 0.4135 | 0.7538 | 0.4381 | 0.4880 | 0.4356 | 0.1792 | 0.0971 | −0.0056 |
| (b) | 0.8794 | 0.4275 | 0.2583 | 0.1337 | 0.0421 | 0.0285 | −0.0009 | −0.0074 |
| (c) | 0.9509 | 0.2180 | 0.2325 | 0.1337 | 0.0821 | 0.0350 | −0.0017 | 0.0009 |
| (d) | 0.9878 | 0.1267 | 0.0636 | 0.0509 | 0.0136 | 0.0060 | −0.0008 | −0.0068 |

The IMF most closely correlated with the filtered signal is IMF2 for signal (a) and IMF1 for signals (b), (c), and (d). These IMFs are used for AR model construction. The model order estimation curves of the four conditions based on the FPE criterion are shown in Figure 12; when the model order reaches 6, each model's residual tends to be stable. The model order is therefore selected as 6, and the results of the parameter estimation are listed in Table 5.
Table 5: Model parameter estimation results.

| Signal | $\varphi_1$ | $\varphi_2$ | $\varphi_3$ | $\varphi_4$ | $\varphi_5$ | $\varphi_6$ |
| --- | --- | --- | --- | --- | --- | --- |
| (a) | 3.1280 | −4.7797 | 4.2245 | −2.1489 | 0.4241 | 0.0356 |
| (b) | 0.2084 | −1.3585 | 0.5142 | −0.6356 | 0.3471 | −0.0422 |
| (c) | 0.1335 | −1.6472 | 0.3941 | −0.8473 | 0.2142 | −0.1011 |
| (d) | −0.1172 | −1.2159 | 0.1178 | −0.1283 | 0.1467 | 0.2533 |

Figure 12: The model order estimation curves. (a) No defect; (b) inner ring defect; (c) rolling element defect; (d) outer ring defect.

The parameters in Table 5 were quantized by Lloyd's algorithm in (13) as feature vectors for training the HMMs of the different conditions. The results of the quantization are shown in Figure 13.

Figure 13: The results of quantization. (a) No defect; (b) inner ring defect; (c) rolling element defect; (d) outer ring defect.

A total of 160 feature vectors were collected from the four conditions; half of the feature vectors were used for training the classifier and the others for signal classification, and the classification results are listed in Table 6. Out of 80 test feature vectors, only two cases were misclassified, giving an overall classification rate of 97.5%.
Table 6: Fault diagnosis using CEEMD and time series model.

| Fault type | Test samples | Classified as no defect | Classified as inner ring defect | Classified as rolling element defect | Classified as outer ring defect | Classification rate [%] | Overall classification rate [%] |
| --- | --- | --- | --- | --- | --- | --- | --- |
| No defect | 20 | 20 | 0 | 0 | 0 | 100 | 97.5 |
| Inner ring defect | 20 | 0 | 19 | 1 | 0 | 95 | |
| Rolling element defect | 20 | 0 | 1 | 19 | 0 | 95 | |
| Outer ring defect | 20 | 0 | 0 | 0 | 20 | 100 | |

For comparison, Tables 7 and 8 list the classification results obtained with time series modeling applied directly to the measured signals and with the method based on EMD and time series modeling, respectively. The comparison shows that the proposed method is effective for rolling bearing fault diagnosis, with an overall classification rate noticeably higher than those of the other two methods.
Table 7: Fault diagnosis using time series model only.

| Fault type | Test samples | Classified as no defect | Classified as inner ring defect | Classified as rolling element defect | Classified as outer ring defect | Classification rate [%] | Overall classification rate [%] |
| --- | --- | --- | --- | --- | --- | --- | --- |
| No defect | 20 | 19 | 1 | 0 | 0 | 95 | 90.0 |
| Inner ring defect | 20 | 1 | 17 | 2 | 0 | 85 | |
| Rolling element defect | 20 | 0 | 2 | 17 | 1 | 85 | |
| Outer ring defect | 20 | 0 | 0 | 1 | 19 | 95 | |
Table 8: Fault diagnosis using EMD and time series model.

| Fault type | Test samples | Classified as no defect | Classified as inner ring defect | Classified as rolling element defect | Classified as outer ring defect | Classification rate [%] | Overall classification rate [%] |
| --- | --- | --- | --- | --- | --- | --- | --- |
| No defect | 20 | 19 | 1 | 0 | 0 | 95 | 93.75 |
| Inner ring defect | 20 | 0 | 18 | 2 | 0 | 90 | |
| Rolling element defect | 20 | 0 | 1 | 19 | 0 | 95 | |
| Outer ring defect | 20 | 0 | 0 | 1 | 19 | 95 | |
## 4. Conclusions
Aiming at diagnosing rolling bearing faults, a hybrid approach based on CEEMD and time series modeling is proposed in this paper. The CEEMD method can decompose a nonstationary signal into a series of IMFs at low computational cost. The AR model is an effective approach to extracting fault features from vibration signals, and the fault pattern can be identified directly from the extracted features without establishing a mathematical model of the system or studying its fault mechanism. In this paper, the CEEMD method is used as a pretreatment, which increases the accuracy of the AR model for the measured signal, and the AR model of the IMF most closely correlated with the filtered signal is established to extract the fault feature parameters. Compared with the EMD-AR approach and the direct modeling approach, in which raw signals are used directly as input for AR modeling, a higher classification rate was achieved by the new approach (96.7% for the simulated signals and 97.5% for the experimental data). We anticipate that the proposed method can also be applied to incipient fault diagnosis in rolling bearings, although further experiments are needed to verify its accuracy there. Since the approach presented in this study is generic in nature, it can be readily adapted to a broad range of machine fault diagnosis applications.
---
*Source: 101867-2014-07-07.xml*. Liye Zhao, Wei Yu, and Ruqiang Yan, "Rolling Bearing Fault Diagnosis Based on CEEMD and Time Series Modeling," Mathematical Problems in Engineering (2014), Hindawi Publishing Corporation, CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/), DOI: 10.1155/2014/101867.
## Abstract
Accurately identifying faults in rolling bearing systems by analyzing vibration signals, which are often nonstationary, is challenging. To address this issue, a new approach based on complementary ensemble empirical mode decomposition (CEEMD) and time series modeling is proposed in this paper. This approach seeks to identify faults appearing in a rolling bearing system using proper autoregressive (AR) model established from the nonstationary vibration signal. First, vibration signals measured from a rolling bearing test system with different defect conditions are decomposed into a set of intrinsic mode functions (IMFs) by means of the CEEMD method. Second, vibration signals are filtered with calculated filtering parameters. Third, the IMF which is closely correlated to the filtered signal is selected according to the correlation coefficient between the filtered signal and each IMF, and then the AR model of the selected IMF is established. Subsequently, the AR model parameters are considered as the input feature vectors, and the hidden Markov model (HMM) is used to identify the fault pattern of a rolling bearing. Experimental study performed on a bearing test system has shown that the presented approach can accurately identify faults in rolling bearings.
---
## Body
## 1. Introduction
Rolling element bearing failure is one of the foremost causes of failures in rotating machinery, and such failure may result in costly production loss and catastrophic accidents. Early detection and diagnosis of bearing faults while the machine is still in operation can help to avoid abnormal event progression and to reduce productivity loss [1]. Since structural defects can cause changes of the bearing dynamic characteristics as manifested in vibrations, vibration-based analysis has long been established as a commonly used technique for diagnosing bearing faults [2]. However, some nonlinear factors such as clearance, friction, and stiffness affect complexity of the vibration signals; thus it is difficult to make an accurate evaluation on the working condition of rolling bearings only through analysis in time or frequency domain as it does traditionally [3].In order to overcome limitations of the traditional techniques, autoregressive (AR) model has been successfully applied to extracting features from vibration signals for fault diagnosis in recent years [4–6]. This is because AR model is a time series analysis method whose parameters comprise important information of the system condition, and an accurate AR model can reflect the characteristics of a dynamic system [7]. For example, AR model was combined with a fuzzy classifier for fault diagnosis in vehicle transmission gear [8]. Three distinct techniques of autoregressive modeling were compared for their performance and reliability under conditions of various bearings signal lengths [9]. A diagnosis method based on the AR model and continuous HMM has also been used to monitor and diagnose the rolling bearing working conditions [10]. However, when the AR model is applied directly to the nonstationary bearing vibration signals, the analysis results are imperfect since the estimation method of the autoregression parameters of the AR model is no longer applicable. Because the vibration signal is nonstationary, whereas the AR model is suitable for stationary signal processing, it is, therefore, necessary to preprocess the vibration signals before the AR model is generated.Empirical mode decomposition (EMD) is an adaptive time-frequency signal processing method [11]. With EMD, a signal is decomposed into a series of intrinsic mode functions (IMFs) according to its own characteristics [12]. For example, a new fault feature extraction approach based on EMD method and AR model was used to process vibration signals of roller bearings [3]. However, when the EMD method is applied to the nonstationary signals containing intermittent signal components, the original signal cannot be decomposed accurately because of the problem of mode mixing [13]. To alleviate mode mixing, Wu and Huang developed ensemble empirical mode decomposition (EEMD) to improve EMD. By adding noise to the original signal and calculating the means of IMFs repeatedly, EEMD is more accurate and effective for signal decomposition [13]. Although the EEMD method has effectively resolved the mode-mixing problem, it is time consuming for implementing the large enough ensemble mean. That is to say, the algorithm efficiency will be greatly reduced. Aiming at solving this problem, the complementary ensemble EMD (CEEMD) method is proposed [14]. In this approach, the residue of added white noises can be extracted from the mixtures of data and white noises via pairs of complementary ensemble IMFs with positive and negative added white noises. 
The CEEMD method has the same performance as the EEMD, but the computational efficiency is greatly improved.In this paper, we combine the advantages of CEEMD and time series model and propose a new method based on CEEMD and AR model for rolling bearing fault diagnosis. The CEEMD is used as the pretreatment to filter the signal and extract the IMF which is closely correlated to the filtered signal, and then the AR model of the selected IMF is established. The AR model parameters are used as the feature vectors to a classifier, where the hidden Markov model (HMM) is used to identify the fault pattern of a rolling bearing. The rest of this paper is organized as follows. In Section2, the review of the fault diagnosis method based on AR model is presented, and the proposed method for rolling bearing fault diagnosis is discussed. The evaluations and experiments are presented in Section 3. Finally, concluding remarks are drawn in Section 4.
## 2. Theoretical Framework
### 2.1. Time Series Modeling
Autoregressive moving average (ARMA) model is the representative time series model, which can be expressed in linear difference equation form as(1)
x
t
+
φ
1
x
t
-
1
+
⋯
+
φ
n
x
t
-
n
=
a
t
+
θ
1
a
t
-
1
+
⋯
+
θ
m
a
t
-
m
,
where n and m are the parameters of the ARMA (n
,
m) model, x
t is zero mean stationary random sequence, a
t is white noise sequence, and φ
i and θ
j are model parameters to be estimated. The parameters of φ
i and θ
j are estimated by the time sequence of x
t (t
=
1,2
,
3
…), which is called the time series modeling. If φ
i
=
0, the ARMA (n
,
m) model will degrade to m order MA(m) model, and if θ
i
=
0, the ARMA (n
,
m) model will degrade as n order AR (n) model in (1). The AR model is stable and its structure is simpler than ARMA model. Therefore, the AR model will be established for characterizing the rolling bearing vibration signal, if the precision of the model is enough for expressing the system, which is expressed as
(2)
x
t
=
φ
1
x
t
-
1
+
⋯
+
φ
n
x
t
-
n
+
a
t
,
where t
=
1,2
,
…
,
N, N is the length of the time series x
t, n is the order number, and a
t
~
NID
(
0
,
σ
a
2
). The σ
a
2 is expressed as
(3)
σ
α
2
=
1
N
-
n
n
∑
t
=
n
+
1
n
(
x
t
-
∑
ι
=
1
n
φ
ι
x
t
-
1
)
2
.It is critical to determine the order number of the AR model, because the accuracy of the order not only affects the accuracy of identification of the system, but also influences the stability of the system. In order to estimate the order of the AR model correctly, FPE criterion, BIC criterion, and AIC criterion are usually used [15], and they are expressed asFPE criterion(4)
FPE
(
n
)
=
N
+
n
N
-
n
σ
a
2
,
BIC criterion(5)
AIC
(
n
)
=
N
ln
σ
a
2
+
2
n
,
AIC criterion(6)
BIC
(
n
)
=
N
ln
σ
a
2
+
n
ln
N
.After the model order is determined, the nonlinear least squares method can be used to estimate model parameters, and then the AR model with specific parameters is established.
### 2.2. Complementary Ensemble Empirical Mode Decomposition
Complementary ensemble empirical mode decomposition (CEEMD) is an improved algorithm based on empirical mode decomposition (EMD). Through EMD process, any complex time series can be decomposed into finite numbers of intrinsic mode functions (IMFs), and each IMF reflects the dynamic characteristic of the original signal. The IMF component must satisfy two conditions: (a) the number of poles and zeros is either equal to each other or differs at most by one; (b) the upper and lower envelopes must be locally symmetric about the timeline. The basic principle of EMD method is to decompose the original signalx
(
t
) into the form as shown in (7) by continuously eliminating the mean of the upper and lower envelope connected with the minimum and maximum of the signal [16]. Consider
(7)
x
t
=
∑
i
=
1
n
im
f
i
(
t
)
+
r
n
(
t
)
,
where x
t is the vibration signal, im
f
i
(
t
) is the IMF component including different frequency bands ranging from high to low, and r
n
(
t
) is the residue of the decomposition process, which is the mean trend of x
t.The EMD method is a kind of adaptive local analysis method, with each IMF highlighting the local features of the data. However, EMD decomposition results often suffer from mode mixing, which is defined as either a single IMF consisting of widely disparate scales or a signal residing in different IMF components [17]. To make it clear, a simulated signal s
(
t
) consists of a Gaussian-type impulse interference s
1
(
t
) and a cosine component with 500 Hz frequency s
2
(
t
), and a trend term s
3
(
t
) is used as an example. The equation of the simulated signal is expressed as
(8)
s
(
t
)
=
sin
(
2
π
α
t
)
e
-
(
(
t
-
t
0
)
2
/
σ
)
+
cos
(
2
π
β
t
)
+
50
t
,
where α
=
3000, β
=
500, and σ
=
10
6.The waveform of the simulated signal is shown in Figure1, and the corresponding EMD results for the signal s
(
t
) are shown in Figure 2, where the mode mixing happens.Figure 1
Signal waveforms.Figure 2
The decomposition result by EMD.To overcome the problem of mode mixing, the ensemble empirical mode decomposition (EEMD) was proposed [18], where Gaussian white noises with finite amplitude are added to the original signal during the entire decomposition process. Due to the uniform distribution statistical characteristics of the white noise, the signal with white noise becomes continuous in different time scales, and no missing scales are present. As a result, mode mixing is effectively eliminated by the EEMD process [18]. The EEMD decomposition result of signal s
(
t
) is shown in Figure 3, where the added white noise amplitude is 0.25 times the original signal standard deviation, and the number of decompositions is 200 times.Figure 3
The decomposition result by EEMD.It should be noted that, during the EEMD process, each individual trial may produce noisy results, but the effect of the added noise can be suppressed by large number of ensemble mean computations. This would be too time consuming to implement. An improved algorithm, named complementary ensemble mode decomposition (CEEMD), is suggested to improve the computation efficiency. In this algorithm, the residue of the added white noises can be extracted from the mixtures of data and white noises via pairs of complementary ensemble IMFs with positive and negative added white noises. Although this new approach yields IMF with a similar RMS noise to EEMD, it eliminates residue noise in the IMFs and overcomes the problem of mode mixing with much more efficiency [14]. The procedure on implementing CEEMD is shown below:(a)
x
1 and x
2 are constructed by adding a pair of opposite phase Gaussian white noises x
n with the same amplitude. Then x
1
=
x
+
x
n and x
2
=
x
-
x
n;
(b)
x
1 and x
2 are decomposed by EMD only a few times, and IMFx1 and IMFx2 are ensemble means of the corresponding IMF generated from each trial;
(c)
the average of corresponding component inIM
F
x
1 and IM
F
x
2 is calculated as the CEEMD decomposition results; that is,
(9)
IMF
=
(
IM
F
x
1
+
IM
F
x
2
)
2
.The flow chart of CEEMD is shown in Figure 4, where n is the decomposition trials.Figure 4
Decomposition flow chart of CEEMD.Figure5 is the decomposition result by CEEMD for the signal s
(
t
). As compared to the result shown in Figure 3, the decomposition accuracies of EEMD and CEEMD are consistent, while EEMD takes 1.62 s and CEEMD only needs 0.13 s.Figure 5
The decomposition result by CEEMD.
### 2.3. Fault Diagnosis Based on CEEMD and Time Series Model
Based on CEEMD and time series model, a hybrid fault diagnosis approach can be designed. The hybrid approach combines the advantages of CEEMD method in the nonstationary signal decomposition with the ability of time series modeling in feature extraction. The flow chart of the developed approach is shown in Figure6.Figure 6
The flow chart of the proposed method.The main steps are as follows.Step 1. The rolling bearing vibration signal is sampled and then decomposed by CEEMD with the process shown in Figure 4.Step 2. The product of energy density and average period of the IMFs which is a constant value according to [19] is calculated using (10) and parameter R
P
j is calculated using (11). Then the signal is filtered by comparing the parameter R
P
j and the given threshold value; that is to say, when R
P
j
⩾
1, the previous j
-
1 IMFs with the trend term need to be removed as noise and to rebuild the residual IMFs as filtered signal [19, 20]:
(10)
P
j
=
E
j
×
T
j
,
(11)
R
P
j
=
|
P
j
-
(
(
1
/
(
j
-
1
)
)
∑
i
=
1
j
-
1
P
j
)
(
1
/
(
j
-
1
)
)
∑
i
=
1
j
-
1
P
j
|
,
(
j
≥
2
)
,
where E
j
=
(
1
/
N
)
∑
i
=
1
N
[
A
j
(
i
)
]
2 is the energy density of the j
th IMF, T
j
=
2
N
/
O
j is the average period of the j
th IMF, N is the length of each IMF, A
j is the amplitude of the j
th IMF, and O
j is the total number of extreme points of j
th IMF.Step 3. Equation (12) is used to calculate the correlation coefficient between the filtered signal and each IMF, and the IMF which is closely correlated to the filtered signal is selected for AR modeling [21]:
(12)
ρ
x
y
=
∑
k
=
1
N
x
(
k
)
y
(
k
)
[
∑
k
=
1
N
x
(
k
)
2
∑
k
=
1
N
y
(
k
)
2
]
1
/
2
.Step 4. The least square method is used to estimate the parameters vectors of the AR model established in Step3, and the parameters vectors are considered as the model feature vector.Step 5. After scalar quantization by index calculation formula of Lloyds algorithm in (13) [22], the feature vector is used to train the HMM of each bearing working condition:
(13)
indx
(
x
)
=
{
1
x
≤
partition
(
i
)
i
+
1
partition
(
i
)
<
x
≤
partition
(
i
+
1
)
N
partition
(
N
-
1
)
<
x
,
where N is the length of the codebook vector, partition (i) is the partition vector with the length of N
-
1, and x is the feature vector for scalar quantization.Step 6. A test vibration signal can then be acquired for diagnosis, and the model feature vector is first extracted. After scalar quantization, the feature vector is put into the well-trained HMMs, and the corresponding HMM which has the maximum probability is regarded as the classification result [23].
## 2.1. Time Series Modeling
Autoregressive moving average (ARMA) model is the representative time series model, which can be expressed in linear difference equation form as(1)
x
t
+
φ
1
x
t
-
1
+
⋯
+
φ
n
x
t
-
n
=
a
t
+
θ
1
a
t
-
1
+
⋯
+
θ
m
a
t
-
m
,
where n and m are the parameters of the ARMA (n
,
m) model, x
t is zero mean stationary random sequence, a
t is white noise sequence, and φ
i and θ
j are model parameters to be estimated. The parameters of φ
i and θ
j are estimated by the time sequence of x
t (t
=
1,2
,
3
…), which is called the time series modeling. If φ
i
=
0, the ARMA (n
,
m) model will degrade to m order MA(m) model, and if θ
i
=
0, the ARMA (n
,
m) model will degrade as n order AR (n) model in (1). The AR model is stable and its structure is simpler than ARMA model. Therefore, the AR model will be established for characterizing the rolling bearing vibration signal, if the precision of the model is enough for expressing the system, which is expressed as
(2)
x
t
=
φ
1
x
t
-
1
+
⋯
+
φ
n
x
t
-
n
+
a
t
,
where t
=
1,2
,
…
,
N, N is the length of the time series x
t, n is the order number, and a
t
~
NID
(
0
,
σ
a
2
). The σ
a
2 is expressed as
(3)
σ
α
2
=
1
N
-
n
n
∑
t
=
n
+
1
n
(
x
t
-
∑
ι
=
1
n
φ
ι
x
t
-
1
)
2
.It is critical to determine the order number of the AR model, because the accuracy of the order not only affects the accuracy of identification of the system, but also influences the stability of the system. In order to estimate the order of the AR model correctly, FPE criterion, BIC criterion, and AIC criterion are usually used [15], and they are expressed asFPE criterion(4)
FPE
(
n
)
=
N
+
n
N
-
n
σ
a
2
,
BIC criterion(5)
AIC
(
n
)
=
N
ln
σ
a
2
+
2
n
,
AIC criterion(6)
BIC
(
n
)
=
N
ln
σ
a
2
+
n
ln
N
.After the model order is determined, the nonlinear least squares method can be used to estimate model parameters, and then the AR model with specific parameters is established.
## 2.2. Complementary Ensemble Empirical Mode Decomposition
Complementary ensemble empirical mode decomposition (CEEMD) is an improved algorithm based on empirical mode decomposition (EMD). Through EMD process, any complex time series can be decomposed into finite numbers of intrinsic mode functions (IMFs), and each IMF reflects the dynamic characteristic of the original signal. The IMF component must satisfy two conditions: (a) the number of poles and zeros is either equal to each other or differs at most by one; (b) the upper and lower envelopes must be locally symmetric about the timeline. The basic principle of EMD method is to decompose the original signalx
(
t
) into the form as shown in (7) by continuously eliminating the mean of the upper and lower envelope connected with the minimum and maximum of the signal [16]. Consider
(7)
x
t
=
∑
i
=
1
n
im
f
i
(
t
)
+
r
n
(
t
)
,
where x
t is the vibration signal, im
f
i
(
t
) is the IMF component including different frequency bands ranging from high to low, and r
n
(
t
) is the residue of the decomposition process, which is the mean trend of x
t.The EMD method is a kind of adaptive local analysis method, with each IMF highlighting the local features of the data. However, EMD decomposition results often suffer from mode mixing, which is defined as either a single IMF consisting of widely disparate scales or a signal residing in different IMF components [17]. To make it clear, a simulated signal s
(
t
) consists of a Gaussian-type impulse interference s
1
(
t
) and a cosine component with 500 Hz frequency s
2
(
t
), and a trend term s
3
(
t
) is used as an example. The equation of the simulated signal is expressed as
(8)
s
(
t
)
=
sin
(
2
π
α
t
)
e
-
(
(
t
-
t
0
)
2
/
σ
)
+
cos
(
2
π
β
t
)
+
50
t
,
where α
=
3000, β
=
500, and σ
=
10
6.The waveform of the simulated signal is shown in Figure1, and the corresponding EMD results for the signal s
(
t
) are shown in Figure 2, where the mode mixing happens.Figure 1
Signal waveforms.Figure 2
The decomposition result by EMD.To overcome the problem of mode mixing, the ensemble empirical mode decomposition (EEMD) was proposed [18], where Gaussian white noises with finite amplitude are added to the original signal during the entire decomposition process. Due to the uniform distribution statistical characteristics of the white noise, the signal with white noise becomes continuous in different time scales, and no missing scales are present. As a result, mode mixing is effectively eliminated by the EEMD process [18]. The EEMD decomposition result of signal s
(
t
) is shown in Figure 3, where the added white noise amplitude is 0.25 times the original signal standard deviation, and the number of decompositions is 200 times.Figure 3
The decomposition result by EEMD.It should be noted that, during the EEMD process, each individual trial may produce noisy results, but the effect of the added noise can be suppressed by large number of ensemble mean computations. This would be too time consuming to implement. An improved algorithm, named complementary ensemble mode decomposition (CEEMD), is suggested to improve the computation efficiency. In this algorithm, the residue of the added white noises can be extracted from the mixtures of data and white noises via pairs of complementary ensemble IMFs with positive and negative added white noises. Although this new approach yields IMF with a similar RMS noise to EEMD, it eliminates residue noise in the IMFs and overcomes the problem of mode mixing with much more efficiency [14]. The procedure on implementing CEEMD is shown below:(a)
x
1 and x
2 are constructed by adding a pair of opposite phase Gaussian white noises x
n with the same amplitude. Then x
1
=
x
+
x
n and x
2
=
x
-
x
n;
(b)
x
1 and x
2 are decomposed by EMD only a few times, and IMFx1 and IMFx2 are ensemble means of the corresponding IMF generated from each trial;
(c)
the average of corresponding component inIM
F
x
1 and IM
F
x
2 is calculated as the CEEMD decomposition results; that is,
(9)
IMF
=
(
IM
F
x
1
+
IM
F
x
2
)
2
.The flow chart of CEEMD is shown in Figure 4, where n is the decomposition trials.Figure 4
Decomposition flow chart of CEEMD.Figure5 is the decomposition result by CEEMD for the signal s
(
t
). As compared to the result shown in Figure 3, the decomposition accuracies of EEMD and CEEMD are consistent, while EEMD takes 1.62 s and CEEMD only needs 0.13 s.Figure 5
The decomposition result by CEEMD.
## 2.3. Fault Diagnosis Based on CEEMD and Time Series Model
Based on CEEMD and time series model, a hybrid fault diagnosis approach can be designed. The hybrid approach combines the advantages of CEEMD method in the nonstationary signal decomposition with the ability of time series modeling in feature extraction. The flow chart of the developed approach is shown in Figure6.Figure 6
The flow chart of the proposed method.The main steps are as follows.Step 1. The rolling bearing vibration signal is sampled and then decomposed by CEEMD with the process shown in Figure 4.Step 2. The product of energy density and average period of the IMFs which is a constant value according to [19] is calculated using (10) and parameter R
P
j is calculated using (11). Then the signal is filtered by comparing the parameter R
P
j and the given threshold value; that is to say, when R
P
j
⩾
1, the previous j
-
1 IMFs with the trend term need to be removed as noise and to rebuild the residual IMFs as filtered signal [19, 20]:
(10)
P
j
=
E
j
×
T
j
,
(11)
R
P
j
=
|
P
j
-
(
(
1
/
(
j
-
1
)
)
∑
i
=
1
j
-
1
P
j
)
(
1
/
(
j
-
1
)
)
∑
i
=
1
j
-
1
P
j
|
,
(
j
≥
2
)
,
where E
j
=
(
1
/
N
)
∑
i
=
1
N
[
A
j
(
i
)
]
2 is the energy density of the j
th IMF, T
j
=
2
N
/
O
j is the average period of the j
th IMF, N is the length of each IMF, A
j is the amplitude of the j
th IMF, and O
j is the total number of extreme points of j
th IMF.Step 3. Equation (12) is used to calculate the correlation coefficient between the filtered signal and each IMF, and the IMF which is closely correlated to the filtered signal is selected for AR modeling [21]:
(12)
ρ
x
y
=
∑
k
=
1
N
x
(
k
)
y
(
k
)
[
∑
k
=
1
N
x
(
k
)
2
∑
k
=
1
N
y
(
k
)
2
]
1
/
2
.Step 4. The least square method is used to estimate the parameters vectors of the AR model established in Step3, and the parameters vectors are considered as the model feature vector.Step 5. After scalar quantization by index calculation formula of Lloyds algorithm in (13) [22], the feature vector is used to train the HMM of each bearing working condition:
(13)
indx
(
x
)
=
{
1
x
≤
partition
(
i
)
i
+
1
partition
(
i
)
<
x
≤
partition
(
i
+
1
)
N
partition
(
N
-
1
)
<
x
,
where N is the length of the codebook vector, partition (i) is the partition vector with the length of N
-
1, and x is the feature vector for scalar quantization.Step 6. A test vibration signal can then be acquired for diagnosis, and the model feature vector is first extracted. After scalar quantization, the feature vector is put into the well-trained HMMs, and the corresponding HMM which has the maximum probability is regarded as the classification result [23].
## 3. Evaluation of the Method Based on CEEMD and AR Model
### 3.1. Evaluation Using Simulated Signals
To demonstrate the validity of the method proposed in this study, three signalsx
1
(
t
), x
2
(
t
), and x
3
(
t
) are simulated as shown in Figure 7. The signal x
1
(
t
) consists of a Gaussian-type impulse interference, a cosine component with 10 Hz frequency, a trend term, and white noise. The signal x
2
(
t
) consists of a Gaussian-type impulse interference, a square wave with 65% duty ratio, a trend term, and white noise. The signal x
3
(
t
) consists of a Gaussian-type impulse interference, a sawtooth wave with 15 Hz frequency, a trend term, and white noise.Figure 7
Signal waveforms ofx
1
(
t
), x
2
(
t
), and x
3
(
t
).Figure8 shows the results of the CEEMD of signals x
1
(
t
), x
2
(
t
), and x
3
(
t
). Correlation coefficients between filtered signal and each IMF are illustrated in Table 1.Table 1
Correlation coefficients between filtered signal and each IMF.
Signal
Correlation coefficient
IMF1
IMF2
IMF3
IMF4
IMF5
IMF6
IMF7
IMF8
x
1
(
t
)
−0.0031
−0.0009
0.0371
0.4096
0.9668
0.2428
0.1273
−0.0448
x
2
(
t
)
0.0051
0.0004
0.0435
0.2111
0.4695
0.8887
0.7214
−0.0201
x
3
(
t
)
−0.0234
−0.0154
0.0286
0.5900
0.8953
0.1649
0.1887
−0.0210The decomposition results by CEEMD.
(a)
x
1
(
t
)
(b)
x
2
(
t
)
(c)
x
3
(
t
)It can be seen in Table1 that the IMF which is closely correlated to the filtered signal is IMF5 for both signal x
1
(
t
) and signal x
3
(
t
) and IMF6 for signal x
2
(
t
). They are used to construct the AR models, and the corresponding feature vectors are estimated as shown in Table 2. After scalar quantization, the feature vectors are used to train the HMM for signal classification.Table 2
Model parameter estimation results.
Signal
Model parameter
φ
1
φ
2
φ
3
φ
4
φ
5
φ
6
x
1
(
t
)
4.7183
−9.1103
9.2034
−5.1408
1.5207
−0.1914
x
2
(
t
)
4.8894
−9.8269
10.3945
−6.1194
1.9153
−0.2531
x
3
(
t
)
4.8718
−9.9955
11.1616
−7.2529
2.6430
−0.4282A total of 90 feature vectors were collected from three groups of signals using the proposed approach. One-third of the feature vectors in each condition were used for training the classifier and others were used for testing. The results of the signal classification are listed in Table3.Table 3
Signal classification results.
Signal type
Test sample
Classification results
Classification rate [%]
Overall classification rate [%]
x
1
(
t
)
x
2
(
t
)
x
3
(
t
)
x
1
(
t
)
20
19
1
0
95
96.7
x
2
(
t
)
20
0
19
1
95
x
3
(
t
)
20
0
0
20
100Results in Table3 indicate that the presented method based on CEEMD and time series modeling can effectively identify different signals, and the overall classification rate is 96.7%. For the purpose of comparison, the signal classification rates use the method based on time series modeling only, and the method based on EMD and time series modeling is also calculated. 88.3% and 93.3% classification rates are obtained, respectively. It is obvious that efficiency of the signal classification method proposed in this paper is improved to a certain extent.
### 3.2. Evaluation Using Experimental Data
In order to illustrate the practicability and effectiveness of the proposed method, a bearing fault data set from the electrical engineering laboratory of Case Western Reserve University is analyzed [24]. The data were acquired from the test stand shown in Figure 9, which consists of a 2 hp motor, a torque transducer, a dynamometer, and control electronics. The test bearings, deep groove ball bearings of type 6205-2RS JEM SKF, support the motor shaft. Vibration data were collected at 12,000 samples per second using accelerometers attached to the housing with magnetic bases. The motor load level was controlled by the fan on the right side of Figure 9.
Figure 9
Bearing test stand.
Figure 10 illustrates representative waveforms of the sample vibration signals measured from the test bearings under four initial conditions: (a) signal from a healthy bearing, (b) signal from a bearing with an inner ring defect, (c) signal from a bearing with a rolling element defect, and (d) signal from a bearing with an outer ring defect. These signals were measured under 0 hp motor load at a motor speed of 1797 rpm. The decomposed IMFs of these signals are shown in Figure 11.
Figure 10
Vibration signal waveforms of different conditions: (a)-(d).
Figure 11
The decomposition results by CEEMD under different conditions: (a) no defect, (b) inner ring defect, (c) rolling element defect, (d) outer ring defect.
Correlation coefficients calculated between the filtered signal and each IMF are shown in Table 4.
Table 4
Correlation coefficients between filtered signals and each IMF.

| Signal | IMF1 | IMF2 | IMF3 | IMF4 | IMF5 | IMF6 | IMF7 | IMF8 |
|---|---|---|---|---|---|---|---|---|
| (a) | 0.4135 | 0.7538 | 0.4381 | 0.4880 | 0.4356 | 0.1792 | 0.0971 | −0.0056 |
| (b) | 0.8794 | 0.4275 | 0.2583 | 0.1337 | 0.0421 | 0.0285 | −0.0009 | −0.0074 |
| (c) | 0.9509 | 0.2180 | 0.2325 | 0.1337 | 0.0821 | 0.0350 | −0.0017 | 0.0009 |
| (d) | 0.9878 | 0.1267 | 0.0636 | 0.0509 | 0.0136 | 0.0060 | −0.0008 | −0.0068 |

The IMF most closely correlated with the filtered signal is IMF2 for signal (a) and IMF1 for signals (b), (c), and (d). These IMFs are used for AR model construction. The model order estimation curves of the four conditions based on the FPE criterion are shown in Figure 12. When the model order reaches 6, each model's residual tends to be stable; therefore the model order is selected as 6, and the results of parameter estimation are listed in Table 5.
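As a rough sketch of the FPE-based order selection behind Figure 12, the snippet below evaluates the classic final prediction error FPE(p) = σ̂²ₚ(N + p)/(N − p) for increasing AR orders; the order at which the curve flattens is chosen. The exact FPE variant used by the authors is not stated in this excerpt, so this form is an assumption.

```python
import numpy as np

def fpe_curve(x, max_order=10):
    """Final prediction error FPE(p) = var_p * (N + p)/(N - p) for AR orders
    p = 1..max_order; the flattening point of this curve (order 6 in
    Figure 12) is taken as the model order."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    N = len(x)
    acov = np.array([x[: N - k] @ x[k:] for k in range(max_order + 1)]) / N
    fpe = {}
    for p in range(1, max_order + 1):
        # Yule-Walker: Toeplitz(acov[0..p-1]) @ phi = acov[1..p]
        toep = acov[np.abs(np.arange(p)[:, None] - np.arange(p)[None, :])]
        phi = np.linalg.solve(toep, acov[1 : p + 1])
        # one-step-ahead residual variance of the AR(p) predictor
        pred = sum(phi[k] * x[p - 1 - k : N - 1 - k] for k in range(p))
        resid = x[p:] - pred
        fpe[p] = np.mean(resid ** 2) * (N + p) / (N - p)
    return fpe
```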
Table 5
Model parameter estimation results.

| Signal | φ1 | φ2 | φ3 | φ4 | φ5 | φ6 |
|---|---|---|---|---|---|---|
| (a) | 3.1280 | −4.7797 | 4.2245 | −2.1489 | 0.4241 | 0.0356 |
| (b) | 0.2084 | −1.3585 | 0.5142 | −0.6356 | 0.3471 | −0.0422 |
| (c) | 0.1335 | −1.6472 | 0.3941 | −0.8473 | 0.2142 | −0.1011 |
| (d) | −0.1172 | −1.2159 | 0.1178 | −0.1283 | 0.1467 | 0.2533 |

Figure 12
The model order estimation curves: (a) no defect, (b) inner ring defect, (c) rolling element defect, (d) outer ring defect.
The parameters in Table 5 were quantized by Lloyd's algorithm in (12) to serve as feature vectors for training the HMMs of the different conditions. The results of quantization are shown in Figure 13.
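A compact sketch of the scalar Lloyd algorithm used for this quantization step follows; the codebook size, initialization, and stopping tolerance are illustrative assumptions, not values from the paper.

```python
import numpy as np

def lloyd_quantizer(samples, n_levels=8, tol=1e-6, max_iter=200):
    """Scalar Lloyd algorithm: alternately set partition boundaries to the
    midpoints of the codebook and codewords to the centroids (means) of
    their partition cells, until the codebook stops moving."""
    samples = np.sort(np.asarray(samples, dtype=float))
    # illustrative initialization: spread codewords over the sample range
    codebook = np.linspace(samples[0], samples[-1], n_levels)
    for _ in range(max_iter):
        partition = (codebook[:-1] + codebook[1:]) / 2.0   # cell boundaries
        idx = np.digitize(samples, partition)              # assign samples
        new_codebook = np.array([
            samples[idx == j].mean() if np.any(idx == j) else codebook[j]
            for j in range(n_levels)
        ])
        if np.max(np.abs(new_codebook - codebook)) < tol:
            break
        codebook = new_codebook
    return (codebook[:-1] + codebook[1:]) / 2.0, codebook
```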
Figure 13
The results of quantization: (a) no defect, (b) inner ring defect, (c) rolling element defect, (d) outer ring defect.
A total of 160 feature vectors were collected from the four conditions; half were used for training the classifier and the other half for testing. The classification results are listed in Table 6. Out of 80 test feature vectors, only two cases were misclassified, and the overall classification rate is 97.5%.
Table 6
Fault diagnosis using CEEMD and time series model.

| Fault type | Test samples | No defect | Inner ring defect | Rolling element defect | Outer ring defect | Classification rate [%] | Overall classification rate [%] |
|---|---|---|---|---|---|---|---|
| No defect | 20 | 20 | 0 | 0 | 0 | 100 | 97.5 |
| Inner ring defect | 20 | 0 | 19 | 1 | 0 | 95 | |
| Rolling element defect | 20 | 0 | 1 | 19 | 0 | 95 | |
| Outer ring defect | 20 | 0 | 0 | 0 | 20 | 100 | |

For comparison, Tables 7 and 8 list the classification results based on time series modeling of the measured signals directly and based on the EMD and time series model method. The comparison shows that the proposed method is efficient for rolling bearing fault diagnosis, and its overall classification rate is higher than those of the other two methods.
Table 7
Fault diagnosis using time series model only.

| Fault type | Test samples | No defect | Inner ring defect | Rolling element defect | Outer ring defect | Classification rate [%] | Overall classification rate [%] |
|---|---|---|---|---|---|---|---|
| No defect | 20 | 19 | 1 | 0 | 0 | 95 | 90.0 |
| Inner ring defect | 20 | 1 | 17 | 2 | 0 | 85 | |
| Rolling element defect | 20 | 0 | 2 | 17 | 1 | 85 | |
| Outer ring defect | 20 | 0 | 0 | 1 | 19 | 95 | |

Table 8
Fault diagnosis using EMD and time series model.

| Fault type | Test samples | No defect | Inner ring defect | Rolling element defect | Outer ring defect | Classification rate [%] | Overall classification rate [%] |
|---|---|---|---|---|---|---|---|
| No defect | 20 | 19 | 1 | 0 | 0 | 95 | 93.75 |
| Inner ring defect | 20 | 0 | 18 | 2 | 0 | 90 | |
| Rolling element defect | 20 | 0 | 1 | 19 | 0 | 95 | |
| Outer ring defect | 20 | 0 | 0 | 1 | 19 | 95 | |
## 4. Conclusions
Aiming at diagnosing rolling bearing faults, a hybrid approach based on CEEMD and time series modeling is proposed in this paper. The CEEMD method can decompose a nonstationary signal into a series of IMFs at low computational cost. The AR model is an effective approach to extract the fault features of vibration signals, and the fault pattern can be identified directly from the extracted features without establishing a mathematical model or studying the fault mechanism of the system. In this paper, the CEEMD method is used as a pretreatment, which increases the accuracy of the AR model for the measured signal, and the AR model of the IMF most closely correlated with the filtered signal is established to extract the fault feature parameters. Compared with the EMD-AR approach and the direct modeling approach, in which raw signals are used as input for AR modeling, the new approach achieves a higher classification rate (96.7% for simulated signals and 97.5% for experimental data). We anticipate that the proposed method can also be used for incipient fault diagnosis in rolling bearings, although further experiments are needed to verify its accuracy. Since the approach presented in this study is generic in nature, it can be readily adapted to a broad range of applications in machine fault diagnosis.
---
*Source: 101867-2014-07-07.xml* | 2014 |
# QSAR Modeling and Molecular Docking Analysis of Some Active Compounds against Mycobacterium tuberculosis Receptor (Mtb CYP121)
**Authors:** Shola Elijah Adeniji; Sani Uba; Adamu Uzairu
**Journal:** Journal of Pathogens
(2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1018694
---
## Abstract
A quantitative structure-activity relationship (QSAR) study was performed to develop a model that relates the structures of 50 compounds to their activities against M. tuberculosis. The compounds were optimized by employing density functional theory (DFT) with B3LYP/6-31G⁎. The Genetic Function Algorithm (GFA) was used to select the descriptors and to generate the correlation model that relates the structural features of the compounds to their biological activities. The optimum model has a squared correlation coefficient (R2) of 0.9202, an adjusted squared correlation coefficient (Radj) of 0.91012, and a leave-one-out (LOO) cross-validation coefficient (Qcv2) value of 0.8954. The external validation test used for confirming the predictive power of the built model has an R2pred value of 0.8842. These parameters confirm the stability and robustness of the model. Docking analysis showed that the best compound, with a high docking affinity of −14.6 kcal/mol, formed hydrophobic interactions and hydrogen bonds with amino acid residues of the M. tuberculosis cytochrome (Mtb CYP121). QSAR and molecular docking studies provide a valuable approach for pharmaceutical and medicinal chemists to design and synthesize new anti-Mycobacterium tuberculosis compounds.
---
## Body
## 1. Introduction
Mycobacterium tuberculosis is the bacterial species responsible for causing tuberculosis (TB). It mainly affects the lungs but can also attack other parts of the body such as the spine, kidney, and brain unless urgent treatment is provided. Tuberculosis remains one of the most prevalent infectious bacterial diseases, resulting in the death of 1.4 million people worldwide [1]. Drugs such as isoniazid, rifampicin, ciprofloxacin, and ethambutol are available as a cure for tuberculosis. However, the emergence of multidrug-resistant (MDR) and extensively drug-resistant (XDR) tuberculosis poses a major challenge to successful treatment [2]. This has led to the development of new therapeutics against diverse strains of M. tuberculosis [3]. Newly synthesized 1,2,4-Triazole derivative compounds demonstrate tuberculosis inhibition activity [4]. However, the synthesis of novel molecules typically follows a trial-and-error approach, which is time-consuming and costly.

Quantitative structure-activity relationship (QSAR) modeling plays a crucial part in novel drug design via a ligand-based approach [5]. The key advantage of the QSAR method is the possibility of predicting the properties of new chemical compounds without the need to synthesize and test them. This technique is broadly utilized for the prediction of physicochemical properties in the chemical, industrial, pharmaceutical, biological, and environmental spheres [6]. Moreover, QSAR strategies save resources and accelerate the process of developing new molecules for use as drugs, materials, and additives or for other purposes [7]. Meanwhile, molecular docking is a computational method used to determine the binding strength between active site residues and specific molecule(s) [8]. It is an expedient tool used in the drug discovery field to investigate the binding compatibility of molecules (ligands) to a target (receptor) [9].

The aim of this research was to develop a QSAR model to predict the activity of 1,2,4-Triazole derivatives as potent anti-Mycobacterium tuberculosis compounds and to elucidate the interaction between the inhibitor molecules and the Mycobacterium tuberculosis target site.
## 2. Materials and Methods
### 2.1. Data Collection
Fifty molecules of 1,2,4-Triazole derivatives as potent antitubercular agents that were used in this study were obtained from the literature [4].
### 2.2. Biological Activities (BA)
The biological activities of 1,2,4-Triazole derivatives against Mycobacterium tuberculosis in the aerobic active stage were initially expressed in percentage (%) and then converted to a logarithmic unit using (1) in order to increase the linearity and approach a normal distribution of the activity values. The structures and the biological activities of these compounds are presented in Table 1.

$$\text{pBA}=\log\left(\frac{\text{molecular weight (g/mol)}}{\text{Dose (g/mol)}}\times\frac{\text{percentage}\,\%}{100-\text{percentage}\,\%}\right)\tag{1}$$

Table 1
Molecular structure of 1,2,4-Triazole derivatives and their activities.
| S/N | Experimental activity (pBA) | S/N | Experimental activity (pBA) |
|---|---|---|---|
| 1 | 4.925 | 26ᵃ | 7.0123 |
| 2ᵃ | 5.0345 | 27 | 6.5267 |
| 3 | 5.0064 | 28 | 5.7405 |
| 4 | 5.7386 | 29ᵃ | 5.6533 |
| 5ᵃ | 5.5994 | 30ᵃ | 6.1923 |
| 6ᵃ | 5.4543 | 31 | 7.3233 |
| 7 | 4.7441 | 32 | 6.0097 |
| 8 | 6.1674 | 33 | 6.0928 |
| 9ᵃ | 6.3456 | 34 | 7.3279 |
| 10 | 7.4134 | 35 | 6.8568 |
| 11 | 5.7441 | 36ᵃ | 6.2234 |
| 12 | 5.9258 | 37 | 7.3079 |
| 13ᵃ | 5.6754 | 38 | 7.314 |
| 14 | 6.3793 | 39ᵃ | 8.5854 |
| 15 | 6.1667 | 40 | 8.0615 |
| 16ᵃ | 5.8765 | 41 | 8.0615 |
| 17 | 6.4171 | 42ᵃ | 6.8494 |
| 18 | 5.9413 | 43 | 7.9432 |
| 19 | 7.6397 | 44ᵃ | 7.4535 |
| 20 | 8.0899 | 45 | 7.9759 |
| 21 | 6.3981 | 46 | 7.9759 |
| 22 | 5.8131 | 47 | 7.9294 |
| 23 | 6.2878 | 48ᵃ | 6.1213 |
| 24 | 5.7268 | 49 | 5.4406 |
| 25 | 7.366 | 50ᵃ | 4.9074 |

Superscript a denotes the test set.
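A small sketch of the activity conversion in (1), as reconstructed above; the function and argument names are illustrative, and the example dose and molecular weight values in the commented call are hypothetical, not taken from the paper.

```python
import math

def pba(percent_inhibition, dose_g_per_mol, mol_weight_g_per_mol):
    """Convert percentage inhibition to pBA per the reconstructed equation (1):
    pBA = log10[(MW / Dose) * (% / (100 - %))]."""
    ratio = mol_weight_g_per_mol / dose_g_per_mol
    odds = percent_inhibition / (100.0 - percent_inhibition)
    return math.log10(ratio * odds)

# illustrative call only; the actual doses are not given in this excerpt
# print(pba(75.0, dose_g_per_mol=0.01, mol_weight_g_per_mol=350.0))
```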
### 2.3. Geometry Optimization
The chemical structures of the molecules were drawn with ChemDraw Ultra Version 12.0. Each molecule was first preoptimized with the molecular mechanics (MMFF) and further reoptimized with density functional theory (DFT) utilizing the B3LYP and 6-31G∗ basis set [10, 11] with the aid of Spartan 14 Version 1.1.0 software.
### 2.4. Molecular Descriptor Calculation
Molecular descriptors are mathematical values that describe the properties of a molecule. Descriptors calculation for all the 50 molecules of 1,2,4-Triazole derivatives was done using PaDEL-Descriptor Version 2.20 software. A total of 1876 molecular descriptors were calculated.
### 2.5. Normalization and Data Pretreatment
The descriptors' values were normalized using (2) in order to give each variable the same opportunity at the onset to influence the model [12].

$$X=\frac{X_i-X_{\min}}{X_{\max}-X_{\min}}\tag{2}$$

where $X_i$ is the value of each descriptor for a given molecule and $X_{\max}$ and $X_{\min}$ are the maximum and minimum values of each descriptor column X. The normalized data were subjected to pretreatment using data pretreatment software in order to remove noise and redundant data.
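Equation (2) amounts to column-wise min-max scaling of the descriptor matrix; a minimal numpy sketch follows (the guard for constant columns is an added assumption, not part of the paper).

```python
import numpy as np

def min_max_normalize(descriptors):
    """Column-wise min-max scaling of a (molecules x descriptors) matrix,
    per equation (2); constant columns are left at zero to avoid 0/0."""
    descriptors = np.asarray(descriptors, dtype=float)
    col_min = descriptors.min(axis=0)
    col_range = descriptors.max(axis=0) - col_min
    col_range[col_range == 0] = 1.0  # guard for constant descriptors
    return (descriptors - col_min) / col_range
```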
### 2.6. Training and Test Set
The dataset was split into training set and test set by employing Kennard and Stone’s algorithm. The training set comprises 70% of the dataset which was used to build the model, while the remaining 30% of the dataset (test set) was used to validate the built model.
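A minimal sketch of Kennard and Stone's algorithm as commonly implemented; Euclidean distance in descriptor space is assumed (the paper does not state the metric), and the function name is illustrative.

```python
import numpy as np

def kennard_stone_split(X, train_fraction=0.7):
    """Kennard-Stone selection: seed with the two most distant samples, then
    repeatedly add the candidate whose distance to its nearest already-selected
    sample is largest; returns (train_indices, test_indices)."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    n_train = int(round(train_fraction * n))
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    selected = [int(i), int(j)]
    remaining = [k for k in range(n) if k not in selected]
    while len(selected) < n_train:
        # for each candidate, distance to its nearest selected sample
        min_d = dist[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining.pop(int(np.argmax(min_d))))
    return np.array(selected), np.array(remaining)
```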
### 2.7. Relative Importance of Each Descriptor in the Model
The absolute value of the mean effect of each descriptor was used to evaluate the relative importance and contribution of the descriptor to the model. The mean effect is defined as

$$ME_j=\frac{\beta_j\sum_{i}^{n}D_{ij}}{\sum_{j}^{m}\left(\beta_j\sum_{i}^{n}D_{ij}\right)}\tag{3}$$

where $ME_j$ is the mean effect of descriptor j in the model, $\beta_j$ is the coefficient of descriptor j in that model, $D_{ij}$ is the value of descriptor j in the data matrix for molecule i of the training set, m is the number of descriptors that appear in the model, and n is the number of molecules in the training set [13].
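The mean-effect computation in (3) reduces to a few array operations over the training descriptor matrix; a hedged numpy sketch with illustrative names:

```python
import numpy as np

def mean_effects(coefficients, D):
    """Mean effect of each descriptor (equation (3)): the model coefficient
    times the column sum of that descriptor over the training set, normalized
    by the total of these products over all descriptors in the model."""
    coefficients = np.asarray(coefficients, dtype=float)
    col_sums = np.asarray(D, dtype=float).sum(axis=0)   # sum_i D_ij
    contributions = coefficients * col_sums
    return contributions / contributions.sum()
```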
### 2.8. Degree of Contribution of Selected Descriptors
The contribution of each descriptor in the model was measured by calculating its standardized regression coefficient ($b_j^s$) using (4):

$$b_j^s=\frac{S_j b_j}{S_y}\tag{4}$$

where $b_j$ is the regression coefficient of descriptor j, and $S_j$ and $S_y$ are the standard deviations of each descriptor and of the activity, respectively. The statistic $b_j^s$ allows one to assign greater importance to those molecular descriptors that exhibit larger absolute standardized coefficients.
### 2.9. Internal Validation of Model
Internal validation of the model was carried out using Materials Studio Version 8 software by employing the Genetic Function Approximation (GFA) method. The models were estimated using the lack of fit (LOF), which is measured with a slight variation of the original Friedman formula so that the best fitness score can be obtained. LOF is expressed as follows [14]:

$$\text{LOF}=\frac{\text{SEE}}{\left(1-\dfrac{C+d\times p}{M}\right)^{2}}\tag{5}$$

where SEE is the standard error of estimation, C is the number of terms in the model, d is a user-defined smoothing parameter, p is the total number of descriptors contained in the model, and M is the number of data points in the training set. SEE is defined as [15]

$$\text{SEE}=\sqrt{\frac{\sum\left(Y_{\exp}-Y_{\text{pred}}\right)^{2}}{N-P-1}}\tag{6}$$

The square of the correlation coefficient (R²) describes the fraction of the total variation attributed to the model. The closer the value of R² is to 1.0, the better the model generated. R² is expressed as

$$R^{2}=1-\frac{\sum\left(Y_{\exp}-Y_{\text{pred}}\right)^{2}}{\sum\left(Y_{\exp}-\bar{Y}_{\text{training}}\right)^{2}}\tag{7}$$

where $Y_{\exp}$, $Y_{\text{pred}}$, and $\bar{Y}_{\text{training}}$ are the experimental activity, the predicted activity, and the mean experimental activity of the samples in the training set, respectively.

The value of R² increases with the number of descriptors, so R² alone is not a reliable measure of the goodness of fit of the model. It is therefore adjusted for the number of explanatory variables in the model. The adjusted R² is defined as

$$R^{2}_{\text{adj}}=\frac{(n-1)R^{2}-k}{n-k-1}\tag{8}$$

where k is the number of independent variables in the model and n is the number of compounds in the training set.

The strength of the QSAR equation to predict the bioactivity of a compound was determined using the leave-one-out cross-validation method. The cross-validation regression coefficient ($Q_{cv}^{2}$) is

$$Q_{cv}^{2}=1-\frac{\sum\left(Y_{\text{pred}}-Y_{\exp}\right)^{2}}{\sum\left(Y_{\exp}-\bar{Y}_{\text{training}}\right)^{2}}\tag{9}$$

where $Y_{\text{pred}}$, $Y_{\exp}$, and $\bar{Y}_{\text{training}}$ are the predicted, experimental, and mean experimental activity values of the training set.
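For concreteness, here is a small numpy sketch of R² (equation (7)) and the leave-one-out Q²cv (equation (9)) for an ordinary least squares model; it mirrors the definitions above rather than Materials Studio's internal routines, and the function names are illustrative.

```python
import numpy as np

def r_squared(y_exp, y_pred, y_train_mean):
    """Coefficient of determination in the form of equations (7)/(9)."""
    y_exp, y_pred = np.asarray(y_exp), np.asarray(y_pred)
    ss_res = np.sum((y_exp - y_pred) ** 2)
    ss_tot = np.sum((y_exp - y_train_mean) ** 2)
    return 1.0 - ss_res / ss_tot

def q2_loo(X, y):
    """Leave-one-out Q^2: refit OLS with each sample held out in turn and
    score the held-out predictions against the training mean."""
    Xb = np.column_stack([np.ones(len(y)), np.asarray(X, dtype=float)])
    y = np.asarray(y, dtype=float)
    preds = np.empty_like(y)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        beta, *_ = np.linalg.lstsq(Xb[mask], y[mask], rcond=None)
        preds[i] = Xb[i] @ beta
    return r_squared(y, preds, y.mean())
```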
### 2.10. External Validation of Model
External validation of the model was assessed by the $R_{\text{test}}^{2}$ value. The closer the value of $R_{\text{test}}^{2}$ is to 1.0, the better the model generated.

$$R_{\text{test}}^{2}=1-\frac{\sum\left(Y_{\text{pred}_{\text{test}}}-Y_{\exp_{\text{test}}}\right)^{2}}{\sum\left(Y_{\text{pred}_{\text{test}}}-\bar{Y}_{\text{training}}\right)^{2}}\tag{10}$$

where $Y_{\text{pred}_{\text{test}}}$ and $Y_{\exp_{\text{test}}}$ are the predicted and experimental activities of the test set, while $\bar{Y}_{\text{training}}$ represents the mean experimental activity of the training set.
### 2.11.Y-Randomization Test
The Y-randomization test is another useful external validation parameter confirming that the built QSAR model is strong and not inferred by chance. The Y-randomization test was performed on the training set data [16]. For the built model to pass the test, $cR_p^2$ should be greater than 0.5.

$$cR_p^{2}=R\times\sqrt{R^{2}-R_r^{2}}\tag{11}$$

where $cR_p^{2}$ is the coefficient of determination for Y-randomization, R is the coefficient of correlation for Y-randomization, and $R_r$ is the average R of the random models.
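A hedged sketch of the Y-randomization loop: the activities are shuffled, the model is refit, and consistently low scrambled-fit R² values indicate that the original model is not a chance correlation. The trial count and seed are illustrative choices.

```python
import numpy as np

def y_randomization(X, y, n_trials=10, seed=0):
    """Refit an OLS model on shuffled activities n_trials times and return
    the R^2 of each scrambled fit for comparison with the original model."""
    rng = np.random.default_rng(seed)
    Xb = np.column_stack([np.ones(len(y)), np.asarray(X, dtype=float)])
    scores = []
    for _ in range(n_trials):
        y_shuf = rng.permutation(y)
        beta, *_ = np.linalg.lstsq(Xb, y_shuf, rcond=None)
        pred = Xb @ beta
        ss_res = np.sum((y_shuf - pred) ** 2)
        ss_tot = np.sum((y_shuf - y_shuf.mean()) ** 2)
        scores.append(1.0 - ss_res / ss_tot)
    return scores  # low values support a non-chance original model
```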
### 2.12. Evaluation of the Applicability Domain of the Model
Assessment of the applicability domain of a QSAR model is an important approach to confirm that the model is able to make good predictions within the chemical space for which it was developed [16]. The leverage approach was employed to describe the applicability domain of the QSAR model [17]. The leverage of a given chemical compound is defined as follows:

$$h_i=X_i\left(X^{T}X\right)^{-1}X_i^{T}\tag{12}$$

where $h_i$ is the leverage of compound i, $X_i$ is the descriptor row-vector of the query compound i, and X is the $n\times k$ descriptor matrix of the training set compounds used to build the model. As a prediction tool, the warning leverage ($h^{*}$) is the limit of normal values for X outliers and is defined as

$$h^{*}=\frac{3(d+1)}{m}\tag{13}$$

where m is the number of training compounds and d is the number of descriptors in the model. The Williams plot, a plot of standardized residuals versus leverage values, was employed to elucidate the applicability domain of the model in terms of chemical space. A data point is considered an outlier if the standardized cross-validated residual produced by the model is greater than ±3.
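Equations (12) and (13) translate directly into a few lines of numpy; a sketch with illustrative function names:

```python
import numpy as np

def leverages(X_train, X_query=None):
    """Leverage values per equation (12): the diagonal of the hat matrix
    H = X (X^T X)^{-1} X^T, evaluated for training or query compounds."""
    X_train = np.asarray(X_train, dtype=float)
    XtX_inv = np.linalg.inv(X_train.T @ X_train)
    X_query = X_train if X_query is None else np.asarray(X_query, dtype=float)
    # row-wise x_i @ XtX_inv @ x_i^T
    return np.einsum("ij,jk,ik->i", X_query, XtX_inv, X_query)

def warning_leverage(n_train, n_descriptors):
    """Warning leverage h* = 3(d + 1)/m per equation (13)."""
    return 3.0 * (n_descriptors + 1) / n_train
```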
### 2.13. Quality Assurance of the Model
The fitting ability, stability, robustness, reliability, and predictive ability of the developed models were evaluated by internal and external validation parameters. The validation parameters were compared with the minimum recommended value for a generally acceptable QSAR model [17] shown in Table 2.Table 2
Minimum recommended value of validation parameters for a generally acceptable QSAR model.
| Validation parameter | Name | Value |
|---|---|---|
| R² | Coefficient of determination | ≥0.6 |
| P(95%) | Confidence interval at 95% confidence level | <0.05 |
| Q²cv | Cross-validation coefficient | >0.5 |
| R² − Q²cv | Difference between R² and Q²cv | ≤0.3 |
| N ext. test set | Minimum number of external test sets | ≥5 |
| R²test | Coefficient of determination for external test set | ≥0.6 |
| cR²p | Coefficient of determination for Y-randomization | >0.5 |
### 2.14. Docking Studies
Molecular docking study was carried out in order to elucidate which of the 1,2,4-Triazole derivatives has the best binding affinity against Mtb CYP121. The structure of Mtb CYP121 used in the study was obtained from protein data bank with PDB code 51BG. The prepared ligand and receptor were shown in Figure1. The optimized structures of 1,2,4-Triazole derivatives initially saved as SDF files were converted to PDB files using Spartan 14 V 1.1.4. The prepared ligands were docked with prepared structures of Mtb CYP121 using AutoDock Vina incorporated in PyRx software. The docked results were compiled, visualized, and analyzed using Discovery Studio Visualizer.Figure 1
(a) Prepared structure of Mtb CYP121. (b) 3D structures of the prepared ligands.
(a) (b)
## 3. Results and Discussion
QSAR analysis was performed to investigate the structure-activity relationship of 50 compounds as potent antitubercular agents. The quality of a model in a QSAR study is expressed by its fitting ability, stability, robustness, reliability, and predictive capacity. Experimental and predicted activities for the 1,2,4-Triazole derivatives are presented in Table 3. The low residual values between experimental and predicted activity indicate that the model has high predictability.
Table 3
Experimental, predicted, and residual values for 1,2,4-Triazole derivatives.
| S/N (molecule) | Experimental activity (pBA) | Predicted activity (pBA) | Residual |
|---|---|---|---|
| 1 | 4.925 | 4.8922 | 0.0328 |
| 2 | 5.0345 | 4.8716 | 0.1629 |
| 3 | 5.0064 | 5.0941 | −0.0877 |
| 4 | 5.7386 | 5.8308 | −0.0922 |
| 5 | 5.5994 | 5.5803 | 0.0191 |
| 6 | 5.4543 | 5.6969 | −0.2426 |
| 7 | 4.7441 | 4.8047 | −0.0606 |
| 8 | 6.1674 | 6.2999 | −0.1325 |
| 9 | 6.3456 | 6.5053 | −0.1597 |
| 10 | 7.4134 | 7.1548 | 0.2586 |
| 11 | 5.7441 | 6.0862 | −0.3421 |
| 12 | 5.9258 | 5.6383 | 0.2875 |
| 13 | 5.6754 | 5.4834 | 0.192 |
| 14 | 6.3793 | 6.3443 | 0.035 |
| 15 | 6.1667 | 6.5432 | −0.3765 |
| 16 | 5.8765 | 6.8765 | −1.000 |
| 17 | 6.4171 | 6.1354 | 0.2817 |
| 18 | 5.9413 | 6.02517 | −0.08387 |
| 19 | 7.6397 | 7.6055 | 0.0342 |
| 20 | 8.0899 | 7.8436 | 0.2463 |
| 21 | 6.3981 | 6.2094 | 0.1887 |
| 22 | 5.8131 | 6.4308 | −0.6177 |
| 23 | 6.2878 | 6.30457 | −0.01677 |
| 24 | 5.7268 | 5.9933 | −0.2665 |
| 25 | 7.366 | 7.5444 | −0.1784 |
| 26 | 7.0123 | 6.8471 | 0.1652 |
| 27 | 6.5267 | 5.9850 | 0.5417 |
| 28 | 5.7405 | 6.0962 | −0.3557 |
| 29 | 5.6533 | 6.4796 | −0.8263 |
| 30 | 6.1923 | 6.0426 | 0.1497 |
| 31 | 7.3233 | 6.5095 | 0.8138 |
| 32 | 6.0097 | 6.3151 | −0.3054 |
| 33 | 6.0928 | 5.9501 | 0.1427 |
| 34 | 7.3279 | 7.3990 | −0.0711 |
| 35 | 6.8568 | 6.8761 | −0.0193 |
| 36 | 6.2234 | 8.6487 | −2.4253 |
| 37 | 7.3079 | 7.2405 | 0.0674 |
| 38 | 7.314 | 7.5050 | −0.191 |
| 39 | 8.5854 | 5.6969 | 2.8885 |
| 40 | 8.0615 | 8.1009 | −0.0394 |
| 41 | 8.0615 | 7.8073 | 0.2542 |
| 42 | 6.8494 | 7.6746 | −0.8252 |
| 43 | 7.9432 | 7.9352 | 0.008 |
| 44 | 7.4535 | 7.6946 | −0.2411 |
| 45 | 7.9759 | 7.8569 | 0.119 |
| 46 | 7.9759 | 8.2103 | −0.2344 |
| 47 | 7.9294 | 7.9408 | −0.0114 |
| 48 | 6.1213 | 5.9165 | 0.2048 |
| 49 | 5.4406 | 5.2695 | 0.1711 |
| 50 | 4.9074 | 4.8495 | 0.0579 |

The genetic algorithm-multiple linear regression (GA-MLR) investigation led to the selection of six descriptors that were used to assemble a linear model for predicting activity against Mycobacterium tuberculosis. Five QSAR models were built using the Genetic Function Algorithm (GFA); on the basis of statistical significance, model 1 was selected and is reported below:

$$\begin{aligned}\text{pBA}=\;&-0.307001458\,\text{AATS7s}+1.528715398\,\text{nHBint3}\\&+3.976720227\,\text{minHCsatu}+0.016199645\,\text{TDB9e}\\&+0.089381479\,\text{RDF90i}-0.107407822\,\text{RDF110s}\\&+4.057082751\end{aligned}\tag{14}$$

with $N_{\text{train}}=35$, $R^2=0.92023900$, $R^2_{\text{adj}}=0.91017400$, and $Q_{cv}^2=0.89538600$; the external validation coefficient for the test set was $R^2_{\text{pred}}=0.8842$. All the validation parameters for this model are reported in Table 4 and are in agreement with the criteria presented in Table 2, which confirms the robustness of the model.
Table 4
Validation of the genetic function approximation from Materials Studio.
| S/number | Validation parameter | Equation 1 |
|---|---|---|
| 1 | Friedman LOF | 0.40847300 |
| 2 | R-squared | 0.92023900 |
| 3 | Adjusted R-squared | 0.91017400 |
| 4 | Cross-validated R-squared (Q²cv) | 0.89538600 |
| 5 | Significant regression | Yes |
| 6 | Significance-of-regression F-value | 58.41835200 |
| 7 | Critical SOR F-value (95%) | 2.45854700 |
| 8 | Replicate points | 0 |
| 9 | Computed experimental error | 0.00000000 |
| 10 | Lack-of-fit points | 28 |
| 11 | Min expt. error for nonsignificant LOF (95%) | 0.24688800 |

The QSAR model generated in this research was compared with models obtained in the literature [18, 19], shown below:

$$\begin{aligned}\text{pMIC}=\;&4.77374(\pm 0.03903)-0.18609(\pm 0.04924)\,\text{AATS4i}\\&+0.50382(\pm 0.05235)\,\text{SCH-3}-0.44712(\pm 0.06573)\,\text{AVP-1}\\&-0.22376(\pm 0.05623)\,\text{maxHCsats}-0.18403(\pm 0.04374)\,\text{PSA}\end{aligned}\tag{15}$$

with $N_{\text{train}}=16$, $R^2=0.9184$, $Q_{cv}^2=0.84987$, and $R^2_{\text{pred}}=0.79343$ [18], and

$$\text{pIC}_{50}=-2.040810634\,\text{nCl}-19.024890361\,\text{MATS2m}+1.855704759\,\text{RDF140s}+6.739013671\tag{16}$$

with $N_{\text{train}}=27$, $R^2=0.9480$, $R^2_{\text{adj}}=0.9350$, $Q_{cv}^2=0.87994$, and $R^2_{\text{pred}}=0.76907$ [19].

From the above models, it can be seen that maxHCsats and 3D radial distribution function (RDF) descriptors also appear in the model generated in this research, indicating that these descriptors have great influence on the activities of the inhibitory compounds against Mycobacterium tuberculosis. The validation parameters reported in this work and those reported in the literature are all in agreement with the criteria presented in Table 2, which confirms the robustness of the model.

Descriptive statistics of the activity values of the training and test set data reported in Table 5 show that the test set value range (8.2854 to 4.9074) was within the training set value range (8.0899 to 4.7441). Also, the mean and standard deviation of the test set activity values (6.4989 and 0.93) were approximately similar to those of the training set (6.6222 and 0.96). This indicates that the test set is interpolative within the training set. Therefore, Kennard and Stone's algorithm employed in this study was able to generate a test set that is a good reflection of the training set.
Table 5
Descriptive statistics of the inhibition data.
| Statistical parameter | Training set | Test set |
|---|---|---|
| Number of sample points | 35 | 15 |
| Range | 3.3458 | 3.678 |
| Maximum | 8.0899 | 8.2854 |
| Minimum | 4.7441 | 4.9074 |
| Mean | 6.622234 | 6.498873 |
| Median | 6.3981 | 6.1213 |
| Variance | 0.924712 | 0.866467 |
| Standard deviation | 0.96162 | 0.93084 |
| Mean absolute deviation | 0.871588 | 0.703515 |
| Skewness | −8.48E−04 | 0.87066 |
| Kurtosis | −1.24682 | 0.153415 |

The names and symbols of the descriptors used in the QSAR optimization model are reported in Table 6. The presence of three 2D and three 3D descriptors in the model suggests that these types of descriptors are able to better characterize the anti-Mycobacterium tuberculosis activities of the compounds. Pearson's correlation matrix and statistics for the six descriptors employed in the QSAR model are reported in Table 7, which shows clearly that the correlation coefficients between each pair of descriptors are very low; thus, it can be inferred that there is no significant intercorrelation among the descriptors used in building the model. The absolute t-statistic value for each descriptor is greater than 2 at the 95% significance level, which indicates that the selected descriptors are good. The estimated Variance Inflation Factor (VIF) values for all the descriptors are less than 4, which implies that the model is statistically significant and the descriptors are orthogonal.
Table 6
List of some descriptors used in the QSAR optimization model.
| S/number | Descriptor symbol | Name of descriptor | Class |
|---|---|---|---|
| 1 | AATS7s | Average Moreau-Broto autocorrelation-lag 7/weighted by I-state | 2D |
| 2 | nHBint3 | Count of E-state descriptors of strength for potential hydrogen bonds of path length 3 | 2D |
| 3 | minHCsatu | Minimum atom-type H E-state: H on C sp3 bonded to unsaturated C | 2D |
| 4 | TDB9e | 3D topological distance based autocorrelation-lag 9/weighted by Sanderson electronegativities | 3D |
| 5 | RDF90i | Radial distribution function-090/weighted by relative first ionization potential | 3D |
| 6 | RDF110s | Radial distribution function-110/weighted by relative I-state | 3D |

Table 7
Pearson’s correlation matrix and statistics for descriptor used in the QSAR optimization model.
| Descriptor | AATS7s | nHBint3 | minHCsatu | TDB9e | RDF90i | RDF110s | t-Stat | VIF |
|---|---|---|---|---|---|---|---|---|
| AATS7s | 1 | | | | | | −3.9153 | 1.8931 |
| nHBint3 | −0.29824 | 1 | | | | | 11.6469 | 1.2779 |
| minHCsatu | 0.196097 | 0.269067 | 1 | | | | 10.0386 | 3.6622 |
| TDB9e | 0.446768 | −0.19131 | −0.14868 | 1 | | | 5.66824 | 1.3493 |
| RDF90i | 0.097382 | −0.13902 | −0.39183 | 0.144839 | 1 | | 9.45783 | 3.0968 |
| RDF110s | 0.116862 | −0.25217 | −0.66819 | 0.208747 | 0.227911 | 1 | −5.5848 | 3.0275 |

The mean effect (ME) values and standardized regression coefficients (bjs) reported in Table 8 provide important information on the effect of the molecular descriptors and their degree of contribution to the developed model. The signs and magnitudes of these descriptors combined with their mean effects indicate their individual strength and direction in influencing the activity of a compound. The null hypothesis states that there is no significant relationship between the descriptors and the activities of the inhibitor compounds. The p values of the descriptors at the 95% confidence limit shown in Table 8 are all less than 0.05, so the alternative hypothesis is accepted: there is a relationship between the descriptors used in generating the model and the activities of the inhibitor compounds, which takes preference over the null hypothesis.
Table 8
Specification of entered descriptors in genetic algorithm-multiple regression model.
| Descriptor | Standardized regression coefficient (bjs) | Mean effect (ME) | p value (confidence interval) |
|---|---|---|---|
| AATS7s | −0.2769 | −0.31421 | 0.000527 |
| nHBint3 | 0.67675 | 0.153246 | 3E−12 |
| minHCsatu | 0.987436 | 0.58264 | 8.84E−11 |
| TDB9e | 0.338438 | 0.351968 | 4.48E−06 |
| RDF90i | 1.097495 | 0.34097 | 3.25E−10 |
| RDF110s | −0.49948 | −0.11461 | 5.62E−06 |

The Y-randomization parameter test is reported in Table 9. The low R² and Q² values for several trials confirm that the developed QSAR model is robust, while a cR²p value greater than 0.5 affirms that the created model is powerful and not inferred by chance.
Table 9
Y-Randomization parameters test.
| Model | R | R² | Q² |
|---|---|---|---|
| Original | 0.962302 | 0.926026 | 0.895386 |
| Random 1 | 0.387394 | 0.150074 | −0.28301 |
| Random 2 | 0.534646 | 0.285847 | −0.15518 |
| Random 3 | 0.357333 | 0.127687 | −0.43633 |
| Random 4 | 0.509588 | 0.25968 | −0.08884 |
| Random 5 | 0.231807 | 0.053735 | −0.60188 |
| Random 6 | 0.140884 | 0.019848 | −0.61556 |
| Random 7 | 0.513288 | 0.263465 | −0.11043 |
| Random 8 | 0.548099 | 0.300412 | −0.062 |
| Random 9 | 0.36673 | 0.134491 | −0.25601 |
| Random 10 | 0.505524 | 0.255554 | −0.12398 |

Random model parameters: average R = 0.409529, average R² = 0.185079, average Q² = −0.27332, cR²p = 0.837983.
### 3.1. Interpretation of Selected Descriptors
AATS7s is the average Moreau-Broto autocorrelation of lag 7 weighted by I-state. It is based on a spatially dependent autocorrelation function that measures the strength of the relationship between observations (atomic or molecular properties) and the space separating them (the lag). This descriptor is obtained by taking the molecule's atoms as a set of discrete points in space and an atomic property as the function evaluated at those points. When this descriptor is calculated on the molecular graph, the lag coincides with the topological distance between any pair of vertices. AATS7s is defined on the molecular graph using atomic masses (m), Sanderson electronegativity (e), and the inductive effect of pairs of atoms 7 bonds apart as the weighting scheme. These observations suggest that the atomic masses and the electronic distribution of the atoms that make up a molecule have a significant effect on the antitubercular activity of the dataset. In addition, the sign of the regression coefficient of each descriptor indicates its direction of influence in the model: a positive regression coefficient augments the activity profile of a compound, while a negative coefficient diminishes it.

The electrotopological state atom-type descriptor nHBint3 represents the count of E-state descriptors of strength for potential hydrogen bonds of path length 3. It is a spatially dependent 2D autocorrelation descriptor incorporating the Moran coefficient (index) in the measurement of the strength of the relationship between observations and the space separating them. This Moran autocorrelation descriptor was defined on the molecular graphs using atomic masses (m), Sanderson electronegativity (e), and the inductive effect of pairs of atoms 3 bonds apart as the weighting scheme. These observations support the claim that atomic masses and electronic distribution have a significant effect on the antitubercular activities of the molecules. The positive mean effect of this descriptor indicates that the inhibitory activity of the 1,2,4-Triazole derivatives increases with the number of hydrogen bonds of path length 3.

minHCsatu (minimum atom-type H E-state: H on C sp3 bonded to unsaturated C) is a 2D electrotopological state (E-state index) atom-type descriptor. In general, E-state indices encode the intrinsic electronic state of each atom as perturbed by the electronic influences of all other atoms in the molecule within the context of its topology. minHCsatu favors the addition of –CH3 to an unsaturated C atom, for example, in a benzene ring. The positive contribution of minHCsatu indicates that the inhibitory activity of the 1,2,4-Triazole derivatives increases with an increase in this molecular descriptor.

TDB9e (3D topological distance based autocorrelation-lag 9/weighted by Sanderson electronegativities) is positively correlated with the antitubercular activity, meaning that an increase in its value augments the activity of the studied compounds. The descriptor measures the strength of the connection between atomic charges 9 bonds apart. The number of rings in the molecular system tends to increase the value of this descriptor, possibly because a larger amount of π-electrons in the molecular system increases the charge difference between atoms 9 bonds apart.
The positive mean effect indicates a positive impact on the activity of the inhibitory compounds: increasing the value of this descriptor produces higher activity.

RDF90i and RDF110s are 3D radial distribution function descriptors. The radial distribution function is the probability distribution of finding an atom in a spherical volume of a given radius; RDF descriptors are independent of the size and rotation of the entire molecule, describe steric hindrance and structure-activity properties, and provide valuable information about bond distances, ring types, planar and nonplanar systems, and atom types. The presence of these descriptors in the model suggests a linear relationship between the antitubercular activity and the 3D molecular distribution of the weighting properties calculated from the geometrical centers of each molecule. RDF90i, with a positive mean effect (ME), has a positive impact on the activity, while RDF110s, with a negative mean effect, contributes negatively to the activity.

The predicted activity is plotted against the experimental activity of the training and test sets in Figures 2 and 3. The R² value of 0.9202 for the training set and 0.8842 for the test set recorded in this study are in agreement with the GFA-derived criteria reported in Table 2, which confirms the stability, reliability, and robustness of the model. The plot of standardized residuals versus experimental activity shown in Figure 4 indicates a symmetric, random scattering of data points above and below the line of zero standardized residual, and all data points fall within the boundary defined by a standardized residual of ±2. This implies that there was no systematic error in the developed model, as the residuals spread on both sides of zero [20].
Figure 2
Plot of predicted activity against experimental activity of the training set.
Figure 3
Plot of predicted activity against experimental activity of the test set.
Figure 4
Plot of standardized residuals versus experimental activity.
The standardized residuals of the dataset were plotted against their leverages for every compound, enabling the detection of outliers and influential molecules in the model. The Williams plot of standardized residuals versus leverage values is shown in Figure 5. It is evident that no response outlier is found, since all the compounds of both the training and test sets lie within the applicability domain of the square area, except for three structurally influential molecules (compounds 50, 39, and 36). These three compounds are termed structurally influential because their leverage values are greater than the warning leverage (h∗ = 0.60); their high leverage values sway the performance of the model, which is attributed to strong differences in their chemical structures compared with the other compounds in the dataset.
Figure 5
The Williams plot of the standardized residuals versus the leverage value.
### 3.2. Molecular Docking
A molecular docking study was carried out between the target (Mtb CYP121) and the 1,2,4-Triazole derivatives. All the compounds were found to inhibit the receptor by occupying the active sites of the target protein (Mtb CYP121). For the target protein, the binding affinity values of all the compounds range from −5.1 to −14.6 kcal/mol, as reported in Table 10. Four ligands (compounds 7, 8, 13, and 14) have higher binding scores, ranging from −10.3 to −14.6 kcal/mol, greater in magnitude than those of their co-ligands.
Table 10
Binding affinity of 1,2,4-Triazole derivatives with the M. tuberculosis target (Mtb CYP121).
All ligands were docked against the Mtb CYP121 target.

| Ligand | Binding affinity (kcal/mol) | Ligand | Binding affinity (kcal/mol) |
|---|---|---|---|
| 1 | −7.3 | 26 | −6.7 |
| 2 | −7.8 | 27 | −5.4 |
| 3 | −8.5 | 28 | −5.1 |
| 4 | −9.1 | 29 | −5.4 |
| 5 | −9.6 | 30 | −7.5 |
| 6 | −9.8 | 31 | −7.3 |
| 7 | −10.3 | 32 | −6.6 |
| 8 | −14.6 | 33 | −5.6 |
| 9 | −9.6 | 34 | −6 |
| 10 | −9.6 | 35 | −6.3 |
| 12 | −9.2 | 36 | −7.8 |
| 13 | −11.2 | 37 | −7.8 |
| 14 | −11.2 | 38 | −8.4 |
| 15 | −5.1 | 39 | −5.7 |
| 11 | −9.9 | 40 | −6.3 |
| 16 | −5.3 | 41 | −6.3 |
| 17 | −6.1 | 42 | −5.9 |
| 18 | −7.9 | 43 | −5.6 |
| 19 | −7 | 44 | −5.5 |
| 20 | −7.8 | 45 | −6.2 |
| 21 | −5.5 | 46 | −5.7 |
| 22 | −5.7 | 47 | −5.9 |
| 23 | −5.5 | 48 | −5.7 |
| 24 | −6.9 | 49 | −5.2 |
| 25 | −6.6 | 50 | −7.8 |

These four ligands were visualized and analyzed in Discovery Studio Visualizer, as shown in Figure 6. The binding affinities, hydrogen bonds, and hydrophobic interactions of ligands 7, 8, 13, and 14 with the M. tuberculosis target (Mtb CYP121) are reported in Table 11. Ligand 7 formed hydrophobic interactions with VAL83, PRO285, VAL78, and ALA167 of the target site and also formed a hydrogen bond (2.16131 Å) with GLN385. Ligand 8 made three hydrogen bonds (2.82894, 2.34089, and 2.47314 Å) with ALA337, HIS343, and ALA233 of the target, while hydrophobic interactions were observed with PHE280, ALA233, CYS345, MET86, ALA233, and PRO346. Ligand 13 made two hydrogen bonds (2.34218 and 3.0328 Å) with ASN74 and GLN385, while VAL78, ALA233, PRO285, ALA233, PRO346, and ALA167 formed the hydrophobic interactions. Ligand 14 formed hydrophobic interactions with LEU164, VAL228, VAL78, ALA233, PRO285, ALA233, PRO346, ALA167, and ALA233, while two hydrogen bonds (2.36479 and 3.03627 Å) were formed with ASN74 and GLN385 of the target.
Table 11
Binding affinity, hydrogen bonds, and hydrophobic interactions of ligands 7, 8, 13, and 14 with the M. tuberculosis target (Mtb CYP121).
| Ligand | Binding affinity (kcal/mol) | Target | H-bond residue(s) | H-bond length(s) (Å) | Hydrophobic interactions |
|---|---|---|---|---|---|
| 7 | −10.3 | Mtb CYP121 | GLN385 | 2.16131 | VAL83, PRO285, VAL78, VAL78, ALA167 |
| 8 | −14.6 | Mtb CYP121 | ALA337, HIS343, ALA233 | 2.82894, 2.34089, 2.47314 | PHE280, ALA233, CYS345, MET86, ALA233, PRO346 |
| 13 | −11.2 | Mtb CYP121 | ASN74, GLN385 | 2.34218, 3.0328 | VAL78, ALA233, PRO285, ALA233, PRO346, ALA167 |
| 14 | −11.2 | Mtb CYP121 | ASN74, GLN385 | 2.36479, 3.03627 | LEU164, VAL228, VAL78, ALA233, PRO285, ALA233, PRO346, ALA167, ALA233 |

Figure 6
(7a) and (7b) show the 3D and 2D interactions between Mtb CYP121 and ligand 7. (8a) and (8b) show the 3D and 2D interactions between Mtb CYP121 and ligand 8. (13a) and (13b) show the 3D and 2D interactions between Mtb CYP121 and ligand 13. (14a) and (14b) show the 3D and 2D interactions between Mtb CYP121 and ligand 14.
Binding affinity of 1,2,4-Triazole derivatives withM. tuberculosis target (Mtb CYP121).
Ligand Target Binding affinity (BA)Kcal/mol 1 Mtb CYP121 −7.3 2 Mtb CYP121 −7.8 3 Mtb CYP121 −8.5 4 Mtb CYP121 −9.1 5 Mtb CYP121 −9.6 6 Mtb CYP121 −9.8 7 Mtb CYP121 −10.3 8 Mtb CYP121 −14.6 9 Mtb CYP121 −9.6 10 Mtb CYP121 −9.6 12 Mtb CYP121 −9.2 13 Mtb CYP121 −11.2 14 Mtb CYP121 −11.2 15 Mtb CYP121 −5.1 11 Mtb CYP121 −9.9 16 Mtb CYP121 −5.3 17 Mtb CYP121 −6.1 18 Mtb CYP121 −7.9 19 Mtb CYP121 −7 20 Mtb CYP121 −7.8 21 Mtb CYP121 −5.5 22 Mtb CYP121 −5.7 23 Mtb CYP121 −5.5 24 Mtb CYP121 −6.9 25 Mtb CYP121 −6.6 26 Mtb CYP121 −6.7 27 Mtb CYP121 −5.4 28 Mtb CYP121 −5.1 29 Mtb CYP121 −5.4 30 Mtb CYP121 −7.5 31 Mtb CYP121 −7.3 32 Mtb CYP121 −6.6 33 Mtb CYP121 −5.6 34 Mtb CYP121 −6 35 Mtb CYP121 −6.3 36 Mtb CYP121 −7.8 37 Mtb CYP121 −7.8 38 Mtb CYP121 −8.4 39 Mtb CYP121 −5.7 40 Mtb CYP121 −6.3 41 Mtb CYP121 −6.3 42 Mtb CYP121 −5.9 43 Mtb CYP121 −5.6 44 Mtb CYP121 −5.5 45 Mtb CYP121 −6.2 46 Mtb CYP121 −5.7 47 Mtb CYP121 −5.9 48 Mtb CYP121 −5.7 49 Mtb CYP121 −5.2 50 Mtb CYP121 −7.8These four ligands were visualized and analyzed in Discovery Studio Visualizer as shown in Figure6. Binding affinity, hydrogen bond, and hydrophobic bond of ligands 7, 8, 13, and 14 withM. tuberculosis target (Mtb CYP121) are reported in Table 11. Ligand (compound 7) formed hydrophobic interactions with VAL83 PRO285, VAL78, and ALA167 of the target site. In addition, ligand 7 also forms hydrogen bonds (2.16131 Å) with GLN385. Ligand 8 made three hydrogen bonds (2.82894, 2.34089, and 2.47314 Å) with ALA337, HIS343, and ALA233 of the target, while hydrophobic interactions were observed with PHE280, ALA233, CYS345, MET86, ALA233, and PRO346. Ligand 13 made two hydrogen bonds (2.34218 and 3.0328 Å) with ASN74 and GLN385, while VAL78, ALA233, PRO285, ALA233, PRO346, and ALA167 form the hydrophobic interaction. Ligand 14 formed hydrophobic interaction with LEU164, VAL228, VAL78, ALA233, PRO285, ALA233, PRO346, ALA167, and ALA233, while two hydrogen bonds (2.36479 and 3.03627 Å) were formed between ASN74 and GLN385 of the target.Table 11
Binding affinity, hydrogen bond, and hydrophobic bond of ligands 7, 8, 13, and 14 withM. tuberculosis target (Mtb CYP121).
Ligand Binding affinity (BA)Kcal/mol Target Hydrogen bond Hydrophobic interaction Amino acid Bond length (Å) Amino acid 7 −10.3 Mtb CYP121 GLN385 2.16131 VAL83, PRO285, VAL78, VAL78, LA167 8 −14.6 Mtb CYP121 ALA337HIS343ALA233 2.828942.340892.47314 PHE280, ALA233, CYS345, MET86, ALA233, PRO346 13 −11.2 Mtb CYP121 ASN74GLN385 2.342183.0328 VAL78, ALA233, PRO285, ALA233, PRO346ALA167 14 −11.2 Mtb CYP121 ASN74GLN385 2.364793.03627 LEU164, VAL228, VAL78, ALA233,PRO285ALA233, PRO346, ALA167, ALA233Figure 6
(7a) and (7b) show the 3D and 2D interactions between Mtb CYP121 and ligand 7. (8a) and (8b) show the 3D and 2D interactions between Mtb CYP121 and ligand 8. (13a) and (13b) show the 3D and 2D interactions between Mtb CYP121 and ligand 13. (14a) and (14b) show the 3D and 2D interactions between Mtb CYP121 and ligand 14.
## 4. Conclusion
The model with 2D and 3D descriptors is of higher excellence and presents a satisfactory correlation with the anti-Mycobacterium tuberculosis activity. The combination of 2D and 3D descriptors produces a better model to predict the anti-Mycobacterium tuberculosis activities of these compounds. The QSAR model generated met the criteria for minimum recommended value of validation parameters for a generally acceptable QSAR model. The molecular docking analysis has shown that nearly all the 1,2,4-Triazole derivatives potentially inhibit Mtb CYP121. However, compounds 7, 8, 13, and 14 have higher bind score ranging from −10.03 to −11.02 kcal/mol. These four compounds were able to be docked deeply within the binding pocket region of the Mtb CYP121, forming a hydrogen bond and hydrophobic interactions with amino acid of the target. The QSAR model generated provides a valuable approach for ligand base design, while the molecular docking studies provide a valuable approach for structure base design. These two approaches will be of great help for pharmaceutical and medicinal chemists to design and synthesize new anti-Mycobacterium tuberculosis compounds.
---
# QSAR Modeling and Molecular Docking Analysis of Some Active Compounds against Mycobacterium tuberculosis Receptor (Mtb CYP121)

**Authors:** Shola Elijah Adeniji; Sani Uba; Adamu Uzairu

**Journal:** Journal of Pathogens

(2018)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2018/1018694

---
## Abstract
A quantitative structure-activity relationship (QSAR) study was performed to develop a model that relates the structures of 50 compounds to their activities against M. tuberculosis. The compounds were optimized by employing density functional theory (DFT) with B3LYP/6-31G*. The Genetic Function Algorithm (GFA) was used to select the descriptors and to generate the correlation model that relates the structural features of the compounds to their biological activities. The optimum model has a squared correlation coefficient (R²) of 0.9202, an adjusted squared correlation coefficient (R²adj) of 0.9101, and a leave-one-out (LOO) cross-validation coefficient (Q²cv) of 0.8954. The external validation test used to confirm the predictive power of the built model gave an R²pred value of 0.8842. These parameters confirm the stability and robustness of the model. Docking analysis identified a best compound with a high binding affinity of −14.6 kcal/mol, which formed hydrophobic interactions and hydrogen bonds with amino acid residues of the M. tuberculosis cytochrome Mtb CYP121. Together, the QSAR and molecular docking studies provide a valuable approach for pharmaceutical and medicinal chemists to design and synthesize new anti-Mycobacterium tuberculosis compounds.
---
## Body
## 1. Introduction
Mycobacterium tuberculosis is the bacterial species responsible for tuberculosis (TB). TB mainly affects the lungs but can spread to other parts of the body, such as the spine, kidneys, and brain, unless urgent treatment is provided. Tuberculosis remains one of the most prevalent infectious bacterial diseases, resulting in the death of 1.4 million people worldwide [1]. Drugs such as isoniazid, rifampicin, ciprofloxacin, and ethambutol are available as treatments; however, the emergence of multidrug-resistant (MDR) and extensively drug-resistant (XDR) tuberculosis poses a major challenge to successful treatment [2] and has driven the development of new therapeutics against diverse strains of M. tuberculosis [3]. Newly synthesized 1,2,4-Triazole derivatives demonstrate tuberculosis inhibition activity [4], but developing novel molecules by trial and error is time-consuming and costly.

Quantitative structure-activity relationship (QSAR) modeling plays a crucial part in novel drug design via a ligand-based approach [5]. The key advantage of the QSAR method is the possibility of predicting the properties of new chemical compounds without the need to synthesize and test them. The technique is broadly utilized for the prediction of physicochemical properties in the chemical, industrial, pharmaceutical, biological, and environmental spheres [6], and QSAR strategies save resources and accelerate the development of new molecules for use as drugs, materials, and additives [7]. Molecular docking, meanwhile, is a computational method used to determine the binding strength between active-site residues and specific molecules [8]; it is an expedient tool in the drug discovery field for investigating the binding compatibility of molecules (ligands) to a target (receptor) [9].

The aim of this research was to develop a QSAR model to predict the activity of 1,2,4-Triazole derivatives as potent anti-Mycobacterium tuberculosis compounds and to elucidate the interaction between the inhibitor molecules and the Mycobacterium tuberculosis target site.
## 2. Materials and Methods
### 2.1. Data Collection
The fifty 1,2,4-Triazole derivatives with potent antitubercular activity used in this study were obtained from the literature [4].
### 2.2. Biological Activities (BA)
The biological activities of the 1,2,4-Triazole derivatives against Mycobacterium tuberculosis in the aerobic active stage were initially expressed as percentage inhibition (%) and then converted to a logarithmic unit using (1), in order to improve linearity and bring the activity values closer to a normal distribution. The structures and biological activities of these compounds are presented in Table 1.

$$\mathrm{pBA} = \log\left(\frac{\text{molecular weight (g/mol)}}{\text{dose (g/mol)}} \times \frac{\text{percentage }\%}{100 - \text{percentage }\%}\right) \tag{1}$$

Table 1
Molecular structure of 1,2,4-Triazole derivatives and their activities.
| S/N | Experimental activity (pBA) | S/N | Experimental activity (pBA) |
|---|---|---|---|
| 1 | 4.925 | 26ᵃ | 7.0123 |
| 2ᵃ | 5.0345 | 27 | 6.5267 |
| 3 | 5.0064 | 28 | 5.7405 |
| 4 | 5.7386 | 29ᵃ | 5.6533 |
| 5ᵃ | 5.5994 | 30ᵃ | 6.1923 |
| 6ᵃ | 5.4543 | 31 | 7.3233 |
| 7 | 4.7441 | 32 | 6.0097 |
| 8 | 6.1674 | 33 | 6.0928 |
| 9ᵃ | 6.3456 | 34 | 7.3279 |
| 10 | 7.4134 | 35 | 6.8568 |
| 11 | 5.7441 | 36ᵃ | 6.2234 |
| 12 | 5.9258 | 37 | 7.3079 |
| 13ᵃ | 5.6754 | 38 | 7.314 |
| 14 | 6.3793 | 39ᵃ | 8.5854 |
| 15 | 6.1667 | 40 | 8.0615 |
| 16ᵃ | 5.8765 | 41 | 8.0615 |
| 17 | 6.4171 | 42ᵃ | 6.8494 |
| 18 | 5.9413 | 43 | 7.9432 |
| 19 | 7.6397 | 44ᵃ | 7.4535 |
| 20 | 8.0899 | 45 | 7.9759 |
| 21 | 6.3981 | 46 | 7.9759 |
| 22 | 5.8131 | 47 | 7.9294 |
| 23 | 6.2878 | 48ᵃ | 6.1213 |
| 24 | 5.7268 | 49 | 5.4406 |
| 25 | 7.366 | 50ᵃ | 4.9074 |

ᵃ Test-set compound. (The molecular structure drawings from the original table are not reproduced here.)
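Under the reading of Eq. (1) reconstructed above, which is an assumption given how badly the formula extracted, the conversion from percentage inhibition to pBA is a one-liner. A minimal sketch (the function name is illustrative):

```python
import math

def p_ba(mol_weight: float, dose: float, percent: float) -> float:
    """pBA per Eq. (1): log10((MW / dose) * (% / (100 - %))).

    mol_weight and dose are in g/mol, percent is the percentage inhibition;
    this reading of the garbled published formula is an assumption.
    """
    return math.log10((mol_weight / dose) * (percent / (100.0 - percent)))

# Example with hypothetical values: MW 350 g/mol, dose 6.25 g/mol, 90% inhibition
print(round(p_ba(350.0, 6.25, 90.0), 4))
```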
### 2.3. Geometry Optimization
The chemical structures of the molecules were drawn with ChemDraw Ultra Version 12.0. Each molecule was first preoptimized with the Merck molecular mechanics force field (MMFF) and then reoptimized with density functional theory (DFT) using the B3LYP functional and the 6-31G* basis set [10, 11], with the aid of Spartan 14 Version 1.1.0 software.
### 2.4. Molecular Descriptor Calculation
Molecular descriptors are numerical values that describe the properties of a molecule. Descriptor calculation for all 50 of the 1,2,4-Triazole derivatives was done using PaDEL-Descriptor Version 2.20 software; a total of 1876 molecular descriptors were calculated.
### 2.5. Normalization and Data Pretreatment
The descriptor values were normalized using (2) in order to give each variable the same opportunity at the onset to influence the model [12]:

$$X_{\mathrm{norm}} = \frac{X_i - X_{\min}}{X_{\max} - X_{\min}}, \tag{2}$$

where $X_i$ is the value of a descriptor for a given molecule and $X_{\max}$ and $X_{\min}$ are the maximum and minimum values of the corresponding descriptor column. The normalized data were then subjected to pretreatment using data pretreatment software in order to remove noise and redundant descriptors.
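Equation (2) is ordinary column-wise min-max scaling. A minimal NumPy sketch (the guard against constant columns is an added assumption, since Eq. (2) divides by zero for them):

```python
import numpy as np

def min_max_normalize(X: np.ndarray) -> np.ndarray:
    """Column-wise min-max scaling of a descriptor matrix, Eq. (2).

    Maps each descriptor (column) onto [0, 1] so that every variable has
    the same opportunity to influence the model.
    """
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    span = np.where(x_max > x_min, x_max - x_min, 1.0)  # guard constant columns
    return (X - x_min) / span

# Example: three molecules, two descriptors
X = np.array([[1.0, 10.0], [2.0, 30.0], [3.0, 20.0]])
print(min_max_normalize(X))
```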
### 2.6. Training and Test Set
The dataset was split into a training set and a test set by employing Kennard and Stone's algorithm. The training set, comprising 70% of the dataset, was used to build the model, while the remaining 30% (the test set) was used to validate the built model.
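Kennard and Stone's algorithm is deterministic: it seeds the selection with the two most distant compounds in descriptor space and then repeatedly adds the compound farthest from its nearest already-selected neighbour, so the training set spans the descriptor space. A minimal NumPy sketch (the function name and the Euclidean metric are illustrative assumptions):

```python
import numpy as np

def kennard_stone(X: np.ndarray, n_train: int) -> list:
    """Return indices of n_train rows of X chosen by Kennard-Stone;
    the remaining rows form the test set."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    i, j = np.unravel_index(np.argmax(d), d.shape)              # two most distant points
    selected = [int(i), int(j)]
    remaining = [k for k in range(len(X)) if k not in selected]
    while len(selected) < n_train:
        # pick the remaining sample farthest from its nearest selected neighbour
        min_d = d[np.ix_(remaining, selected)].min(axis=1)
        nxt = remaining[int(np.argmax(min_d))]
        selected.append(nxt)
        remaining.remove(nxt)
    return selected
```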
### 2.7. Relative Importance of Each Descriptor in the Model
The absolute value of the mean effect of each descriptor was used to evaluate its relative importance and contribution to the model. The mean effect is defined as

$$\mathrm{ME}_j = \frac{\beta_j \sum_{i}^{n} D_{ij}}{\sum_{j}^{m} \left(\beta_j \sum_{i}^{n} D_{ij}\right)}, \tag{3}$$

where $\mathrm{ME}_j$ is the mean effect of descriptor $j$ in the model, $\beta_j$ is the coefficient of descriptor $j$, $D_{ij}$ is the value of descriptor $j$ for molecule $i$ in the training-set data matrix, $m$ is the number of descriptors in the model, and $n$ is the number of molecules in the training set [13].
### 2.8. Degree of Contribution of Selected Descriptors
The contribution of each descriptor to the model was measured by its standardized regression coefficient $b_j^s$, calculated using (4):

$$b_j^s = \frac{b_j S_j}{S_y}, \tag{4}$$

where $b_j$ is the regression coefficient of descriptor $j$ and $S_j$ and $S_y$ are the standard deviations of that descriptor and of the activity, respectively. The statistic $b_j^s$ assigns greater importance to molecular descriptors that exhibit larger absolute standardized coefficients.
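Equations (3) and (4) are simple aggregations over the training-set data matrix. A minimal NumPy sketch (helper names are illustrative):

```python
import numpy as np

def mean_effects(coefs: np.ndarray, D: np.ndarray) -> np.ndarray:
    """Mean effect of each descriptor, Eq. (3).

    coefs: regression coefficients, one per descriptor;
    D: training-set data matrix (molecules x descriptors).
    """
    contrib = coefs * D.sum(axis=0)   # beta_j * sum_i D_ij
    return contrib / contrib.sum()    # normalise over all descriptors

def standardized_coefs(coefs: np.ndarray, D: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Standardized regression coefficients, Eq. (4): b_j * S_j / S_y."""
    return coefs * D.std(axis=0, ddof=1) / y.std(ddof=1)
```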
### 2.9. Internal Validation of Model
Internal validation of the model was carried out with Materials Studio Version 8 software, employing the Genetic Function Approximation (GFA) method. Candidate models were ranked using the lack-of-fit (LOF) score, measured with a slight variation of the original Friedman formula so that the best fitness score can be obtained [14]:

$$\mathrm{LOF} = \frac{\mathrm{SEE}}{\left(1 - \dfrac{C + d \times p}{M}\right)^{2}}, \tag{5}$$

where SEE is the standard error of estimate defined in (6), $C$ is the number of terms in the model, $d$ is a user-defined smoothing parameter, $p$ is the total number of descriptors contained in the model, and $M$ is the number of samples in the training set. SEE is defined as [15]

$$\mathrm{SEE} = \sqrt{\frac{\sum \left(Y_{\mathrm{exp}} - Y_{\mathrm{pred}}\right)^{2}}{N - P - 1}}, \tag{6}$$

where $N$ is the number of training compounds and $P$ the number of descriptors. The squared correlation coefficient $R^2$ gives the fraction of the total variation explained by the model; the closer $R^2$ is to 1.0, the better the model:

$$R^2 = 1 - \frac{\sum \left(Y_{\mathrm{exp}} - Y_{\mathrm{pred}}\right)^{2}}{\sum \left(Y_{\mathrm{exp}} - \bar{Y}_{\mathrm{training}}\right)^{2}}, \tag{7}$$

where $Y_{\mathrm{exp}}$, $Y_{\mathrm{pred}}$, and $\bar{Y}_{\mathrm{training}}$ are the experimental activity, the predicted activity, and the mean experimental activity of the training-set samples, respectively. Since $R^2$ increases with the number of descriptors, it is not by itself a reliable measure of goodness of fit; it is therefore adjusted for the number of explanatory variables:

$$R^2_{\mathrm{adj}} = 1 - \frac{\left(1 - R^2\right)(n - 1)}{n - p - 1}, \tag{8}$$

where $n$ is the number of training compounds and $p$ is the number of descriptors in the model. The strength of the QSAR equation to predict the bioactivity of a compound was determined using the leave-one-out (LOO) cross-validation method, with cross-validation coefficient

$$Q^2_{\mathrm{cv}} = 1 - \frac{\sum \left(Y_{\mathrm{pred}} - Y_{\mathrm{exp}}\right)^{2}}{\sum \left(Y_{\mathrm{exp}} - \bar{Y}_{\mathrm{training}}\right)^{2}}, \tag{9}$$

where $Y_{\mathrm{pred}}$, $Y_{\mathrm{exp}}$, and $\bar{Y}_{\mathrm{training}}$ are the predicted, experimental, and mean experimental activities of the training set.
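Equations (7)-(9) are small enough to implement directly; a minimal sketch is shown below, also covering the external R²test of Eq. (10) in the next subsection via the optional reference mean. scikit-learn is used only to refit the OLS model inside the LOO loop, and the helper names are illustrative:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def r_squared(y_exp, y_pred, y_train_mean=None):
    """Eq. (7); with y_train_mean set to the training-set mean it is Eq. (10)."""
    y_exp, y_pred = np.asarray(y_exp, float), np.asarray(y_pred, float)
    ref = y_exp.mean() if y_train_mean is None else y_train_mean
    return 1.0 - ((y_exp - y_pred) ** 2).sum() / ((y_exp - ref) ** 2).sum()

def adjusted_r2(r2, n_train, n_descriptors):
    """Eq. (8): penalise R^2 for the number of explanatory variables."""
    return 1.0 - (1.0 - r2) * (n_train - 1) / (n_train - n_descriptors - 1)

def q2_loo(X, y):
    """Eq. (9): leave-one-out cross-validated Q^2 for an OLS model."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    preds = np.empty_like(y)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i              # drop one compound
        model = LinearRegression().fit(X[mask], y[mask])
        preds[i] = model.predict(X[i:i + 1])[0]    # predict the held-out one
    return 1.0 - ((preds - y) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```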
### 2.10. External Validation of Model
External validation of the model was assessed by the $R^2_{\mathrm{test}}$ value; the closer $R^2_{\mathrm{test}}$ is to 1.0, the better the model:

$$R^2_{\mathrm{test}} = 1 - \frac{\sum \left(Y_{\mathrm{pred}}^{\mathrm{test}} - Y_{\mathrm{exp}}^{\mathrm{test}}\right)^{2}}{\sum \left(Y_{\mathrm{exp}}^{\mathrm{test}} - \bar{Y}_{\mathrm{training}}\right)^{2}}, \tag{10}$$

where $Y_{\mathrm{pred}}^{\mathrm{test}}$ and $Y_{\mathrm{exp}}^{\mathrm{test}}$ are the predicted and experimental activities of the test set, while $\bar{Y}_{\mathrm{training}}$ is the mean experimental activity of the training set.
### 2.11.Y-Randomization Test
The Y-randomization test is a further external check that the built QSAR model is robust and not the result of chance correlation. It was performed on the training-set data [16]: the activities are repeatedly shuffled, the model is refitted, and the coefficient $cR_p^2$ is computed. For the built model to pass the test, $cR_p^2$ should be greater than 0.5:

$$cR_p^2 = R \times \sqrt{R^2 - \bar{R}_r^2}, \tag{11}$$

where $cR_p^2$ is the coefficient of determination for Y-randomization, $R$ is the correlation coefficient of the original model, and $\bar{R}_r$ is the average $R$ of the random models.
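A minimal sketch of the procedure, refitting an OLS model on shuffled activities. Whether Eq. (11) averages $R_r$ before or after squaring is ambiguous in the text; averaging $R_r$ first is assumed here, and the helper name is illustrative:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def c_rp2(X, y, n_trials=10, seed=0):
    """cRp^2 from Y-randomization, Eq. (11), for an OLS model."""
    rng = np.random.default_rng(seed)
    r2_true = LinearRegression().fit(X, y).score(X, y)   # original model R^2
    r_rand = []
    for _ in range(n_trials):
        y_perm = rng.permutation(y)                      # shuffle activities
        r2_rand = LinearRegression().fit(X, y_perm).score(X, y_perm)
        r_rand.append(np.sqrt(max(r2_rand, 0.0)))
    rr = float(np.mean(r_rand))                          # average R of random models
    return np.sqrt(r2_true) * np.sqrt(max(r2_true - rr ** 2, 0.0))
```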
### 2.12. Evaluation of the Applicability Domain of the Model
Assessment of the applicability domain of a QSAR model is an important step to confirm that the built model makes good predictions within the chemical space for which it was developed [16]. The leverage approach was employed to describe the applicability domain of the QSAR model [17]. The leverage of a given chemical compound is defined as

$$h_i = x_i \left(X^{T} X\right)^{-1} x_i^{T}, \tag{12}$$

where $h_i$ is the leverage of compound $i$, $x_i$ is the descriptor row vector of the query compound, and $X$ is the $n \times k$ descriptor matrix of the training-set compounds used to build the model. As a prediction tool, the warning leverage $h^{*}$, the limit of normal values for X outliers, is defined as

$$h^{*} = \frac{3(d + 1)}{m}, \tag{13}$$

where $m$ is the number of training compounds and $d$ is the number of descriptors in the model. The Williams plot, a plot of standardized residuals versus leverage values, was employed to elucidate the relevance area of the model in terms of chemical space. A compound is considered an outlier if its standardized cross-validated residual is greater than ±3.
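Equations (12) and (13) translate directly into NumPy; the helper names are illustrative. For this model, h* = 3(6 + 1)/35 = 0.6, matching the value quoted with the Williams plot in the Results:

```python
import numpy as np

def leverages(X_train: np.ndarray, X_query: np.ndarray) -> np.ndarray:
    """Eq. (12): h_i = x_i (X^T X)^{-1} x_i^T for each query compound."""
    xtx_inv = np.linalg.inv(X_train.T @ X_train)
    # h_i = sum_{j,k} x_ij * (X^T X)^{-1}_{jk} * x_ik
    return np.einsum("ij,jk,ik->i", X_query, xtx_inv, X_query)

def warning_leverage(n_train: int, n_descriptors: int) -> float:
    """Eq. (13): h* = 3(d + 1)/m."""
    return 3.0 * (n_descriptors + 1) / n_train

print(warning_leverage(35, 6))  # 0.6 for this study's 35 training compounds
```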
### 2.13. Quality Assurance of the Model
The fitting ability, stability, robustness, reliability, and predictive ability of the developed models were evaluated by internal and external validation parameters, which were compared with the minimum recommended values for a generally acceptable QSAR model [17], shown in Table 2.

Table 2
Minimum recommended value of validation parameters for a generally acceptable QSAR model.
| Validation parameter | Name | Value |
|---|---|---|
| R² | Coefficient of determination | ≥0.6 |
| P(95%) | Confidence interval at 95% confidence level | <0.05 |
| Q²cv | Cross-validation coefficient | >0.5 |
| R² − Q²cv | Difference between R² and Q²cv | ≤0.3 |
| N ext. test set | Minimum number of external test sets | ≥5 |
| R²test | Coefficient of determination for external test set | ≥0.6 |
| cRp² | Coefficient of determination for Y-randomization | >0.5 |
### 2.14. Docking Studies
A molecular docking study was carried out in order to determine which of the 1,2,4-Triazole derivatives has the best binding affinity for Mtb CYP121. The structure of Mtb CYP121 used in the study was obtained from the Protein Data Bank (PDB code 51BG). The prepared ligand and receptor are shown in Figure 1. The optimized structures of the 1,2,4-Triazole derivatives, initially saved as SDF files, were converted to PDB files using Spartan 14 V 1.1.4. The prepared ligands were docked against the prepared Mtb CYP121 structure using AutoDock Vina incorporated in the PyRx software, and the docked results were compiled, visualized, and analyzed in Discovery Studio Visualizer.

Figure 1
(a) Prepared structure of Mtb CYP121. (b) 3D structures of the prepared ligands.
## 3. Results and Discussion
QSAR analysis was performed to investigate the structure-activity relationship of the 50 compounds as potent antitubercular agents. The quality of a QSAR model is expressed by its fitting ability, stability, robustness, reliability, and predictive capacity. Experimental and predicted activities for the 1,2,4-Triazole derivatives are presented in Table 3; the low residuals between experimental and predicted activities indicate that the model is highly predictive.

Table 3
Experimental, predicted, and residual values for 1,2,4-Triazole derivatives.
| S/N | Experimental activity (pBA) | Predicted activity (pBA) | Residual |
|---|---|---|---|
| 1 | 4.925 | 4.8922 | 0.0328 |
| 2 | 5.0345 | 4.8716 | 0.1629 |
| 3 | 5.0064 | 5.0941 | −0.0877 |
| 4 | 5.7386 | 5.8308 | −0.0922 |
| 5 | 5.5994 | 5.5803 | 0.0191 |
| 6 | 5.4543 | 5.6969 | −0.2426 |
| 7 | 4.7441 | 4.8047 | −0.0606 |
| 8 | 6.1674 | 6.2999 | −0.1325 |
| 9 | 6.3456 | 6.5053 | −0.1597 |
| 10 | 7.4134 | 7.1548 | 0.2586 |
| 11 | 5.7441 | 6.0862 | −0.3421 |
| 12 | 5.9258 | 5.6383 | 0.2875 |
| 13 | 5.6754 | 5.4834 | 0.192 |
| 14 | 6.3793 | 6.3443 | 0.035 |
| 15 | 6.1667 | 6.5432 | −0.3765 |
| 16 | 5.8765 | 6.8765 | −1.000 |
| 17 | 6.4171 | 6.1354 | 0.2817 |
| 18 | 5.9413 | 6.02517 | −0.08387 |
| 19 | 7.6397 | 7.6055 | 0.0342 |
| 20 | 8.0899 | 7.8436 | 0.2463 |
| 21 | 6.3981 | 6.2094 | 0.1887 |
| 22 | 5.8131 | 6.4308 | −0.6177 |
| 23 | 6.2878 | 6.30457 | −0.01677 |
| 24 | 5.7268 | 5.9933 | −0.2665 |
| 25 | 7.366 | 7.5444 | −0.1784 |
| 26 | 7.0123 | 6.8471 | 0.1652 |
| 27 | 6.5267 | 5.9850 | 0.5417 |
| 28 | 5.7405 | 6.0962 | −0.3557 |
| 29 | 5.6533 | 6.4796 | −0.8263 |
| 30 | 6.1923 | 6.0426 | 0.1497 |
| 31 | 7.3233 | 6.5095 | 0.8138 |
| 32 | 6.0097 | 6.3151 | −0.3054 |
| 33 | 6.0928 | 5.9501 | 0.1427 |
| 34 | 7.3279 | 7.3990 | −0.0711 |
| 35 | 6.8568 | 6.8761 | −0.0193 |
| 36 | 6.2234 | 8.6487 | −2.4253 |
| 37 | 7.3079 | 7.2405 | 0.0674 |
| 38 | 7.314 | 7.5050 | −0.191 |
| 39 | 8.5854 | 5.6969 | 2.8885 |
| 40 | 8.0615 | 8.1009 | −0.0394 |
| 41 | 8.0615 | 7.8073 | 0.2542 |
| 42 | 6.8494 | 7.6746 | −0.8252 |
| 43 | 7.9432 | 7.9352 | 0.008 |
| 44 | 7.4535 | 7.6946 | −0.2411 |
| 45 | 7.9759 | 7.8569 | 0.119 |
| 46 | 7.9759 | 8.2103 | −0.2344 |
| 47 | 7.9294 | 7.9408 | −0.0114 |
| 48 | 6.1213 | 5.9165 | 0.2048 |
| 49 | 5.4406 | 5.2695 | 0.1711 |
| 50 | 4.9074 | 4.8495 | 0.0579 |

The genetic algorithm-multiple linear regression (GA-MLR) investigation led to the selection of six descriptors, which were used to assemble a linear model for predicting activity against Mycobacterium tuberculosis. Five QSAR models were built using the Genetic Function Algorithm (GFA); on the basis of statistical significance, model 1 was selected and is reported below:

$$\begin{split} \mathrm{pBA} ={} & -0.307001458\,\mathrm{AATS7s} + 1.528715398\,\mathrm{nHBint3} \\ & + 3.976720227\,\mathrm{minHCsatu} + 0.016199645\,\mathrm{TDB9e} \\ & + 0.089381479\,\mathrm{RDF90i} - 0.107407822\,\mathrm{RDF110s} \\ & + 4.057082751, \end{split} \tag{14}$$

with Ntrain = 35, R² = 0.9202, R²adj = 0.9102, and Q²cv = 0.8954; external validation on the test set gave R²pred = 0.8842. All the validation parameters for this model, reported in Table 4, satisfy the criteria in Table 2, which confirms the robustness of the model.
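Because Eq. (14) is a plain linear combination, turning PaDEL descriptor values into a predicted pBA is a one-line computation. A minimal sketch that simply restates the fitted coefficients (descriptor values would come from PaDEL output):

```python
def predict_pba(aats7s, nhbint3, minhcsatu, tdb9e, rdf90i, rdf110s):
    """Predicted pBA from the GFA-selected model, Eq. (14)."""
    return (-0.307001458 * aats7s
            + 1.528715398 * nhbint3
            + 3.976720227 * minhcsatu
            + 0.016199645 * tdb9e
            + 0.089381479 * rdf90i
            - 0.107407822 * rdf110s
            + 4.057082751)
```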
Table 4

Validation of the genetic function approximation from Materials Studio.
| S/N | Quantity | Value |
|---|---|---|
| 1 | Friedman LOF | 0.40847300 |
| 2 | R-squared | 0.92023900 |
| 3 | Adjusted R-squared | 0.91017400 |
| 4 | Cross-validated R-squared (Q²cv) | 0.89538600 |
| 5 | Significant regression | Yes |
| 6 | Significance-of-regression F-value | 58.41835200 |
| 7 | Critical SOR F-value (95%) | 2.45854700 |
| 8 | Replicate points | 0 |
| 9 | Computed experimental error | 0.00000000 |
| 10 | Lack-of-fit points | 28 |
| 11 | Min expt. error for nonsignificant LOF (95%) | 0.24688800 |

The QSAR model generated in this research was compared with models obtained in the literature [18, 19], shown below:

$$\begin{split} \mathrm{pMIC} ={} & 4.77374\,(\pm 0.03903) - 0.18609\,(\pm 0.04924)\,\mathrm{AATS4i} \\ & + 0.50382\,(\pm 0.05235)\,\mathrm{SCH\text{-}3} - 0.44712\,(\pm 0.06573)\,\mathrm{AVP\text{-}1} \\ & - 0.22376\,(\pm 0.05623)\,\mathrm{maxHCsats} - 0.18403\,(\pm 0.04374)\,\mathrm{PSA}, \end{split} \tag{15}$$

with Ntrain = 16, R² = 0.9184, Q²cv = 0.84987, and R²pred = 0.79343 [18];

$$\mathrm{pIC_{50}} = -2.040810634\,\mathrm{nCl} - 19.024890361\,\mathrm{MATS2m} + 1.855704759\,\mathrm{RDF140s} + 6.739013671, \tag{16}$$

with Ntrain = 27, R² = 0.9480, R²adj = 0.9350, Q²cv = 0.87994, and R²pred = 0.76907 [19].

In these literature models, maxHCsats and 3D radial distribution function (RDF) descriptors also appear, as in the model generated in this research, indicating that these descriptor families strongly influence the activities of inhibitory compounds against Mycobacterium tuberculosis. The validation parameters reported in this work and in the literature all satisfy the criteria in Table 2, which confirms the robustness of the model.

Descriptive statistics of the activity values of the training and test sets, reported in Table 5, show that the test-set range (4.9074 to 8.2854) is essentially covered by the training-set range (4.7441 to 8.0899). Also, the mean and standard deviation of the test-set activities (6.4989 and 0.93) are close to those of the training set (6.6222 and 0.96). This indicates that the test set is interpolative within the training set; Kennard and Stone's algorithm therefore generated a test set that is a good reflection of the training set.

Table 5
Descriptive statistics of the inhibition data.
| Statistical parameter | Training set | Test set |
|---|---|---|
| Number of sample points | 35 | 15 |
| Range | 3.3458 | 3.678 |
| Maximum | 8.0899 | 8.2854 |
| Minimum | 4.7441 | 4.9074 |
| Mean | 6.622234 | 6.498873 |
| Median | 6.3981 | 6.1213 |
| Variance | 0.924712 | 0.866467 |
| Standard deviation | 0.96162 | 0.93084 |
| Mean absolute deviation | 0.871588 | 0.703515 |
| Skewness | −8.48E−04 | 0.87066 |
| Kurtosis | −1.24682 | 0.153415 |

The names and symbols of the descriptors used in the QSAR optimization model are reported in Table 6. The presence of three 2D and three 3D descriptors in the model suggests that both descriptor types are needed to characterize the anti-Mycobacterium tuberculosis activities of the compounds. Pearson's correlation matrix and the statistics of the six descriptors, reported in Table 7, show that the correlation coefficients between each pair of descriptors are low; it can thus be inferred that there is no significant intercorrelation among the descriptors used in building the model. The absolute t-statistic of each descriptor is greater than 2 at the 95% significance level, indicating that the selected descriptors are good, and the estimated Variance Inflation Factor (VIF) values are all less than 4, implying that the model is statistically significant and the descriptors are close to orthogonal.

Table 6
List of some descriptors used in the QSAR optimization model.
| S/N | Descriptor symbol | Name of descriptor | Class |
|---|---|---|---|
| 1 | AATS7s | Average Moreau-Broto autocorrelation, lag 7, weighted by I-state | 2D |
| 2 | nHBint3 | Count of E-state descriptors of strength for potential hydrogen bonds of path length 3 | 2D |
| 3 | minHCsatu | Minimum atom-type H E-state: H on C sp3 bonded to unsaturated C | 2D |
| 4 | TDB9e | 3D topological distance based autocorrelation, lag 9, weighted by Sanderson electronegativities | 3D |
| 5 | RDF90i | Radial distribution function 090, weighted by relative first ionization potential | 3D |
| 6 | RDF110s | Radial distribution function 110, weighted by relative I-state | 3D |

Table 7
Pearson’s correlation matrix and statistics for descriptor used in the QSAR optimization model.
| Descriptor | AATS7s | nHBint3 | minHCsatu | TDB9e | RDF90i | RDF110s | t-statistic | VIF |
|---|---|---|---|---|---|---|---|---|
| AATS7s | 1 | | | | | | −3.9153 | 1.8931 |
| nHBint3 | −0.29824 | 1 | | | | | 11.6469 | 1.2779 |
| minHCsatu | 0.196097 | 0.269067 | 1 | | | | 10.0386 | 3.6622 |
| TDB9e | 0.446768 | −0.19131 | −0.14868 | 1 | | | 5.66824 | 1.3493 |
| RDF90i | 0.097382 | −0.13902 | −0.39183 | 0.144839 | 1 | | 9.45783 | 3.0968 |
| RDF110s | 0.116862 | −0.25217 | −0.66819 | 0.208747 | 0.227911 | 1 | −5.5848 | 3.0275 |

The mean effect (ME) values and standardized regression coefficients (bjs) reported in Table 8 provide important information on the effect of the molecular descriptors and their degree of contribution to the developed model. The signs and magnitudes of the coefficients, combined with the mean effects, indicate the individual strength and direction of each descriptor's influence on the activity of a compound. The null hypothesis states that there is no significant relationship between the descriptors and the activities of the inhibitor compounds; the p values of the descriptors at the 95% confidence level, shown in Table 8, are all less than 0.05, so the alternative hypothesis is accepted: there is a relationship between the descriptors used in generating the model and the activities of the inhibitor compounds, which takes preference over the null hypothesis.

Table 8
Specification of entered descriptors in genetic algorithm-multiple regression model.
| Descriptor | Standard regression coefficient (bj) | Mean effect (ME) | p value (95% confidence interval) |
|---|---|---|---|
| AATS7s | −0.2769 | −0.31421 | 0.000527 |
| nHBint3 | 0.67675 | 0.153246 | 3E−12 |
| minHCsatu | 0.987436 | 0.58264 | 8.84E−11 |
| TDB9e | 0.338438 | 0.351968 | 4.48E−06 |
| RDF90i | 1.097495 | 0.34097 | 3.25E−10 |
| RDF110s | −0.49948 | −0.11461 | 5.62E−06 |

The Y-randomization test results are reported in Table 9. The low R² and Q² values over the random trials confirm that the developed QSAR model is robust, while the cRp² value greater than 0.5 affirms that the model is powerful and is not inferred by chance.

Table 9
Y-Randomization parameters test.
| Model | R | R² | Q² |
|---|---|---|---|
| Original | 0.962302 | 0.926026 | 0.895386 |
| Random 1 | 0.387394 | 0.150074 | −0.28301 |
| Random 2 | 0.534646 | 0.285847 | −0.15518 |
| Random 3 | 0.357333 | 0.127687 | −0.43633 |
| Random 4 | 0.509588 | 0.25968 | −0.08884 |
| Random 5 | 0.231807 | 0.053735 | −0.60188 |
| Random 6 | 0.140884 | 0.019848 | −0.61556 |
| Random 7 | 0.513288 | 0.263465 | −0.11043 |
| Random 8 | 0.548099 | 0.300412 | −0.062 |
| Random 9 | 0.36673 | 0.134491 | −0.25601 |
| Random 10 | 0.505524 | 0.255554 | −0.12398 |

Random-model parameters:

| Parameter | Value |
|---|---|
| Average R | 0.409529 |
| Average R² | 0.185079 |
| Average Q² | −0.27332 |
| cRp² | 0.837983 |
### 3.1. Interpretation of Selected Descriptors
AATS7s is the average Moreau-Broto autocorrelation of lag 7, weighted by I-state. It is based on a spatially dependent autocorrelation function that measures the strength of the relationship between observations (atomic or molecular properties) and the space separating them (the lag). The descriptor is obtained by taking the atoms of a molecule as a set of discrete points in space and an atomic property as the function evaluated at those points; when it is calculated on the molecular graph, the lag coincides with the topological distance between pairs of vertices. AATS7s is defined on the molecular graph using atomic masses (m), Sanderson electronegativities (e), and the inductive effect of pairs of atoms seven bonds apart as the weighting scheme. These observations suggest that the atomic masses and the electronic distribution of the atoms making up a molecule have a significant effect on the antitubercular activity of the dataset. In addition, the sign of the regression coefficient of each descriptor indicates its direction of influence in the model: a positive regression coefficient augments the activity profile of a compound, while a negative coefficient diminishes it.

The electrotopological-state atom-type descriptor nHBint3 represents the count of E-state descriptors of strength for potential hydrogen bonds of path length 3. It is a spatially dependent 2D autocorrelation descriptor incorporating the Moran coefficient (index) in the measurement of the strength of the relationship between observations and the space separating them. This Moran autocorrelation descriptor was defined on the molecular graph using atomic masses (m), Sanderson electronegativities (e), and the inductive effect of pairs of atoms three bonds apart as the weighting scheme. These observations support the claim that atomic masses and electronic distribution have a significant effect on the antitubercular activities of the molecules. The positive mean effect of this descriptor indicates that the inhibitory activity of the 1,2,4-Triazole derivatives increases with the number of potential hydrogen bonds of path length 3.

minHCsatu (minimum atom-type H E-state: H on C sp3 bonded to unsaturated C) is a 2D electrotopological-state (E-state index) atom-type descriptor. In general, E-state indices encode the intrinsic electronic state of each atom as perturbed by the electronic influence of all other atoms in the molecule, within the context of the molecule's topological character. minHCsatu favors the addition of –CH3 to an unsaturated C atom, for example, on a benzene ring. The positive contribution of minHCsatu indicates that the inhibitory activity of the 1,2,4-Triazole derivatives increases with an increase in this descriptor.

TDB9e (3D topological distance based autocorrelation, lag 9, weighted by Sanderson electronegativities) is positively correlated with the antitubercular activity, meaning that an increase in its value augments the activity of the studied compounds. The descriptor measures the strength of the connection between atomic charges nine bonds apart. The number of rings in the molecular system tends to increase the value of this descriptor, possibly because the additional π-electrons in the system increase the charge difference between atoms nine bonds apart.
Its positive mean effect indicates a positive impact on the activity of the inhibitory compounds: increasing the value of this descriptor produces higher activity.

RDF90i and RDF110s are 3D radial distribution function descriptors, weighted by the relative first ionization potential and the relative I-state, respectively (Table 6). The radial distribution function gives the probability of finding an atom within a spherical volume of a given radius. RDF descriptors are independent of the size and rotation of the molecule; they describe the steric hindrance and structure-activity properties of a molecule and carry valuable information about bond distances, ring types, planar and nonplanar systems, and atom types. The presence of these descriptors in the model suggests a linear relationship between antitubercular activity and the 3D distribution of the weighting property calculated at radii of 9.0 Å and 11.0 Å from the geometrical center of each molecule. RDF90i, with a positive mean effect (ME), has a positive impact on the activity, while RDF110s, with a negative mean effect, contributes negatively.

Predicted versus experimental activities of the training and test sets are shown in Figures 2 and 3. The R² value of 0.9202 for the training set and 0.8842 for the test set are consistent with the GFA-derived values reported in Table 4, confirming the stability, reliability, and robustness of the model. The plot of standardized residuals versus experimental activity in Figure 4 shows a symmetric, random scattering of data points above and below the zero line, with all points within the boundary defined by standardized residuals of ±2. This implies that there was no systematic error in the developed model, as the residuals spread on both sides of zero [20].

Figure 2
Plot of predicted activity against experimental activity of training set.Figure 3
Plot of predicted activity against experimental activity of test set.Figure 4
Plot of standardized residual activity versus experimental activity.

The standardized residuals were plotted against the leverages of every compound in the dataset to reveal outliers and influential molecules; this Williams plot is shown in Figure 5. No outlier was found, since all the compounds of both the training and test sets lie within the applicability domain, except for three structurally influential molecules (compounds 50, 39, and 36) whose leverage values exceed the warning leverage (h* = 0.60). Their high leverage values sway the performance of the model, which was attributed to the strong differences between their chemical structures and those of the other compounds in the dataset.

Figure 5
The Williams plot of the standardized residuals versus the leverage value.
### 3.2. Molecular Docking
A molecular docking study was carried out between the target (Mtb CYP121) and the 1,2,4-Triazole derivatives. All the compounds were found to inhibit the receptor by occupying the active site of the target protein. The binding affinities of all the compounds range from −5.1 to −14.6 kcal/mol, as reported in Table 10. Four ligands (compounds 7, 8, 13, and 14) have higher binding scores, ranging from −10.3 to −14.6 kcal/mol, greater than those of their co-ligands.

Table 10
Binding affinity of 1,2,4-Triazole derivatives with M. tuberculosis target (Mtb CYP121).
All ligands were docked against Mtb CYP121.

| Ligand | Binding affinity (kcal/mol) | Ligand | Binding affinity (kcal/mol) |
|---|---|---|---|
| 1 | −7.3 | 26 | −6.7 |
| 2 | −7.8 | 27 | −5.4 |
| 3 | −8.5 | 28 | −5.1 |
| 4 | −9.1 | 29 | −5.4 |
| 5 | −9.6 | 30 | −7.5 |
| 6 | −9.8 | 31 | −7.3 |
| 7 | −10.3 | 32 | −6.6 |
| 8 | −14.6 | 33 | −5.6 |
| 9 | −9.6 | 34 | −6 |
| 10 | −9.6 | 35 | −6.3 |
| 11 | −9.9 | 36 | −7.8 |
| 12 | −9.2 | 37 | −7.8 |
| 13 | −11.2 | 38 | −8.4 |
| 14 | −11.2 | 39 | −5.7 |
| 15 | −5.1 | 40 | −6.3 |
| 16 | −5.3 | 41 | −6.3 |
| 17 | −6.1 | 42 | −5.9 |
| 18 | −7.9 | 43 | −5.6 |
| 19 | −7 | 44 | −5.5 |
| 20 | −7.8 | 45 | −6.2 |
| 21 | −5.5 | 46 | −5.7 |
| 22 | −5.7 | 47 | −5.9 |
| 23 | −5.5 | 48 | −5.7 |
| 24 | −6.9 | 49 | −5.2 |
| 25 | −6.6 | 50 | −7.8 |

These four ligands were visualized and analyzed in Discovery Studio Visualizer, as shown in Figure 6. The binding affinities, hydrogen bonds, and hydrophobic interactions of ligands 7, 8, 13, and 14 with the M. tuberculosis target (Mtb CYP121) are reported in Table 11. Ligand 7 formed hydrophobic interactions with VAL83, PRO285, VAL78, and ALA167 of the target site and also forms a hydrogen bond (2.16131 Å) with GLN385. Ligand 8 made three hydrogen bonds (2.82894, 2.34089, and 2.47314 Å) with ALA337, HIS343, and ALA233, while hydrophobic interactions were observed with PHE280, ALA233, CYS345, MET86, ALA233, and PRO346. Ligand 13 made two hydrogen bonds (2.34218 and 3.0328 Å) with ASN74 and GLN385, while VAL78, ALA233, PRO285, ALA233, PRO346, and ALA167 form the hydrophobic interactions. Ligand 14 formed hydrophobic interactions with LEU164, VAL228, VAL78, ALA233, PRO285, ALA233, PRO346, ALA167, and ALA233, while two hydrogen bonds (2.36479 and 3.03627 Å) were formed with ASN74 and GLN385 of the target.

Table 11
Binding affinity, hydrogen bond, and hydrophobic bond of ligands 7, 8, 13, and 14 with M. tuberculosis target (Mtb CYP121).
| Ligand | Binding affinity (kcal/mol) | Target | H-bond residue(s) | Bond length(s) (Å) | Hydrophobic interactions |
|---|---|---|---|---|---|
| 7 | −10.3 | Mtb CYP121 | GLN385 | 2.16131 | VAL83, PRO285, VAL78, VAL78, ALA167 |
| 8 | −14.6 | Mtb CYP121 | ALA337, HIS343, ALA233 | 2.82894, 2.34089, 2.47314 | PHE280, ALA233, CYS345, MET86, ALA233, PRO346 |
| 13 | −11.2 | Mtb CYP121 | ASN74, GLN385 | 2.34218, 3.0328 | VAL78, ALA233, PRO285, ALA233, PRO346, ALA167 |
| 14 | −11.2 | Mtb CYP121 | ASN74, GLN385 | 2.36479, 3.03627 | LEU164, VAL228, VAL78, ALA233, PRO285, ALA233, PRO346, ALA167, ALA233 |

Figure 6
(7a) and (7b) show the 3D and 2D interactions between Mtb CYP121 and ligand 7. (8a) and (8b) show the 3D and 2D interactions between Mtb CYP121 and ligand 8. (13a) and (13b) show the 3D and 2D interactions between Mtb CYP121 and ligand 13. (14a) and (14b) show the 3D and 2D interactions between Mtb CYP121 and ligand 14.
## 3.1. Interpretation of Selected Descriptors
AATS7s is average Moreau-Broto Autocorrelation-lag 7/weighted by I-state autocorrelation descriptor. It is based on spatial dependent autocorrelation function that measures the strength of the relationship between observations (atomic or molecular properties) and space separating them (lag). This descriptor is obtained by taking the molecule atoms as the set of discrete points in space and an atomic property as the function evaluated at those points. When this descriptor is calculated on molecular graph, the lag coincides with the topological distance between any pair of the vertices. AATS7s is defined on the molecular graphs using atomic masses (m), Sanderson electronegativity (e), and inductive effect of pairs of atoms 7 bonds apart as the weighting scheme. These observations suggested that atomic masses and electronic distribution of the atoms that made up the molecule had significant effect on the antitubercular activity of the dataset. In addition, the signs of the regression coefficients for each descriptor indicated the direction of influence of the descriptors in the models such that positive regression coefficient associated with a descriptor will augment the activity profile of a compound, while the negative coefficient will diminish the activity of the compound.Electrotopological state atom-type descriptor nHBint3 represents count of E-state descriptors of strength for potential hydrogen bonds of path length 3. It is a spatial dependent 2D autocorrelation descriptor with the incorporation of Moran coefficient (index) in the measurement of the strength of the relationship between observations and space separating them. This Moran autocorrelation descriptor contained in the model reported in this study was defined on the molecular graphs using atomic masses (m), Sanderson electronegativity (e), and inductive effect of pairs of atom 3 bonds apart as the weighting scheme. These observations supported the claim that atomic masses and electronic distribution had significant effect on the antitubercular activities of the molecules The positive mean effect of this descriptor indicates that the inhibitory activity of 1,2,4-Triazole derivatives will increase with hydrogen bonds of path length 3.minHCsatu (minimum atom-type H E-state: H on C sp3 bonded to unsaturated C) is a 2D electrotopological state (E-state indices) atom-type descriptor. In general, E-state indices encode the intrinsic electronic state of each atom as perturbed by the electronic influences of all other atoms in the molecule within the context of the topological character of the molecule. maxHCsatu favors the addition of –CH3 to unsaturated C atom, for example, in benzene ring. Positive contribution of minHCsatu indicates that the inhibitory activity of 1,2,4-Triazole derivatives will increase with increase in the molecular descriptor.TDB9e (3D topological distance based autocorrelation-lag 9/weighted by Sanderson electronegativities) is positively correlated to the anticonvulsant activity, meaning that increase in its value augments the activity of the studied compounds. The descriptor measures the strength of the connection between atomic charges 9 bonds apart. The number of rings in the molecular system tends to increase the values of this descriptor as observed for molecules. This may be due to increase in the amount ofπ-electrons in the molecular system, bringing about increase in the charge difference between atoms 9 bonds apart. 
The positive mean effect indicates a positive impact on the activity of the inhibitory compounds, which means that increasing the value of this descriptor produces higher activity of these compounds.RDF90i and RDF110s are 3D radial distribution functions at 2.5 and 7.0 interatomic distance weighted by atomic masses. The radial distribution function is probability distribution to find an atom in a spherical volume of radius. RDF descriptors are independent of the size and rotation of the entire molecule. They describe the steric hindrance or the structure-activity properties of a molecule. The RDF descriptor provides valuable information about the bond distances, ring types, planar and nonplanar systems, and atom types. The presence of these descriptors in the model suggested the occurrence of a linear relationship between antitubercular activity and the 3D molecular distribution of atomic masses in the molecules calculated at radius of 2.0 Å and 7.0 Å from the geometrical centers of each molecule. RDF90i with positive mean effect (MF) indicates positive impact on the activity, while RDF110s with negative mean effect (MF) indicates negative contribution on the activity.Predicted activity against experimental activity of training and test set was shown in Figures2 and 3. The R2 value of 0.9202 for training set and R2 value of 0.8842 for test set recorded in this study were in agreement with GFA-derived R2 value reported in Table 2. This confirms the stability, reliability, and robustness of the model. Plot of standardized residual versus experimental activity shown in Figure 4 indicates a symmetric distribution or random scattering of data points above and below the standardized residual line equal to zero. Also, all the data points were within the boundary defined by standardized residual of ±2. Thus, it implies that there was no systemic error in model developed as the spread of residuals was pragmatic on both sides of zero [20].Figure 2
Plot of predicted activity against experimental activity of training set.Figure 3
Plot of predicted activity against experimental activity of test set.Figure 4
Plot of standardized residual activity versus experimental activity.The standardized residuals in the dataset were plotted against their leverages for every compound, leading to discovery of outliers and influential molecules in the models. The Williams plot of the standardized residuals versus the leverage value is shown in Figure5. From our result, it is evident that no outlier is found, since all the compounds for both the training and test set were within the applicability domain of the square area except for three compounds that are structurally influential molecules (i.e., compounds 50, 39, and 36). These three compounds are said to be structurally influential molecules, since their leverage values are greater than the warning leverage (h∗=0.60). Their high leverage values are responsible for swaying the performance of the model. This was attributed to strong differences in their chemical structures compared to other compounds in the dataset.Figure 5
The Williams plot of the standardized residuals versus the leverage value.
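For readers who wish to reproduce such a leverage screen, a minimal sketch follows. This is not the authors' code: the descriptor matrix, the residual scaling, and the common 3(p+1)/n warning-leverage rule are assumptions drawn from standard QSAR practice (the h* = 0.60 above would come from the fitted model itself).

```python
import numpy as np

def williams_plot_stats(X, y, y_pred):
    """Leverages and standardized residuals for a Williams plot.

    X: (n, p) descriptor matrix of the modeled compounds (sketch: no intercept column).
    """
    n, p = X.shape
    # Hat-matrix diagonal: h_ii = x_i (X^T X)^{-1} x_i^T
    G = np.linalg.inv(X.T @ X)
    leverages = np.einsum('ij,jk,ik->i', X, G, X)
    residuals = y - y_pred
    # Approximate scaling by the residual standard deviation.
    std_res = residuals / residuals.std(ddof=p)
    h_star = 3 * (p + 1) / n   # common warning-leverage threshold
    return leverages, std_res, h_star

# Synthetic demonstration: compounds with leverage > h* are structurally
# influential; |standardized residual| > 2 flags response outliers.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=30)
y_hat = X @ np.linalg.lstsq(X, y, rcond=None)[0]
lev, sr, h_star = williams_plot_stats(X, y, y_hat)
print(h_star, lev.max(), abs(sr).max())
```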
## 3.2. Molecular Docking
A molecular docking study was carried out between the target (Mtb CYP121) and the 1,2,4-Triazole derivatives. All the compounds were found to inhibit the receptor by occupying the active sites of the target protein. The binding affinity values for all the compounds range from −5.1 to −14.6 kcal/mol, as reported in Table 10. However, four ligands (compounds 7, 8, 13, and 14) have higher binding scores, ranging from −10.3 to −14.6 kcal/mol, greater than those of their co-ligands.

Table 10
Binding affinity of 1,2,4-Triazole derivatives with M. tuberculosis target (Mtb CYP121).
| Ligand | Target | Binding affinity (kcal/mol) |
|---|---|---|
| 1 | Mtb CYP121 | −7.3 |
| 2 | Mtb CYP121 | −7.8 |
| 3 | Mtb CYP121 | −8.5 |
| 4 | Mtb CYP121 | −9.1 |
| 5 | Mtb CYP121 | −9.6 |
| 6 | Mtb CYP121 | −9.8 |
| 7 | Mtb CYP121 | −10.3 |
| 8 | Mtb CYP121 | −14.6 |
| 9 | Mtb CYP121 | −9.6 |
| 10 | Mtb CYP121 | −9.6 |
| 12 | Mtb CYP121 | −9.2 |
| 13 | Mtb CYP121 | −11.2 |
| 14 | Mtb CYP121 | −11.2 |
| 15 | Mtb CYP121 | −5.1 |
| 11 | Mtb CYP121 | −9.9 |
| 16 | Mtb CYP121 | −5.3 |
| 17 | Mtb CYP121 | −6.1 |
| 18 | Mtb CYP121 | −7.9 |
| 19 | Mtb CYP121 | −7 |
| 20 | Mtb CYP121 | −7.8 |
| 21 | Mtb CYP121 | −5.5 |
| 22 | Mtb CYP121 | −5.7 |
| 23 | Mtb CYP121 | −5.5 |
| 24 | Mtb CYP121 | −6.9 |
| 25 | Mtb CYP121 | −6.6 |
| 26 | Mtb CYP121 | −6.7 |
| 27 | Mtb CYP121 | −5.4 |
| 28 | Mtb CYP121 | −5.1 |
| 29 | Mtb CYP121 | −5.4 |
| 30 | Mtb CYP121 | −7.5 |
| 31 | Mtb CYP121 | −7.3 |
| 32 | Mtb CYP121 | −6.6 |
| 33 | Mtb CYP121 | −5.6 |
| 34 | Mtb CYP121 | −6 |
| 35 | Mtb CYP121 | −6.3 |
| 36 | Mtb CYP121 | −7.8 |
| 37 | Mtb CYP121 | −7.8 |
| 38 | Mtb CYP121 | −8.4 |
| 39 | Mtb CYP121 | −5.7 |
| 40 | Mtb CYP121 | −6.3 |
| 41 | Mtb CYP121 | −6.3 |
| 42 | Mtb CYP121 | −5.9 |
| 43 | Mtb CYP121 | −5.6 |
| 44 | Mtb CYP121 | −5.5 |
| 45 | Mtb CYP121 | −6.2 |
| 46 | Mtb CYP121 | −5.7 |
| 47 | Mtb CYP121 | −5.9 |
| 48 | Mtb CYP121 | −5.7 |
| 49 | Mtb CYP121 | −5.2 |
| 50 | Mtb CYP121 | −7.8 |

These four ligands were visualized and analyzed in Discovery Studio Visualizer, as shown in Figure 6. The binding affinities, hydrogen bonds, and hydrophobic interactions of ligands 7, 8, 13, and 14 with the M. tuberculosis target (Mtb CYP121) are reported in Table 11. Ligand 7 formed hydrophobic interactions with VAL83, PRO285, VAL78, and ALA167 of the target site; in addition, it formed a hydrogen bond (2.16131 Å) with GLN385. Ligand 8 made three hydrogen bonds (2.82894, 2.34089, and 2.47314 Å) with ALA337, HIS343, and ALA233 of the target, while hydrophobic interactions were observed with PHE280, ALA233, CYS345, MET86, ALA233, and PRO346. Ligand 13 made two hydrogen bonds (2.34218 and 3.0328 Å) with ASN74 and GLN385, while VAL78, ALA233, PRO285, ALA233, PRO346, and ALA167 formed the hydrophobic interactions. Ligand 14 formed hydrophobic interactions with LEU164, VAL228, VAL78, ALA233, PRO285, ALA233, PRO346, ALA167, and ALA233, while two hydrogen bonds (2.36479 and 3.03627 Å) were formed with ASN74 and GLN385 of the target.

Table 11
Binding affinity, hydrogen bonds, and hydrophobic interactions of ligands 7, 8, 13, and 14 with M. tuberculosis target (Mtb CYP121).
| Ligand | Binding affinity (kcal/mol) | Target | Hydrogen bonds: amino acid (bond length, Å) | Hydrophobic interactions |
|---|---|---|---|---|
| 7 | −10.3 | Mtb CYP121 | GLN385 (2.16131) | VAL83, PRO285, VAL78, VAL78, ALA167 |
| 8 | −14.6 | Mtb CYP121 | ALA337 (2.82894), HIS343 (2.34089), ALA233 (2.47314) | PHE280, ALA233, CYS345, MET86, ALA233, PRO346 |
| 13 | −11.2 | Mtb CYP121 | ASN74 (2.34218), GLN385 (3.0328) | VAL78, ALA233, PRO285, ALA233, PRO346, ALA167 |
| 14 | −11.2 | Mtb CYP121 | ASN74 (2.36479), GLN385 (3.03627) | LEU164, VAL228, VAL78, ALA233, PRO285, ALA233, PRO346, ALA167, ALA233 |

Figure 6
(7a) and (7b) show the 3D and 2D interactions between Mtb CYP121 and ligand 7. (8a) and (8b) show the 3D and 2D interactions between Mtb CYP121 and ligand 8. (13a) and (13b) show the 3D and 2D interactions between Mtb CYP121 and ligand 13. (14a) and (14b) show the 3D and 2D interactions between Mtb CYP121 and ligand 14.
## 4. Conclusion
The model with 2D and 3D descriptors is of high quality and presents a satisfactory correlation with the anti-Mycobacterium tuberculosis activity; the combination of 2D and 3D descriptors produces a better model for predicting the anti-Mycobacterium tuberculosis activities of these compounds. The QSAR model generated met the criteria for the minimum recommended values of the validation parameters for a generally acceptable QSAR model. The molecular docking analysis showed that nearly all the 1,2,4-Triazole derivatives potentially inhibit Mtb CYP121. In particular, compounds 7, 8, 13, and 14 have higher binding scores, ranging from −10.3 to −14.6 kcal/mol. These four compounds docked deeply within the binding pocket region of Mtb CYP121, forming hydrogen bonds and hydrophobic interactions with amino acids of the target. The QSAR model generated provides a valuable approach for ligand-based design, while the molecular docking studies provide a valuable approach for structure-based design. These two approaches will be of great help to pharmaceutical and medicinal chemists in designing and synthesizing new anti-Mycobacterium tuberculosis compounds.
---
*Source: 1018694-2018-05-10.xml* | 2018 |
# Fracture Toughness of Vapor Grown Carbon Nanofiber-Reinforced Polyethylene Composites
**Authors:** A. R. Adhikari; E. Partida; T. W. Petty; R. Jones; K. Lozano; C. Guerrero
**Journal:** Journal of Nanomaterials
(2009)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2009/101870
---
## Abstract
The impact fracture behavior of a vapor grown carbon nanofiber (VGCNF) reinforced high-density polyethylene (PE) composite was evaluated. The samples consisting of pure PE and composites with 10 wt% and 20 wt% of VGCNFs were prepared by a combination of hot-pressing and extrusion methods. Extrusion was used to produce samples with substantially different shear histories. The fracture behavior of these samples was analyzed using the essential work of fracture (EWF) approach. The results showed an increase of 292% in the essential work of fracture for the loading of 10 wt%. Further increasing fiber loading to 20 wt% caused the essential work of fracture to increase only 193% with respect to the unmodified material. Evaluation of the fracture surface morphology indicated that the fibril frequency and microvoid size within the various fiber loadings depended strongly on processing conditions.
---
## Body
## 1. Introduction
Polyethylene is characterized by a great capacity to absorb energy despite its low modulus of elasticity [1]. Because of its high toughness, it offers promise as a matrix for highly damage-tolerant composites. However, developing adequate adhesion between PE and high-performance reinforcements has been a challenge. Nanofibers show great promise for the modification of existing materials due to the combination of their small size and surface compatibility. Their high thermal, electrical, and mechanical properties offer the prospect of substantial improvements in polymeric systems. This prospect is further enhanced by the strong natural adhesion of nanofibers to many thermoplastic matrices [2–6].

Prior studies have shown that VGCNFs interact strongly with polymeric matrices and enhance several properties. PE/VGCNF composites have been found to produce a simultaneous increase in both the storage (elastic) and loss (viscous dissipation) moduli, as measured by dynamic mechanical analysis [7]. Tensile tests of these composites show a remarkable increase in elongation to failure with increased shear history, and an apparently new mechanism of void stabilization permitting the formation of widespread stable subcritical voids in the deformed polymer [8]. The current work extends the study of these materials to the regime of dynamic impact behavior.

The highly nonlinear nature of the fracture process in PE requires the use of nonlinear fracture analysis. The most widely used methods are the J-integral [9] and the essential work of fracture (EWF) [10, 11]. In recent years, the EWF concept has been broadly applied to the evaluation of fracture toughness in ductile polymers and their composites due to its greater simplicity compared to the J-integral measurement [12–15]. The theory was initially proposed by Broberg [16] in 1968 and further developed by Mai and others [11, 17]. It poses that the total energy necessary to fracture a cracked material ($W_f$) contains two components, the essential work of fracture ($W_e$) and the nonessential or plastic work ($W_p$) (see Figure 1):

$$W_f = W_e + W_p. \tag{1}$$

$W_e$ is the essential work required to rupture the material in its inner fracture process zone. For a given material thickness ($t$), $W_e$ is proportional to the ligament length ($\ell$). $W_p$ is the energy consumed by deformation mechanisms in the outer plastic zone and is a volume energy proportional to $\ell^2$.

Figure 1
Schematic diagram of flow zones during fracture.

The expression in (1) can be expanded to

$$W_f = w_e \ell t + \beta w_p \ell^2 t. \tag{2}$$

In terms of specific values, the total specific work of fracture $w_f$ is given by

$$w_f = \frac{W_f}{\ell t} = w_e + \beta w_p \ell, \tag{3}$$

where $w_e$ and $w_p$ are the specific essential work of fracture and the specific nonessential (plastic) work, respectively, and $\beta$ is a form factor for the plastic zone. Equation (3) yields $w_e$ and $\beta w_p$ as the intercept and slope of the linear regression of $w_f$ against $\ell$. A representative plot of $w_f$ as a function of $\ell$ is shown in Figure 2.

Figure 2
A representative plot of the energy of impact ($w_f$) versus ligament length ($\ell$).

In this study, the energy to fracture under impact was measured using the essential work of fracture method. The fracture surface morphology was also evaluated to determine changes in the fracture mechanism caused by nanofiber reinforcement.
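Equation (3) reduces the EWF analysis to a straight-line fit. As a minimal illustrative sketch (not the authors' data reduction; the ligament lengths and specific work values below are invented), $w_e$ and $\beta w_p$ can be recovered as the intercept and slope of that fit, together with the $r^2$ of the regression:

```python
import numpy as np

def ewf_parameters(ligament_mm, wf_kj_per_m2):
    """Fit w_f = w_e + (beta * w_p) * l; return intercept w_e, slope beta*w_p, and r^2."""
    slope, intercept = np.polyfit(ligament_mm, wf_kj_per_m2, 1)
    pred = intercept + slope * np.asarray(ligament_mm)
    ss_res = np.sum((wf_kj_per_m2 - pred) ** 2)
    ss_tot = np.sum((wf_kj_per_m2 - np.mean(wf_kj_per_m2)) ** 2)
    return intercept, slope, 1.0 - ss_res / ss_tot

# Invented data: specific work of fracture measured at several ligament lengths.
l = np.array([4.0, 5.5, 7.0, 8.5, 10.0])    # ligament length, mm
wf = np.array([1.1, 1.4, 1.65, 1.95, 2.2])  # specific work of fracture, kJ/m^2
we, beta_wp, r2 = ewf_parameters(l, wf)
print(f"w_e = {we:.3f} kJ/m^2, beta*w_p = {beta_wp:.4f} kJ/m^2/mm, r^2 = {r2:.3f}")
```

In practice one point per ligament length (averaged over replicate specimens) is fitted, and EWF protocols commonly restrict the fit to a validity window of ligament lengths, several times the thickness at the lower end and well below the specimen width at the upper end.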
## 2. Experimental
### 2.1. Materials
The systems evaluated were based on high-density polyethylene (Marflex PE CL-L-R-240370) provided by Chevron-Philips Chemical Co. and vapor grown carbon nanofibers (Pyrograf III, VGCNFs) provided by Applied Sciences. The fibers were purified before use according to procedures developed and described elsewhere [18]: the nanofibers were refluxed in dichloromethane and then rinsed with deionized water. The purpose of purification is to remove amorphous carbon and untangle nested fiber bundles. It also serves to lightly functionalize the surface of the fibers, making them more compatible with some polymer matrices, though this may not be critical to performance in PE.
### 2.2. Processing
After purification, the fibers were introduced into the PE matrix using a Haake Polylab 600 mixer, which subjects the composite to high shear stresses. Mixing was done at a nominal temperature of 190 °C for a total of 16 minutes at varied mixing speeds: a 2-minute mix of pure PE at 90 rpm was followed by fiber addition, 11 minutes at 30 rpm, and 3 minutes at 60 rpm. The length of each processing step was determined by the time necessary to produce a constant shearing torque measured at the mix head. The resulting material was hot pressed (Carver Hot-Press model 3912) and then extruded using a Haake Rheomixer 600 with an extrusion screw speed of 40 rpm and a die temperature of 190 °C.

After extrusion, the film was stretched by a Haake Tape Postex 600 to create a tape. To introduce varying shear stress histories generated by the extensional flow, drawing was done at 10 and 20 rpm. The resulting tape was pelletized once more and then molded to the final specimen size and thickness in a hydraulic heated press (PHI model 100-1a). The preparation combinations are summarized in Table 1. Rectangular 63.5 mm × 12.7 mm × 3.2 mm bars, in a single-edge-notched 3-point bending configuration, were used for impact testing.

Table 1
Process flow used for neat PE and PE/VGCNF composites preparation. MP samples were mixed and pressed; MPEP 10 RPM samples were mixed, pressed, extruded at 10 rpm, and pressed again; MPEP 20 RPM samples were mixed, pressed, extruded at 20 rpm, and pressed again.
| Sample | Process type | Mixed | Hot pressed | Extruded at 10 rpm | Extruded at 20 rpm | Mold pressed |
|---|---|---|---|---|---|---|
| Pure PE | MP | × | × | – | – | × |
| Pure PE | MPEP 10 RPM | × | × | × | – | × |
| Pure PE | MPEP 20 RPM | × | × | – | × | × |
| PE/VGCNF (10 wt%) | MP | × | × | – | – | × |
| PE/VGCNF (10 wt%) | MPEP 10 RPM | × | × | × | – | × |
| PE/VGCNF (10 wt%) | MPEP 20 RPM | × | × | – | × | × |
| PE/VGCNF (20 wt%) | MP | × | × | – | – | × |
| PE/VGCNF (20 wt%) | MPEP 10 RPM | × | × | × | – | × |
| PE/VGCNF (20 wt%) | MPEP 20 RPM | × | × | – | × | × |
### 2.3. Characterizations
Tests were done at a temperature of 22 °C in a Dynatup 830I drop weight system with a span length of 50 mm and an impact velocity of 2.5 m/s. Eight specimens were prepared and tested for each treatment combination. The exact length of the ligament was measured by optical microscopy on an Olympus T4560 imaging analyzer. The notch was cut with a saw to a 1 mm gap and the crack tip was sharpened with a fresh razor blade. It was not necessary to immerse the specimens in liquid nitrogen, because they presented a brittle fracture. The fracture morphology was studied using scanning electron microscopy (SEM).
## 3. Results and Discussion
### 3.1. Fracture Analysis
Impact results for PE and its composites were compared using single-edge-notched specimens. A representative plot of the total energy of fracture versus ligament length is shown in Figure 2. It clearly shows the effect of increased flow stress with nanofiber addition. The higher flow stress results in increased constraint in the ligament and a reduction in gross viscoplastic flow there. The reduced gross plasticity lowers the total energy of fracture relative to the unreinforced material. Table 2 summarizes the values of $w_e$, $\beta w_p$, and total energy; the $w_e$ values for all samples are plotted in Figure 3. With increasing shear history, the unmodified PE shows an increase in $w_e$ (local fracture energy) but an overall reduction in fracture energy. The fiber-modified systems show a clear increase in $w_e$ with increasing shear history, while the total fracture energy falls relative to the unmodified system. In other words, increased processing improves the local material toughness in the process zone, while fiber addition reduces the overall toughness by constraining gross plasticity in the specimen.

Table 2
EWF parameters for neat PE and PE/VGCNF composites.
| Sample | Process | $w_e$ (kJ/m²) | $\beta w_p$ (kJ/m²) | Ligament (mm) | $r^2$ | Total energy (kJ/m²) |
|---|---|---|---|---|---|---|
| PE | MP | 0.4978 | 1.606 | 8.890 | 0.993 | 2.0799 |
| PE | MPEP at 10 RPM | 0.7058 | 1.2122 | 9.017 | 0.995 | 1.918 |
| PE | MPEP at 20 RPM | 0.9914 | 1.1704 | 8.128 | 0.991 | 1.8828 |
| PE/VGCNFs (10 wt%) | MP | 1.0156 | 0.2262 | 7.925 | 0.984 | 1.2418 |
| PE/VGCNFs (10 wt%) | MPEP at 10 RPM | 1.6165 | 0.1216 | 8.255 | 0.980 | 1.7381 |
| PE/VGCNFs (10 wt%) | MPEP at 20 RPM | 1.8531 | 0.1471 | 8.001 | 0.992 | 2.0002 |
| PE/VGCNFs (20 wt%) | MP | 0.7891 | 0.1588 | 8.357 | 0.980 | 0.9479 |
| PE/VGCNFs (20 wt%) | MPEP at 10 RPM | 1.2015 | 0.0446 | 8.966 | 0.915 | 1.2461 |
| PE/VGCNFs (20 wt%) | MPEP at 20 RPM | 1.3861 | 0.1012 | 8.636 | 0.977 | 1.4873 |

Figure 3
Comparison of % change of $w_e$ with process parameters.

The increase in toughness is higher for the 10 wt% PE/VGCNF composite than for the 20 wt% fiber system, which could indicate a countervailing effect of fiber loading. The essential work of fracture in the fracture zone corresponds to the energy required to debond PE from the VGCNFs and to deform the polymer matrix [19]. Therefore, the likely mechanism for the improvement in toughness is improved fiber/polymer adhesion and better dispersion of the fibers. These improvements result in greater void growth through stable fibril formation (see the morphology section). At the same time, the plastic work $w_p$ generally falls, suggesting greater localization of the fracture process and a reduction of gross plasticity. The smaller increase in toughness observed in the more highly fiber-loaded system indicates that the toughening phenomenon is primarily matrix driven and is not due directly to fiber breakage or other fiber-driven energy-consuming processes. Eventually, the higher fiber loading (20 wt%) results in a reduction in the work of fracture through restriction of ductile matrix deformation (caused by constraint as well as a reduction in the volume of polymer available). This leads to a diminution of the specific plastic work of the material, similar to the behavior seen in traditional short-glass-fiber reinforced polymers [20, 21].

The substantial improvement seen in process zone energy dissipation with the addition of nanofibers in an impact environment suggests that these materials should show substantial improvement in resistance to slow stable crack growth and stress corrosion cracking, which are quasistatic fracture processes that do not involve significant gross plasticity. That will be the subject of future research.
### 3.2. Morphology
Changes in the fracture process are usually reflected in the morphology of the fracture surface; therefore, specimens were evaluated using the scanning electron microscope. It should be noted that the high level of fibrillation seen in these impact test specimens, especially in the drawn materials, is more consistent with quasistatic fracture in neat polyethylene systems than with impact. Nanofiber loading makes possible energy-dissipating fracture processes normally prevented by high crack velocities.

Representative micrographs are shown in Figure 4 for PE/VGCNF composites with fiber contents of 10 and 20 wt% and three processing levels. The fibers were clearly well dispersed in both systems. The number and length of fibrils, along with the voids developed within the fracture process zone, increase dramatically after the shearing process (10 rpm and 20 rpm) for the 10 wt% fiber loading. However, the development of fibrils in the 20 wt% composite at higher shearing (20 rpm) is not similar to that of the 10 wt% composite, which can be attributed to interactions between the nanofibers at the higher loading. The observed fracture surface morphology indicates that the higher values of the local fracture energy $w_e$ of the composites are a result of enhanced fibrillation and the formation of large stable voids through coalescence. The fibers act to stabilize and increase fibrillation, thus enhancing the toughness of the matrix in the local process zone.

Fracture surface SEM micrographs of 10% (left) and 20% (right) PE/VGCNF composite: (a) undrawn, (b) 10 rpm drawn, and (c) 20 rpm drawn.
## 4. Conclusion
The addition of carbon nanofibers to polyethylene improves the ability of the polymer to form large fibril/void structures even under conditions of process zone constraint due to impact loading. This is reflected in an increase in local fracture toughness measured by the essential work of fracture. Further, this local toughness increases with increasing shear history during processing. The strength of this interaction may arise from one or both of two sources. The extended shear and thermal history may cause molecular scission as evidenced by the reduced total toughness of the unmodified system. The resulting free radicals then may bond to the nanofibers, resulting in an extremely strong matrix/fiber interaction. It is also likely that because of their similar structures, polyethylene and VGCNFs will form strong bonds when polymer chains are stretched out along the fiber surfaces. Either mechanism results in a level of interaction which produces properties and fracture processes not heretofore observed in polymer composites.
---
*Source: 101870-2009-08-13.xml* | 2009 |
# Effect of Casting Parameters on the Microstructural and Mechanical Behavior of Magnesium AZ31-B Alloy Strips Cast on a Single Belt Casting Simulator
**Authors:** Ahmad Changizi; Mamoun Medraj; Mihaiela Isac
**Journal:** Advances in Materials Science and Engineering
(2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/101872
---
## Abstract
Strips of magnesium alloy AZ31-B were cast on a simulator of a horizontal single belt caster incorporating a moving mold system. Mixtures of CO2 and sulfur hexafluoride (SF6) gases were used as a protective atmosphere during melting and casting. The castability of the AZ31-B strips was investigated for a smooth, low carbon steel substrate and six copper substrates with various textures and roughnesses. Graphite powder was used to coat the substrates. The correlation between strip thickness and heat flux was investigated. It was found that the heat flux from the forming strip to the copper substrate was higher than that to the steel substrate, while coated substrates registered lower heat fluxes than uncoated substrates. The highest heat flux from the strip was recorded for casting on macrotextured copper substrates with 0.15 mm grooves. As the thickness of the strip decreased, the net heat flux decreased. As the heat flux increased, the grain sizes of the strips were reduced, the secondary dendrite arm spacing (SDAS) decreased, and the mechanical properties improved. The black layers which formed on the strips' surfaces were analyzed and identified as nanoscale MgO particles; such nanoscale particles act as light traps and therefore appear black.
---
## Body
## 1. Introduction
Magnesium is the lightest structural metal in common use [1], and supplies of magnesium ores are virtually inexhaustible. Magnesium alloys normally have very good castability and machinability, as well as excellent specific strength and stiffness [2]. However, magnesium alloys present some difficulty during rolling due to their hexagonal close-packed (hcp) lattice structure [3]. Meanwhile, a fine grain structure increases strength and ductility by promoting the operation of nonbasal slip systems and limiting twinning in magnesium alloys [4]. Strip casting of magnesium has become important in recent years. To reduce the cost of thin sheets of magnesium alloys, strip casting technologies such as horizontal single belt casting (HSBC), twin roll casting (TRC), and twin belt casting (TBC) have been developed [3]. With a strip casting process, magnesium alloy strips can typically be produced in thicknesses of 1–10 mm [1]. Direct strip casting, or HSBC, as a near-net-shape casting process, has potential use in processing aluminum, copper, zinc, and lead alloys directly into sheet products.

Generally, most metals and alloys are amenable to direct casting into plates, strips, or ribbons. However, a metallurgical understanding of these materials is needed to determine their suitability for casting into thin-gauge strips. Technically, the evaluation of a process must take into account the melting point of the alloy, the freezing range, the oxidation resistance in both the liquid and solid states, the heat transfer behavior, the fluidity of the melt, and the number and type of liquid-to-solid and solid-state transformations that may occur. There is a particular emphasis on magnesium alloys, as these are the major candidates for large-scale production by this processing route, given their poor hot rolling capabilities caused by the hexagonal close-packed structure [5, 6]. The present study was carried out to investigate the possibility of directly casting magnesium strip products on a horizontal single belt caster (HSBC). The alloy studied was the AZ31-B magnesium alloy.
## 2. Experimental Procedure
### 2.1. Raw Materials and Melting Unit
Commercial magnesium alloy AZ31 grade B bar ingots obtained from Magnesium Electron Co. were used as the raw material in the present experiments. The raw bar ingots were cut into smaller pieces and prepared for melting. Graphite powder, comprising particles of 0.5–0.6 µm, obtained from Asbury Carbons Co., was used to coat the casting substrates. The graphite was of a synthetic variety, and the particles were in general flake shaped.
### 2.2. Strip Casting Simulator
A schematic overview of the strip casting simulator is shown in Figure 1. The equipment includes the following: a containment mould, a substrate onto which the melt can be poured, a tundish, a motor that drags the substrate at preselected casting speeds, and a data acquisition system. The simulator can be set to produce strips of w(80) × l(1100) × t(1–5) mm dimensions. The casting substrates can be coated with different materials, such as the graphite used in the present research. In order to measure local heat fluxes, two K-type thermocouples were placed in each segment of the substrate; one was set near the surface and the other was placed slightly below the first thermocouple.

Figure 1
Schematic of the strip casting single belt simulator.
### 2.3. Substrate Specification
Two types of substrate materials were used in the present experiments: steel with a polished surface and pure copper with six differently textured inserts/segments. Figure 2 illustrates the schematics along with photos of the six copper substrate segments (2 × 2 inch). Table 1 provides the dimensional specifications of the different areas of the copper chill substrate. Each segment had a different macrotexture.

Table 1
Dimensional specifications of different areas of the copper substrate presented in Figure 2.

| Segment | a (µm) | x (µm) | y (µm) | z (mm) |
|---|---|---|---|---|
| I | 0 | 0 | 0 | 0 |
| II | 64.4 | 477.7 | 90 | 7 |
| III | 78.5 | 825 | 150 | 8 |
| IV | 317 | 1103 | 240 | 320 |
| V | 488 | 998 | 300 | 0.99 |
| VI | 451 | 1074 | 600 | 1.07 |

Schematic and photos of different areas of the copper substrate.
### 2.4. Melting and Pouring of Molten Metal
The melt was prepared in a closed steel crucible, in which the atmosphere above the melt was protected from air ingress at all steps of the casting process, so as to avoid any oxidation or possible burning. The protective atmosphere comprised a mixture of 0.5% sulfur hexafluoride (SF6) in a carrier gas of 99.5% carbon dioxide (CO2). The molten metal was heated in an induction furnace to a temperature of 710°C, above the pouring temperature of 700°C, in order to provide sufficient superheat for skimming and metal transfer. The molten metal was poured directly from the crucible into the tundish to avoid any large temperature drops. Before pouring, the tundish was preheated to 150°C to dry its refractory walls; at the same time, the substrate was preheated to 30°C to avoid any moisture. Transferring and pouring the molten metal were done manually. The substrate was propelled by a loaded spring at a constant speed of 0.5 m/s.
### 2.5. Calculating Heat Flux and Applying IHCP to a Horizontal Single Belt Casting Simulator
Beck put forward a nonlinear estimation method to deal with phase changes and temperature dependent thermal properties of the solidification process used for solving the IHCP [9]. To treat experimental data, statistical principles and the concept of amplitudes temperatures are applied to the thermal capacity and heat conduction of the substrate during subsurface temperature measurements. This application is performed using a nonlinear estimation method to solve the delayed and diminished thermal response problems. The heat flux is taken to be a constant or a linear function of time within a given time interval. This is the principle of Beck’s nonlinear estimation technique, according to which the heat flux is then determined for that period according to the following function:
$$F(q)=\sum_{i=1}^{N_{1}}\sum_{j=1}^{N_{2}(N_{3}+1)}\left(T_{ij}-Y_{ij}\right)^{2},\tag{1}$$
where $N_{1}$ is the number of internal temperature-measurement points, excluding those used for the boundary condition; $N_{2}$ is the number of temperature measurements per time interval; $N_{3}$ is the number of future time intervals considered for the heat flux calculation at each time interval; and $T_{ij}$ and $Y_{ij}$ are the calculated and measured temperatures at location i and time instant j, respectively [10].

The IHCP (Inverse Heat Conduction Problem) applied to the interface between the melt and the substrate is shown schematically in Figure 3; the boundary conditions for the governing differential equation (2) are expressed in (3), (4), and (5) [11] as follows:
$$\rho C_{p}\left(\frac{\partial T}{\partial t}\right)=k\frac{\partial^{2}T}{\partial x^{2}},\tag{2}$$

$$-k\left.\frac{\partial T}{\partial x}\right|_{x=0}=q_{l},\quad t>t_{l-1},\tag{3}$$

$$T\left(x_{1},t=t_{l-1}\right)=F\left(x_{1}\right),\tag{4}$$

$$T\left(x_{2},t=t_{l-1}\right)=F\left(x_{2}\right).\tag{5}$$

Figure 3
Schematic of applying IHCP to deduce interfacial heat fluxes for the single belt strip casting simulator.

The corresponding differential equation and boundary conditions for the sensitivity coefficients are [2]
$$\rho C_{p}\frac{\partial\phi}{\partial t}=k\frac{\partial^{2}\phi}{\partial x^{2}},\qquad -k\left.\frac{\partial\phi}{\partial x}\right|_{x=0}=1,\quad t>t_{l-1},$$

$$\phi\left(x_{1},t=t_{l-1}\right)=0,\qquad \phi\left(x_{2},t=t_{l-1}\right)=0,\tag{6}$$
where $\phi$ is a sensitivity coefficient and the subscript l denotes the time interval to which it applies. In this investigation, temperatures were recorded using two thermocouples in each segment of the substrate, connected to an Omega Data Acquisition System. In order to convert the time versus temperature data to time versus heat flux, the necessary IHCP software was developed from first principles by Isac et al. [7].
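To make the sequential logic of (1)–(6) concrete, the following is a minimal numerical sketch of Beck-style flux estimation. It is not the IHCP software of Isac et al. [7]: it assumes a 1-D slab with constant copper properties (Table 3), an explicit finite-difference form of (2)-(3), a single near-surface sensor, and illustrative grid values; all identifiers are hypothetical.

```python
import numpy as np

# Minimal sketch of Beck-style sequential flux estimation for the IHCP of
# (1)-(6). Assumptions (not from the paper): 1-D slab, constant copper
# properties from Table 3, explicit FTCS differencing, one subsurface sensor.
rho, cp, k = 8954.0, 383.0, 386.0        # copper: kg/m^3, J/(kg K), W/(m K)
alpha = k / (rho * cp)                   # thermal diffusivity, m^2/s
nx, dx, dt = 50, 2e-4, 1e-4              # illustrative grid and time step
assert alpha * dt / dx**2 < 0.5          # explicit-scheme stability limit
sensor, n_future = 2, 6                  # sensor node; N3-like future steps

def step(T, q):
    """Advance eq. (2) one time step with surface flux q at x = 0, eq. (3)."""
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    Tn[0] = Tn[1] + q * dx / k           # discrete form of -k dT/dx|_0 = q
    return Tn

def beck_update(T, Y_future):
    """Estimate the flux over the next interval by minimizing F(q) of eq. (1).

    By linearity, T(q) = T(0) + q * phi, where phi solves the sensitivity
    problem (6) (zero initial field, unit surface flux), so the least-squares
    minimizer of F(q) has a closed form.
    """
    T0, phi = T.copy(), np.zeros_like(T)
    resid, sens = [], []
    for m in range(len(Y_future)):
        T0 = step(T0, 0.0)               # response with zero surface flux
        phi = step(phi, 1.0)             # sensitivity: response to unit flux
        resid.append(Y_future[m] - T0[sensor])
        sens.append(phi[sensor])
    resid, sens = np.array(resid), np.array(sens)
    return float(sens @ resid / (sens @ sens))

# Self-check: synthetic sensor readings generated with a constant 3 MW/m^2
# flux are recovered, since the same discrete model produced them.
T = np.full(nx, 30.0)                    # substrate preheated to 30 C
Y, Tm = [], T.copy()
for _ in range(n_future):
    Tm = step(Tm, 3.0e6)
    Y.append(Tm[sensor])
print(beck_update(T, Y))                 # -> ~3.0e6 (machine precision)
```

In the experiments themselves this conversion from subsurface temperature histories to interfacial heat flux was performed by the purpose-built software mentioned above; the sketch only illustrates the structure of the calculation.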
## 3. Results and Discussion
### 3.1. Effect of Substrate Material on Heat Flux
The substrate material is one of the important factors affecting the casting parameters in the strip casting process. Table 2 shows the values of the maximum heat fluxes recorded for the different substrate materials. The casting conditions (speed, superheat, and thickness) were the same in all cases.

Table 2
Value of maximum heat flux for different substrate materials.
| Substrate material | Coating | Ra | Max. heat flux (MW/m²) |
|--------------------|---------|----|------------------------|
| Steel | No | 0 | 1.29 |
| Copper | No | 0 | 5.06 |
| Steel | Yes | 0 | 0.71 |
| Copper | Yes | 0 | 2.85 |

The thermophysical properties of the different substrate materials used in the present experiments are summarized in Table 3.

Table 3
Thermophysical properties of the substrates used [7, 8].
| Substrate material | Cp (kJ/kg·°C) | k (W/m·K) | ρ (kg/m³) | α (cm²/sec) |
|--------------------|---------------|-----------|-----------|-------------|
| Carbon steel | 0.486 | 48 | 7753 | 0.081 |
| Copper | 0.383 | 386 | 8954 | 0.415 |
| Graphite | 0.71 | 24 | 2200 | 0.0024 |

The measured interfacial heat flux is related to the thermal conductivity and thermal diffusivity of the substrate. Hence, the large difference between the thermal conductivity, k, of steel and that of copper is reflected in the value of the heat flux at the mould/melt interface. The bare copper substrate with no graphite coating had the highest cooling capacity and produced the highest heat flux at the mould/melt interface. Because of the poor thermal conductivity of graphite and of steel, the steel substrate coated with a graphite layer had the smallest heat flux at the mould/melt interface.
### 3.2. Effect of the Surface Topography of the Substrate on the Interfacial Heat Flux
In the HSBC casting procedure, the metal/mould interface is not in perfect thermal contact. The localized heat flow through the actual contact points of the metal/mould interface is significantly less than in the case of perfect contact.

Consider the melt hanging between two parallel running peaks a distance of $2\lambda$ apart; the melt sag ($d_{\mathrm{sag}}$) then depends on the melt surface tension (σ) and the metallostatic pressure (ΔP) for a nonwetting substrate. The radius of curvature of the metal, R, is [12]
$$R=\frac{\sigma}{\Delta P}=\frac{\sigma}{\rho g h},\tag{7}$$
where σ is the melt surface tension, ρ is the melt density, g is the gravitational acceleration, and h is the melt height. The melt sag can be calculated as [5, 13]
$$d_{\mathrm{sag}}=R-\sqrt{R^{2}-\lambda^{2}}.\tag{8}$$

However, significant thermal resistance exists at the substrate/melt interface because of trapped air, oxide layers, gaps created by shrinkage of the solidifying shell away from the interface, thermal expansion of the mould, and so forth.
Table 4 presents the measured heat fluxes for the different surface topographies. The maximum heat fluxes were measured in segment III, for both coated and uncoated substrates, while the minimum heat fluxes were measured in segment VI. We can conclude that the substrate topography of segment VI presented an increased thermal resistance compared with that of segment III.

Table 4
Maximum values of heat flux from 3 mm strips of magnesium cast on macrotextured copper substrate.

| Segment no. | Coating | Depth of grooves (mm) | Max. heat flux (MW/m²) |
|-------------|---------|-----------------------|------------------------|
| I | No | 0 | 5.06 |
| II | No | 0.09 | 5.78 |
| III | No | 0.15 | 6.60 |
| IV | No | 0.24 | 5.18 |
| V | No | 0.3 | 4.38 |
| VI | No | 0.6 | 3.35 |
| I | Yes | 0 | 2.85 |
| II | Yes | 0.09 | 3.20 |
| III | Yes | 0.15 | 3.53 |
| IV | Yes | 0.24 | 2.93 |
| V | Yes | 0.3 | 2.58 |
| VI | Yes | 0.6 | 2.50 |

Consequently, the maximum melt-mould surface contact occurred with segment III, and the highest number of air pockets occurred with segment VI.
### 3.3. Effect of Coating of Substrates on Measured Heat Fluxes
Figure 4 illustrates the measured heat fluxes when casting strips of magnesium AZ31-B alloy on substrates of various roughnesses, with and without a graphite coating. The measured heat fluxes on graphite-coated substrates were all lower than those on the equivalent (matching) bare substrates.

Figure 4
Effect of surface topology of substrate on heat flux with and without coating.
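The flux reduction caused by the coating can be quantified directly from the Table 4 entries; the snippet below computes the coated-to-bare ratio for each segment.

```python
# Maximum heat fluxes (MW/m^2) transcribed from Table 4, by segment.
bare   = {"I": 5.06, "II": 5.78, "III": 6.60, "IV": 5.18, "V": 4.38, "VI": 3.35}
coated = {"I": 2.85, "II": 3.20, "III": 3.53, "IV": 2.93, "V": 2.58, "VI": 2.50}

for seg in bare:
    print(f"segment {seg}: coated/bare = {coated[seg] / bare[seg]:.2f}")
# Ratios run from ~0.53 (segment III) to ~0.75 (segment VI): the graphite
# layer cut the peak heat flux by roughly 25-47% on every texture.
```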
### 3.4. Effect of Strip Thickness on Heat Flux
The distance between the nozzle and the substrate was decreased, so as to reduce the thickness of the strips produced from 3 mm to 1 mm. The substrate was coated with a layer of fine graphite to a thickness of 60 μm. Based on Figure 5 and the following equations, it is evident that if one increases the strip thickness, the amount of heat transferred from the strip to the substrate must increase. Essentially, the total heat loss per unit area required to complete the solidification of the liquid metal is
$$\Delta Q_{t}=\rho\,d\left\{C_{p}\left(T_{c}-T_{M}\right)+\Delta H\right\},\tag{9}$$
where ρ is the density of the liquid metal, d is the thickness of the strip produced, $C_{p}$ is the specific heat, $T_{c}$ is the casting temperature, $T_{M}$ is the melting point, and $\Delta H$ is the latent heat of fusion per unit mass. Note that $\rho\,d\,C_{p}(T_{c}-T_{M})$ is the sensible heat of the melt and $\rho\,d\,\Delta H$ is the latent heat component.

Figure 5
Heat flux versus strip thickness and substrate textures.
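As a rough numerical illustration of (9), the snippet below evaluates the heat to be removed per unit area for 3 mm and 1 mm strips. The liquid density, specific heat, liquidus temperature, and latent heat used here are literature-order estimates for AZ31 (they are not given in the paper), so the figures are indicative only.

```python
# Indicative evaluation of eq. (9); all property values below are assumed
# literature-order estimates for AZ31, not data from this paper.
rho = 1590.0       # kg/m^3, liquid density
cp  = 1360.0       # J/(kg K), specific heat of the liquid
T_c = 700.0        # deg C, pouring temperature (Section 2.4)
T_M = 630.0        # deg C, assumed liquidus temperature
dH  = 3.4e5        # J/kg, assumed latent heat of fusion

for d in (3.0e-3, 1.0e-3):                     # strip thickness, m
    dQ = rho * d * (cp * (T_c - T_M) + dH)     # eq. (9), J/m^2
    print(f"d = {d * 1e3:.0f} mm: dQ = {dQ / 1e6:.2f} MJ/m^2")
# -> ~2.08 MJ/m^2 for 3 mm and ~0.69 MJ/m^2 for 1 mm: the thicker strip must
#    reject about three times as much heat, consistent with the trend of
#    Figure 5.
```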
### 3.5. Microstructural Analysis
Figure 6 shows the microstructure of the AZ31-B magnesium alloy close to the strip's top surface when casting 3 mm strip on segment III of the copper substrate, without any graphite coating.

Figure 6
Microstructure of top surface of AZ31-B strip cast on segment III of copper substrate without any graphite coating.

It has been claimed by Rappaz and Gandin [13] that the dendrites developed at the interface with the substrate are not inclined, their growth direction being vertically upwards, given that the direction of the fluid flow directly influences the growth direction of dendrites at the melt/substrate interface. The dendritic growth mechanism in magnesium AZ31-B alloys has been elaborated by Vander [14]. It is concluded that a similar principle governs the dendritic growth in the magnesium AZ31-B alloy investigated in this work. During the solidification of the alloy, a symmetrical solute field is established at the dendrite tip. Accordingly, the solute gradient is uniform on both sides of the dendrite tip during growth. Thus, the main origin of the dendrite growth direction is deemed to be the solute distribution. According to the literature [15, 16], the nonhomogeneous distribution of solute atoms between dendrite arms in a magnesium AZ31-B alloy is mainly due to the fact that solidification takes place over a range of temperatures. In fact, there is insufficient time for atomic diffusion to redistribute the solute, both within the liquid in the vicinity of the solid-liquid interface and within the solid, since cooling occurs so rapidly through the two-phase (L + S) regime. By comparing the microstructures of the bottom surfaces of the strips, it can be concluded that the grain size depends significantly on the heat flux. The results of the microstructural analyses of all samples are summarized in Table 5.

Table 5
Microstructure analyses of strips.
| Depth of groove (mm) | Thickness | Coating | Heat flux (MW/m²) | Grain size, bottom (μm) | Grain size, top (μm) | SDAS, bottom (μm) | SDAS, top (μm) |
|---|---|---|---|---|---|---|---|
| 0.15 | 3 mm | No | 6.6 | 35 | 100 | 5.39 | 6.1 |
| 0.09 | 3 mm | No | 5.775 | 64 | 110 | 7.7 | 6.14 |
| 0.24 | 3 mm | No | 5.175 | 70 | 117 | 8.5 | 7.15 |
| 0 | 3 mm | No | 5.062 | 77 | 121 | 8.6 | 9.2 |
| 0.3 | 3 mm | No | 4.375 | 91 | 136 | 8.9 | 10 |
| 0.6 | 3 mm | No | 3.35 | 92 | 138 | 9.02 | 10.5 |
| 0.15 | 3 mm | Yes | 3.5 | 103 | 110 | 8.5 | 9.1 |
| 0.09 | 3 mm | Yes | 3.2 | 108 | 120 | 10.04 | 11.8 |
| 0.24 | 3 mm | Yes | 2.9 | 110 | 127 | 10.8 | 12.4 |
| 0 | 3 mm | Yes | 2.85 | 115 | 129 | 11.5 | 12.8 |
| 0.3 | 3 mm | Yes | 2.58 | 132 | 137 | 11.58 | 13.5 |
| 0.6 | 3 mm | Yes | 2.5 | 134 | 140 | 11.91 | 13.8 |
| 0.15 | 1 mm | Yes | 2.5 | 108 | 132 | 8.2 | 9.8 |
| 0.09 | 1 mm | Yes | 2.33 | 125 | 135 | 10.5 | 11 |
| 0.24 | 1 mm | Yes | 2.166 | 138 | 142 | 11.7 | 12.4 |
| 0 | 1 mm | Yes | 2.12 | 140 | 150 | 12.2 | 14 |
| 0.3 | 1 mm | Yes | 1.83 | 150 | 162 | 12.4 | 15.6 |
| 0.6 | 1 mm | Yes | 1.72 | 155 | 175 | 13 | 17 |
| 0 (steel) | 3 mm | No | 1.2 | 157 | 225 | 14 | 18 |
| 0 (steel) | 3 mm | Yes | 0.71 | 350 | 470 | 20 | 25 |

As shown in Table 5, the grain size increases with decreasing heat flux and increasing thermal resistance between the substrate and the strip. The grain size at the top surface is larger than that at the bottom surface because of the solidification delay due to the smaller heat fluxes at the top surface. The secondary dendrite arm spacing (SDAS) is found to be closely related to the heat flux and the thermal contact resistance. The SDAS increases when air pockets are trapped at the substrate/melt interface, since these dramatically reduce the heat flux, as explained earlier. The occurrence of these air pockets can subsequently influence the microstructure of the strips. Consequently, the grains tend to grow up from the bottom surface towards the top surface, resulting in a columnar pattern of grain growth. The effect of these air pockets on the grain structure is clearly shown in Figure 7. A similar observation pertaining to the effect of air pocket formation on the strip's microstructure has been reported by Dubé et al. [17].

Figure 7
Directional grain growth because of air pockets at the melt/mould interface, ×200.
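The stated dependence of grain size and SDAS on heat flux can be checked directly against the Table 5 data. The snippet below computes Pearson correlations for the 3 mm, uncoated rows (values transcribed from Table 5).

```python
import numpy as np

# 3 mm uncoated rows of Table 5: maximum heat flux (MW/m^2) versus
# bottom-surface grain size and SDAS (um).
flux  = np.array([6.6, 5.775, 5.175, 5.062, 4.375, 3.35])
grain = np.array([35.0, 64.0, 70.0, 77.0, 91.0, 92.0])
sdas  = np.array([5.39, 7.7, 8.5, 8.6, 8.9, 9.02])

print(np.corrcoef(flux, grain)[0, 1])   # about -0.93
print(np.corrcoef(flux, sdas)[0, 1])    # also strongly negative
# Both correlations are strongly negative: the higher the interfacial heat
# flux, the finer the grains and the smaller the SDAS, as stated above.
```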
### 3.6. Phase Analysis
Scanning Electron Microscopy was used to analyze the chemistry and morphology of the secondary phases present in the strip microstructure, as shown in Figure 8.

Figure 8
SEM image of AZ31-B strip.

Microanalyses using the EDX technique were performed on the intermetallics marked in Figure 8, in order to identify their composition. Figure 9 presents one X-ray spectrum of the analyzed intermetallics.

Figure 9
EDX analyses of point 2 of Figure 8.

The EDX analysis, demonstrated in the associated spectra, indicates that the phases contain Mg, Zn, Al, and Mn. The presence of magnesium in the spectra could originate from either the matrix or the intermetallics. Based on the information from the phase diagram, the matrix is α-Mg, and the particles are a Mg-Al-Zn phase (most likely (Al,Zn)49Mg32) and Al-Mn intermetallics (possibly a mixture of Al11Mn4, Al8Mn5, Al9Mn11, and β-Mn(Al)). However, according to the investigation by Cao et al. [18], the α-Mg is a solid solution of Mg-Al-Zn-Mn.
### 3.7. Analysis of Mechanical Properties of AZ31-B Strips
Figure 10 and Table 6 summarize the mechanical properties of the cast AZ31-B strips. The tensile and yield strengths were calculated using the relations TS (MPa) = 3.4 × BHN and $H=3\sigma_{y}$. Local Vickers hardness was measured on the cross section of the strips. There was no significant difference between hardness values taken at the top and the bottom surfaces of the strips.

Table 6
Mechanical properties of AZ31-B strips.
| Substrate no. | Depth of groove (mm) | Thickness | Coating | Heat flux (MW/m²) | Rc (W/m²·°C × 10⁻⁴) | HV | YS (MPa) | TS (MPa) |
|---|---|---|---|---|---|---|---|---|
| III (c) | 0.15 | 3 mm | No | 6.6 | 4.545 | 62 | 147 | 166.6 |
| II (b) | 0.09 | 3 mm | No | 5.77 | 5.195 | 61 | 144 | 163.2 |
| IV (d) | 0.24 | 3 mm | No | 5.17 | 5.797 | 60 | 144 | 163.2 |
| I (a) | 0 | 3 mm | No | 5.06 | 5.927 | 58 | 141 | 159.8 |
| V (e) | 0.3 | 3 mm | No | 4.37 | 6.857 | 56 | 138 | 156.4 |
| VI (f) | 0.6 | 3 mm | No | 3.35 | 8.955 | 54.5 | 135 | 153 |
| III (c) | 0.15 | 3 mm | Yes | 3.5 | 8.571 | 54.9 | 135 | 153 |
| II (b) | 0.09 | 3 mm | Yes | 3.2 | 9.375 | 54.2 | 135 | 153 |
| IV (d) | 0.24 | 3 mm | Yes | 2.9 | 10.345 | 53.1 | 132 | 149.6 |
| I (a) | 0 | 3 mm | Yes | 2.85 | 10.526 | 50.7 | 141 | 159.8 |
| V (e) | 0.3 | 3 mm | Yes | 2.5 | 11.628 | 50.2 | 141 | 159.8 |
| VI (f) | 0.6 | 3 mm | Yes | 2.5 | 12.000 | 50 | 141 | 159.8 |
| III (c) | 0.15 | 1 mm | Yes | 2.5 | 4.000 | 49.8 | 126 | 142.8 |
| II (b) | 0.09 | 1 mm | Yes | 2.33 | 4.292 | 48.4 | 126 | 142.8 |
| IV (d) | 0.24 | 1 mm | Yes | 2.16 | 4.617 | 48.2 | 126 | 142.8 |
| I (a) | 0 | 1 mm | Yes | 2.12 | 4.717 | 48.1 | 126 | 142.8 |
| V (e) | 0.3 | 1 mm | Yes | 1.83 | 5.464 | 48 | 126 | 142.8 |
| VI (f) | 0.6 | 1 mm | Yes | 1.72 | 5.814 | 47.9 | 123 | 139.4 |
| I | 0 (steel) | 3 mm | No | 1.2 | 25.000 | 47.5 | 123 | 139.4 |
| I | 0 (steel) | 3 mm | Yes | 0.71 | 42.254 | 47 | 123 | 139.4 |

Figure 10
The effect of heat flux on the hardness of AZ31-B strips.

Based on the experimental results presented above, as the heat flux increased, the hardness and the other mechanical properties increased. In the case of the 1 mm thick strip, because of the lower weight pressure of the strip on the substrate, the thickness of the air gap between the strip and the substrate increased, and the heat transfer rate from the strip to the substrate decreased. Because of the lower global heat fluxes, represented as the area under the heat flux-time curve, the strips of 1 mm thickness had large grain sizes and lower mechanical properties. This issue was presented in detail in Section 3.5.
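The two hardness-to-strength relations quoted above are simple enough to encode directly, as in the sketch below. The paper does not state how the Vickers readings were converted to Brinell numbers, so the sample inputs are chosen only to reproduce entries of Table 6 and are otherwise illustrative.

```python
# The two relations of Section 3.7, encoded as stated; sample inputs are
# illustrative (the authors' HV -> BHN conversion is not given).
def tensile_strength_MPa(bhn: float) -> float:
    """TS (MPa) = 3.4 x BHN."""
    return 3.4 * bhn

def yield_strength_MPa(hardness_MPa: float) -> float:
    """From H = 3 * sigma_y, i.e. sigma_y = H / 3."""
    return hardness_MPa / 3.0

print(tensile_strength_MPa(49.0))    # -> 166.6, the TS of the III (c) row
print(yield_strength_MPa(441.0))     # -> 147.0, the YS of the same row
```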
### 3.8. Evaluation of the Black Layer on AZ31-B Strip Surface When Cast in Air
To protect magnesium alloys from oxidation during strip casting, a mixture of CO2 and SF6 is usually used. If it is not, a black film forms on the surface of the strip, as shown in Figure 11. This layer is strongly adherent and severely compromises the surface quality of the strip for commercial purposes.

Figure 11
Black layer of coating on strip cast without protective atmosphere.

XRD tests were performed in order to analyze the nature of the black layer, as shown in Figure 12.

Figure 12
XRD pattern of black layer.

Figures 13(a) and 13(b), respectively, present an SEM image and an EDX spectrum of the black layer formed on the strip surface when cast in air.

Figure 13
(a) SEM image of the black layer. (b) EDX analysis of the black layer.

It is well known that, under normal conditions, MgO is white and MgAl2O4 is colorless. According to the XRD and EDX patterns, the major component of the layer on the strip is definitely MgO. As shown in the SEM image, the particle size of the MgO is around 0.1 μm. According to the literature [19–22], if the particle size of an oxide is so small that it acts as a light trap, it will appear black. It should be noted that the Mg peaks in the XRD and EDX patterns emanated from the base metal, not from the thin black layer of MgO particles formed on top of the cast strip. Clearly, it will be necessary to use a protective atmosphere in the commercial strip casting of magnesium alloys.
## 4. Conclusions
In the present research, the effects of casting parameters on the properties of strips of AZ31-B alloy have been investigated. The effect of heat flux on the microstructure and mechanical properties was studied as well. The following conclusions can be drawn.

(1) As the substrate roughness increased beyond 0.15 mm for macroscopically grooved substrates, the thermal resistance increased, while the heat fluxes decreased.
(2) As the thickness of the strip increased, the heat flux into the substrates increased.
(3) As the interfacial heat flux increased, the grain size and the SDAS across the strip decreased.
(4) No significant differences were recorded between hardness values taken at the top and the bottom surfaces of the strips.
(5) As the heat fluxes increased and the grain sizes decreased, the mechanical properties, TS, YS, and HV, all increased.
(6) Microstructural analyses of the AZ31-B strips revealed that the finest grain sizes and lowest SDAS were obtained using a copper substrate rather than a steel substrate.
(7) Coated substrates reduced the capability of heat extraction but stabilized the dimensions of the strip and gave a good surface quality.
(8) The black layer on the strips cast in air is composed of small particles of MgO (in the 100 nm range).
---
*Source: 101872-2014-01-19.xml* | 101872-2014-01-19_101872-2014-01-19.md | 43,318 | Effect of Casting Parameters on the Microstructural and Mechanical Behavior of Magnesium AZ31-B Alloy Strips Cast on a Single Belt Casting Simulator | Ahmad Changizi; Mamoun Medraj; Mihaiela Isac | Advances in Materials Science and Engineering
(2014) | Engineering & Technology | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2014/101872 | 101872-2014-01-19.xml | ---
## Abstract
Strips of magnesium alloy AZ31-B were cast on a simulator of a horizontal single belt caster incorporating a moving mold system. Mixtures of CO2 and sulfur hexafluoride (SF6) gases were used as protective atmosphere during melting and casting. The castability of the AZ31-B strips was investigated for a smooth, low carbon steel substrate, and six copper substrates with various textures and roughnesses. Graphite powder was used to coat the substrates. The correlation between strip thickness and heat flux was investigated. It was found that the heat flux from the forming strip to the copper substrate was higher than that to the steel substrate, while coated substrates registered lower heat fluxes than uncoated substrates. The highest heat flux from the strip was recorded for casting on macrotextured copper substrates with 0.15 mm grooves. As the thickness of the strip decreased, the net heat flux decreased. As the heat flux increased, the grain sizes of the strips were reduced, and the SDAS decreased. The mechanical properties were improved when the heat flux increased. The black layers which formed on the strips’ surfaces were analyzed and identified as nanoscale MgO particles. Nano-Scale particles act as light traps and appeared black.
---
## Body
## 1. Introduction
Magnesium is the lightest structural metal in common use [1]. Similarly, supplies of magnesium ores are virtually inexhaustible. Magnesium alloys normally have very good castability and machinability, as well as excellent specific strength and stiffness [2]. However magnesium alloys have some difficulty during rolling due to hexagonal close packed (hcp) lattice structure [3]. Meanwhile, a fine grain structure increases strength and ductility by promoting the operation of nonbasal slip systems and limiting twinning in magnesium alloys [4]. Strip casting of magnesium has become important in recent years. For reducing the cost of thin sheets of magnesium alloys, strip casting technologies such as horizontal single belt casting (HSBC), twin roll casting (TRC), and twin belt casting (TBC) have been developed [3]. With a strip casting process, magnesium alloy strips can typically be produced in thicknesses of 1–10 mm [1]. Direct strip casting, or HSBC, as a near-net-shape casting process, has potential use in the processing of aluminum, copper, zinc, and lead alloys, directly into sheet products.Generally, most metals and alloys are amenable to direct casting into plates, strips, or ribbons. However, a metallurgical understanding of these materials is needed to determine their suitability for casting into thin-gauge strips. Technically, the evaluation of a process must take into account the melting point of the alloy, the freezing range, the oxidation resistance in both the liquid and solid states, the heat transfer behavior, the fluidity of the melt, and the number and type of the liquid-to-solid, and solid-state, transformations that may occur. There is a particular emphasis on magnesium alloys as these are the major candidates for large-scale production by this processing route, given the poor hot rolling capabilities caused by its hexagonal close packed structure [5, 6]. The present study was carried out to investigate the possibility of directly casting magnesium strip products on a horizontal single belt caster (HSBC). The alloy studied was the AZ31-B magnesium alloy.
## 2. Experimental Procedure
### 2.1. Raw Materials and Melting Unit
Commercial magnesium alloy AZ31 grade B bar ingots obtained from Magnesium Electron Co. were used as raw material in the present experiments. The raw bar ingot materials were cut into smaller pieces and prepared for melting. Graphite powder, comprising particles of 0.5–0.6μm, obtained from Asbury Carbons Co., was used to coat the casting substrates. The graphite was of a synthetic variety, and the particles in general were flake shaped.
### 2.2. Strip Casting Simulator
A schematic overview of the strip casting simulator is shown in Figure1. The equipment includes the following: a containment mould, a substrate onto which the melt can be poured, a tundish, a motor that drags the substrate at preselected casting speeds, and a data acquisition system. The simulator can be set to produce strips of w
(
80
)
×
l
(
1100
)
×
t
(
1
-
5
) mm dimensions. The casting substrates can be coated with different materials such as the graphite used in the present research. In order to measure local heat fluxes, two K type thermocouples were placed in each segment of the substrate; one was set near the surface and the other was placed slightly below the first thermocouple.Figure 1
Schematic of the strip casting single belt simulator.
### 2.3. Substrate Specification
Two types of material substrates were used in the present experiments, steel with a polished surface and pure copper with six differently textured inserts/segments. Figure2 illustrates the schematics along with photos of the six copper substrates (2
×
2 inch segments). Table 1 provides dimensional specifications of different areas of copper chill substrate. Each segment had a different macrotexture.Table 1
Dimensional specifications of different areas of copper substrate presented in Figure2.
Segment
a (µm)
x (µm)
y (µm)
z (mm)
I
0
0
0
0
II
64.4
477.7
90
7
III
78.5
825
150
8
IV
317
1103
240
320
V
488
998
300
0.99
VI
451
1074
600
1.07Schematic and photos of different areas of copper substrate.
(a)
(b)
### 2.4. Melting and Pouring of Molten Metal
The melt was prepared in a closed steel crucible, in which the atmosphere of the melt was protected from air ingress at all steps in the casting process, so as to avoid any oxidation or possible burning. The protective atmosphere comprised a mixture of sulfur hexafluoride, 0.5% SF6, in a carrier gas of carbon dioxide, 99.5% CO2.The molten metal was heated using an induction furnace, to a temperature of 710°C, which is above the pouring temperature, 700°C, in order to provide sufficient superheat for skimming and metal transferring purposes. The molten metal was poured directly from the crucible into the tundish to avoid any large temperature drops. Before pouring, the tundish was preheated up to 150°C for drying the refractory walls inside the tundish. At the same time, the substrate was preheated to 30°C for avoiding any moisture. Transferring and pouring the molten metal were done manually. The substrate was propelled by a loaded spring at a constant speed of 0.5 m/sec.
### 2.5. Calculating Heat Flux and Applying IHCP to a Horizontal Single Belt Casting Simulator
Beck put forward a nonlinear estimation method to deal with phase changes and temperature dependent thermal properties of the solidification process used for solving the IHCP [9]. To treat experimental data, statistical principles and the concept of amplitudes temperatures are applied to the thermal capacity and heat conduction of the substrate during subsurface temperature measurements. This application is performed using a nonlinear estimation method to solve the delayed and diminished thermal response problems. The heat flux is taken to be a constant or a linear function of time within a given time interval. This is the principle of Beck’s nonlinear estimation technique, according to which the heat flux is then determined for that period according to the following function:
(1)
F
(
q
)
=
∑
i
N
1
∑
j
N
2
(
N
3
+
1
)
(
T
i
j
-
Y
i
j
)
2
,
where N
1 is the number of internal points in the temperature measurement excluding those used for boundary condition. N
2 is the number of temperature measurements per time interval. N
3 is the future number of time intervals considered for the heat flux calculation at each time interval. T
i
j and Y
i
j are the calculated and measured temperatures of location i and the time instant j, respectively [10].The IHCP (Inverse Heat Conduction Problem) applied for the interface between the melt and substrate is shown schematically in Figure3; the bounding conditions for the governing differential equation (2) are expressed in (3), (4), and (5) [11] as follows:
(2)
ρ
C
p
(
∂
T
∂
t
)
=
k
∂
2
T
∂
x
2
,
(3)
-
k
∂
T
∂
t
∣
x
=
0
=
q
1
t
>
t
l
-
1
,
(4)
T
(
x
1
,
t
=
t
l
-
1
)
=
F
(
x
1
)
,
(5)
T
(
x
2
,
t
=
t
l
-
1
)
=
F
(
x
2
)
.Figure 3
Schematic of applying IHCP to deduce interfacial heat fluxes for the single belt strip casting simulator.The corresponding differential equation and boundary conditions for sensitivity coefficients are [2]
(6)
ρ
C
p
∂
Ø
∂
t
=
k
∂
2
Ø
∂
x
2
,
-
k
∂
Ø
∂
x
|
x
=
0
=
1
1
>
t
l
-
1
,
Ø
(
x
1
,
t
=
t
l
-
1
)
=
0
,
Ø
(
x
2
,
t
=
t
l
-
1
)
=
0
,
where Ø is a sensitivity coefficient and subscript i denotes the time when it was applied. In this investigation, temperatures are recorded using two thermocouples for each segment of the substrate connected to an Omega Data Acquisition System. In order to convert the time versus temperature data to time versus heat flux, the necessary IHCP software was developed from first principles by Isac et al. [7].
## 2.1. Raw Materials and Melting Unit
Commercial magnesium alloy AZ31 grade B bar ingots obtained from Magnesium Electron Co. were used as raw material in the present experiments. The raw bar ingot materials were cut into smaller pieces and prepared for melting. Graphite powder, comprising particles of 0.5–0.6μm, obtained from Asbury Carbons Co., was used to coat the casting substrates. The graphite was of a synthetic variety, and the particles in general were flake shaped.
## 2.2. Strip Casting Simulator
A schematic overview of the strip casting simulator is shown in Figure1. The equipment includes the following: a containment mould, a substrate onto which the melt can be poured, a tundish, a motor that drags the substrate at preselected casting speeds, and a data acquisition system. The simulator can be set to produce strips of w
(
80
)
×
l
(
1100
)
×
t
(
1
-
5
) mm dimensions. The casting substrates can be coated with different materials such as the graphite used in the present research. In order to measure local heat fluxes, two K type thermocouples were placed in each segment of the substrate; one was set near the surface and the other was placed slightly below the first thermocouple.Figure 1
Schematic of the strip casting single belt simulator.
## 2.3. Substrate Specification
Two types of material substrates were used in the present experiments, steel with a polished surface and pure copper with six differently textured inserts/segments. Figure2 illustrates the schematics along with photos of the six copper substrates (2
×
2 inch segments). Table 1 provides dimensional specifications of different areas of copper chill substrate. Each segment had a different macrotexture.Table 1
Dimensional specifications of different areas of copper substrate presented in Figure2.
Segment
a (µm)
x (µm)
y (µm)
z (mm)
I
0
0
0
0
II
64.4
477.7
90
7
III
78.5
825
150
8
IV
317
1103
240
320
V
488
998
300
0.99
VI
451
1074
600
1.07Schematic and photos of different areas of copper substrate.
(a)
(b)
## 2.4. Melting and Pouring of Molten Metal
The melt was prepared in a closed steel crucible, in which the atmosphere of the melt was protected from air ingress at all steps in the casting process, so as to avoid any oxidation or possible burning. The protective atmosphere comprised a mixture of sulfur hexafluoride, 0.5% SF6, in a carrier gas of carbon dioxide, 99.5% CO2.The molten metal was heated using an induction furnace, to a temperature of 710°C, which is above the pouring temperature, 700°C, in order to provide sufficient superheat for skimming and metal transferring purposes. The molten metal was poured directly from the crucible into the tundish to avoid any large temperature drops. Before pouring, the tundish was preheated up to 150°C for drying the refractory walls inside the tundish. At the same time, the substrate was preheated to 30°C for avoiding any moisture. Transferring and pouring the molten metal were done manually. The substrate was propelled by a loaded spring at a constant speed of 0.5 m/sec.
## 2.5. Calculating Heat Flux and Applying IHCP to a Horizontal Single Belt Casting Simulator
Beck put forward a nonlinear estimation method to deal with phase changes and temperature dependent thermal properties of the solidification process used for solving the IHCP [9]. To treat experimental data, statistical principles and the concept of amplitudes temperatures are applied to the thermal capacity and heat conduction of the substrate during subsurface temperature measurements. This application is performed using a nonlinear estimation method to solve the delayed and diminished thermal response problems. The heat flux is taken to be a constant or a linear function of time within a given time interval. This is the principle of Beck’s nonlinear estimation technique, according to which the heat flux is then determined for that period according to the following function:
(1)
F
(
q
)
=
∑
i
N
1
∑
j
N
2
(
N
3
+
1
)
(
T
i
j
-
Y
i
j
)
2
,
where N
1 is the number of internal points in the temperature measurement excluding those used for boundary condition. N
2 is the number of temperature measurements per time interval. N
3 is the future number of time intervals considered for the heat flux calculation at each time interval. T
i
j and Y
i
j are the calculated and measured temperatures of location i and the time instant j, respectively [10].The IHCP (Inverse Heat Conduction Problem) applied for the interface between the melt and substrate is shown schematically in Figure3; the bounding conditions for the governing differential equation (2) are expressed in (3), (4), and (5) [11] as follows:
(2)
ρ
C
p
(
∂
T
∂
t
)
=
k
∂
2
T
∂
x
2
,
(3)
-
k
∂
T
∂
t
∣
x
=
0
=
q
1
t
>
t
l
-
1
,
(4)
T
(
x
1
,
t
=
t
l
-
1
)
=
F
(
x
1
)
,
(5)
T
(
x
2
,
t
=
t
l
-
1
)
=
F
(
x
2
)
.Figure 3
Schematic of applying IHCP to deduce interfacial heat fluxes for the single belt strip casting simulator.The corresponding differential equation and boundary conditions for sensitivity coefficients are [2]
(6)
ρ
C
p
∂
Ø
∂
t
=
k
∂
2
Ø
∂
x
2
,
-
k
∂
Ø
∂
x
|
x
=
0
=
1
1
>
t
l
-
1
,
Ø
(
x
1
,
t
=
t
l
-
1
)
=
0
,
Ø
(
x
2
,
t
=
t
l
-
1
)
=
0
,
where Ø is a sensitivity coefficient and subscript i denotes the time when it was applied. In this investigation, temperatures are recorded using two thermocouples for each segment of the substrate connected to an Omega Data Acquisition System. In order to convert the time versus temperature data to time versus heat flux, the necessary IHCP software was developed from first principles by Isac et al. [7].
## 3. Results and Discussion
### 3.1. Effect of Substrate Material on Heat Flux
Materials of the substrate in the strip casting process are one of the important issues which affect casting parameters. Table2 shows the values of the maximum heat fluxes recorded for different substrate materials. The casting conditions (speed, superheat, and thickness) were the same.Table 2
Value of maximum heat flux for different substrate materials.
Substrate materials
Coating
R
a
Max. heat flux MW/m2
Steel
No
0
1.29
Copper
No
0
5.06
Steel
Yes
0
0.71
Copper
Yes
0
2.85The thermophysical properties of different substrate materials used in the present experiments are summarized in Table3.Table 3
Thermophysical properties of the substrates used [7, 8].
Substrate materials
C
p (kJ/kg °C)
k (W/m·K)
ρ (kg/m3)
α (cm2/sec)
Carbon steel
0.486
48
7753
0.081
Copper
0.383
386
8954
0.415
Graphite
0.71
24
2200
0.0024The measured interfacial heat flux is related to the thermal conductivity and thermal diffusivity of the substrate. Hence, the large difference between the thermal conductivity,k, of steel and copper is reflected in the value of the heat flux at the mould/melt interface. The bare copper substrate with no graphite coating had the highest cooling capacity and produced the highest heat flux at the mould/melt interface. Because of the poor thermal conductivity of graphite and of steel, the steel substrate coated with a graphite layer had the smallest heat flux at the mould/melt interface.
### 3.2. Effect of the Surface Topography of the Substrate on the Interfacial Heat Flux
In the HSBC casting procedure, the metal/mould interface is not in perfect thermal contact. Localized heat flows through the actual contact points between the metal/mould interfaces are significantly less than in the case of perfect contact.Consider the melt hanging between two parallel running peaks at a distance of2
λ apart; then the melt sag (d
sag) depends on the melt surface tension (σ) and the metallostatic pressure (Δ
P) for a nonwetting substrate. The radius of the metal curvature, R, is [12]
(7)
R
=
σ
Δ
P
=
σ
ρ
g
h
,
where σ is the melt surface tension, ρ is the melt density, g is the gravitational constant, and h is the melt height. The melt sag can be calculated as [5, 13]
(8)
d
sag
=
R
-
R
2
-
λ
2
.However significant thermal resistance exists at the substrate/melt interface because of trapped air, oxide layers, gaps made by shrinkage of the solidifying shell from the interface, thermal expansion of the mold, and so forth.Table4 presents the results of measured heat fluxes for different surface topographies. The maximum heat fluxes were measured in segment III, for both coated and uncoated substrates, while minimum heat fluxes were measured in segment VI. We can conclude that the substrate topography of segment number six had an increased thermal resistance compared to that of segment number three.Table 4
Maximum values of heat flux from 3 mm strips of magnesium cast on macrotextured copper substrate.
Segment no.
Coating
Depth of grooves(mm)
Max. heat flux(MW/m2)
I
No
0
5.06
II
No
0.09
5.78
III
No
0.15
6.60
IV
No
0.24
5.18
V
No
0.3
4.38
VI
No
0.6
3.35
I
Yes
0
2.85
II
Yes
0.09
3.20
III
Yes
0.15
3.53
IV
Yes
0.24
2.93
V
Yes
0.3
2.58
VI
Yes
0.6
2.50Consequently, the maximum surface contact of melt-mould occurred with segment III and the highest number of air pockets was in segment VI.
### 3.3. Effect of Coating of Substrates on Measured Heat Fluxes
Figure4 illustrates the measured heat fluxes of casting strips of magnesium AZ31-B alloy on substrates with various roughnesses, with and without a graphite coating. The measured heat fluxes on graphite-coated substrates were all lower than those on equivalent (matching) bare substrates.Figure 4
Effect of surface topology of substrate on heat flux with and without coating.
### 3.4. Effect of Strip Thickness on Heat Flux
The distance between the nozzle and the substrate was decreased, so as to reduce the thickness of the strips produced from 3 to 1 mm. The substrate was coated with a layer of fine graphite to a thickness of 60μm. Based on Figure 5 and the following equations, it is evident that if one increases the strip thickness, the amount of heat transferred from the strip to the substrate must increase. Essentially, the total amount of the heat loss per unit area required to complete solidification of the liquid metal is
(9)
Δ
Q
t
=
ρ
·
d
{
C
p
(
T
c
-
T
M
)
+
Δ
H
}
,
where ρ is the density of liquid metal, d is the thickness of strip produced, C
p is the specific heat, T
c is the casting temperature, T
M is the melting point, andΔ
H is the latent heat of fusion/mass. Note that ρ
·
d
{
C
p
(
T
c
-
T
M
)
} is the sensible heat of the melt and ρ
·
d
·
Δ
H is the latent heat component.Figure 5
Heat flux versus strip thickness and substrate textures.
### 3.5. Microstructural Analysis
Figure6 shows a microstructure of AZ31-B magnesium alloy close to the strips top surface when casting 3 mm strip on segment III of the copper substrate without any graphite coating.Figure 6
Microstructure of top surface of AZ31-B strip cast on segment III of copper substrate without any graphite coating.It has been claimed by Rappaz and Gandin [13] that the dendrites developed at the interface with the substrate were not inclined, their growth direction being verticality upwards given that the direction of the fluid flow directly influences the growth direction of dendrites at the melt/substrate interface. The dendritic growth mechanism in magnesium AZ31-B alloys has been elaborated by Vander [14]. It is concluded that a similar principle governs the dendritic growth for magnesium AZ31-B alloy investigated in this work. During the solidification of the alloy, a symmetrical solute field at the dendrite tip is established. Accordingly, the solute gradient is uniform on both sides of the dendrite tip during the growth. Thus the main origin of the dendrite growth direction is deemed to be the solute distribution. According to the literature [15, 16], the nonhomogeneous distribution of solute atoms between dendrite arms in a magnesium AZ31-B alloy is mainly due to the fact that the solidification takes place over a range of temperatures. In fact, there is insufficient time for atomic diffusion to redistribute the solute both within the liquid in the vicinity of the solid-liquid interface and within the solid, since cooling occurs so rapidly through the two-phase (L + S) regime. By comparing the microstructures of the bottom surfaces of strips, it can be concluded that the grain size depends significantly on the heat flux. The results of microstructural analyses of all samples are summarized in Table 5.Table 5
Microstructure analyses of strips.
| Depth of groove (mm) | Thickness | Coating | Heat flux (MW/m²) | Grain size, bottom (μm) | Grain size, top (μm) | SDAS, bottom (μm) | SDAS, top (μm) |
|---|---|---|---|---|---|---|---|
| 0.15 | 3 mm | No | 6.6 | 35 | 100 | 5.39 | 6.1 |
| 0.09 | 3 mm | No | 5.775 | 64 | 110 | 7.7 | 6.14 |
| 0.24 | 3 mm | No | 5.175 | 70 | 117 | 8.5 | 7.15 |
| 0 | 3 mm | No | 5.062 | 77 | 121 | 8.6 | 9.2 |
| 0.3 | 3 mm | No | 4.375 | 91 | 136 | 8.9 | 10 |
| 0.6 | 3 mm | No | 3.35 | 92 | 138 | 9.02 | 10.5 |
| 0.15 | 3 mm | Yes | 3.5 | 103 | 110 | 8.5 | 9.1 |
| 0.09 | 3 mm | Yes | 3.2 | 108 | 120 | 10.04 | 11.8 |
| 0.24 | 3 mm | Yes | 2.9 | 110 | 127 | 10.8 | 12.4 |
| 0 | 3 mm | Yes | 2.85 | 115 | 129 | 11.5 | 12.8 |
| 0.3 | 3 mm | Yes | 2.58 | 132 | 137 | 11.58 | 13.5 |
| 0.6 | 3 mm | Yes | 2.5 | 134 | 140 | 11.91 | 13.8 |
| 0.15 | 1 mm | Yes | 2.5 | 108 | 132 | 8.2 | 9.8 |
| 0.09 | 1 mm | Yes | 2.33 | 125 | 135 | 10.5 | 11 |
| 0.24 | 1 mm | Yes | 2.166 | 138 | 142 | 11.7 | 12.4 |
| 0 | 1 mm | Yes | 2.12 | 140 | 150 | 12.2 | 14 |
| 0.3 | 1 mm | Yes | 1.83 | 150 | 162 | 12.4 | 15.6 |
| 0.6 | 1 mm | Yes | 1.72 | 155 | 175 | 13 | 17 |
| 0 (steel) | 3 mm | No | 1.2 | 157 | 225 | 14 | 18 |
| 0 (steel) | 3 mm | Yes | 0.71 | 350 | 470 | 20 | 25 |

As shown in Table 5, with decreasing heat flux and increasing thermal resistance between the substrate and the strip, the grain size increases. The grain size at the top surface is larger than that at the bottom surface because of the solidification delay caused by the smaller heat fluxes at the top surface. The secondary dendrite arm spacing (SDAS) is found to be closely related to the heat flux and the thermal contact resistance: the SDAS increases when air pockets are trapped at the substrate/melt interface, since trapped air dramatically reduces the heat flux, as explained earlier. The occurrence of these air pockets can subsequently influence the microstructure of the strips. Consequently, the grains tend to grow from the bottom surface toward the top surface, resulting in a columnar pattern of grain growth. The effect of these air pockets on the grain structure is clearly shown in Figure 7. A similar observation pertaining to the effect of air pocket formation on the strip's microstructure has been reported by Dubé et al. [17].

Figure 7
Directional grain growth because of air pockets at the melt/mould interface, ×200.
### 3.6. Phase Analysis
Scanning electron microscopy was used to analyze the chemistry and morphology of the secondary phases present in the strip microstructure, as shown in Figure 8.

Figure 8
SEM image of an AZ31-B strip.

Microanalyses using the EDX technique were performed on the intermetallics marked in Figure 8 in order to identify their composition. Figure 9 presents one X-ray spectrum of the analyzed intermetallics.

Figure 9
EDX analysis of point 2 of Figure 8.

The EDX analysis, demonstrated in the associated spectra, indicates that the phases contain Mg, Zn, Al, and Mn. The presence of magnesium in the spectra could originate from either the matrix or the intermetallics. Based on the information from the phase diagram, the matrix is α-Mg, and the particles are an Mg-Al-Zn phase (most likely (Al,Zn)49Mg32) and Al-Mn intermetallics (possibly a mixture of Al11Mn4, Al8Mn5, Al9Mn11, and β-Mn(Al)). However, according to the investigation by Cao et al. [18], the α-Mg is a solid solution of Mg-Al-Zn-Mn.
### 3.7. Analysis of Mechanical Properties of AZ31-B Strips
Figure 10 and Table 6 summarize the mechanical properties of the strip-cast AZ31-B strips. The tensile and yield strengths are estimated from hardness using the relations TS (MPa) = 3.4 × BHN and $H = 3\sigma_y$. Local Vickers hardness was measured on the cross section of the strips. There was no significant difference between hardness values taken at the top and the bottom surfaces of the strips.

Table 6
Mechanical properties of AZ31-B strips.
| Substrate no. | Depth of groove (mm) | Thickness | Coating | Heat flux (MW/m²) | Rc (W/m² °C × 10⁻⁴) | HV | YS (MPa) | TS (MPa) |
|---|---|---|---|---|---|---|---|---|
| III (c) | 0.15 | 3 mm | No | 6.6 | 4.545 | 62 | 147 | 166.6 |
| II (b) | 0.09 | 3 mm | No | 5.77 | 5.195 | 61 | 144 | 163.2 |
| IV (d) | 0.24 | 3 mm | No | 5.17 | 5.797 | 60 | 144 | 163.2 |
| I (a) | 0 | 3 mm | No | 5.06 | 5.927 | 58 | 141 | 159.8 |
| V (e) | 0.3 | 3 mm | No | 4.37 | 6.857 | 56 | 138 | 156.4 |
| VI (f) | 0.6 | 3 mm | No | 3.35 | 8.955 | 54.5 | 135 | 153 |
| III (c) | 0.15 | 3 mm | Yes | 3.5 | 8.571 | 54.9 | 135 | 153 |
| II (b) | 0.09 | 3 mm | Yes | 3.2 | 9.375 | 54.2 | 135 | 153 |
| IV (d) | 0.24 | 3 mm | Yes | 2.9 | 10.345 | 53.1 | 132 | 149.6 |
| I (a) | 0 | 3 mm | Yes | 2.85 | 10.526 | 50.7 | 141 | 159.8 |
| V (e) | 0.3 | 3 mm | Yes | 2.5 | 11.628 | 50.2 | 141 | 159.8 |
| VI (f) | 0.6 | 3 mm | Yes | 2.5 | 12.000 | 50 | 141 | 159.8 |
| III (c) | 0.15 | 1 mm | Yes | 2.5 | 4.000 | 49.8 | 126 | 142.8 |
| II (b) | 0.09 | 1 mm | Yes | 2.33 | 4.292 | 48.4 | 126 | 142.8 |
| IV (d) | 0.24 | 1 mm | Yes | 2.16 | 4.617 | 48.2 | 126 | 142.8 |
| I (a) | 0 | 1 mm | Yes | 2.12 | 4.717 | 48.1 | 126 | 142.8 |
| V (e) | 0.3 | 1 mm | Yes | 1.83 | 5.464 | 48 | 126 | 142.8 |
| VI (f) | 0.6 | 1 mm | Yes | 1.72 | 5.814 | 47.9 | 123 | 139.4 |
| I | 0 (steel) | 3 mm | No | 1.2 | 25.000 | 47.5 | 123 | 139.4 |
| I | 0 (steel) | 3 mm | Yes | 0.71 | 42.254 | 47 | 123 | 139.4 |

Figure 10
The effect of heat flux on the hardness of AZ31-B strips.

Based on the experimental results presented above, as the heat flux increases, the hardness and the other mechanical properties increase. In the case of the 1 mm thick strips, because of the lower weight pressure of the strip on the substrate, the thickness of the air gap between the strip and the substrate increased, and the heat transfer rate from strip to substrate decreased. Because of the lower global heat fluxes, represented as the area under the heat flux-time curve, the 1 mm strips had larger grain sizes and lower mechanical properties. This issue was presented in detail in Section 3.5.
### 3.8. Evaluation of the Black Layer on AZ31-B Strip Surface When Cast in Air
To protect magnesium alloys from oxidation during strip casting, a mixture of CO2 and SF6 is usually used. If it is not, a black layer forms on the surface of the strip, as shown in Figure 11. This layer is strongly adherent and severely compromises the surface quality of the strip for commercial purposes.

Figure 11
Black layer on a strip cast without a protective atmosphere.

XRD tests were performed in order to analyze the nature of the black layer, as shown in Figure 12.

Figure 12
XRD pattern of the black layer.

Figures 13(a) and 13(b), respectively, present an SEM image and an EDX spectrum of the black layer formed on the strip surface when cast in air: (a) SEM image of the black layer; (b) EDX analysis of the black layer.

It is well known that, under normal conditions, MgO is white and MgAl2O4 is colorless. According to the XRD and EDX patterns, the major component of the layer on the strip is definitely MgO. As shown in the SEM image, the particle size of the MgO is around 0.1 μm. According to the literature [19–22], if the particle size of an oxide is so small that it acts as a light trap, it will appear black. It should be noted that the Mg peaks in the XRD and EDX patterns emanated from the base metal, not from the thin black layer of MgO particles forming on top of the cast strip. Clearly, it will be necessary to use a protective atmosphere in the commercial strip casting of magnesium alloys.
## 4. Conclusions
In the present research, the effects of casting parameters on the properties of strips of AZ31-B alloy have been investigated. The effect of heat flux on the microstructure and mechanical properties was studied as well. The following conclusions can be drawn.

(1) As substrate roughness increased beyond 0.15 mm for macroscopically grooved substrates, the thermal resistance increased, while heat fluxes decreased.
(2) As the thickness of the strip increased, the heat flux into the substrates increased.
(3) As the interfacial heat flux increased, the grain size and the SDAS across the strip decreased.
(4) No significant differences were recorded between hardness values taken at the top and the bottom surfaces of the strips.
(5) As heat fluxes increased and the grain sizes decreased, the mechanical properties, TS, YS, and HV, all increased.
(6) Microstructural analyses of AZ31-B strips revealed that the finest grain sizes and lowest SDAS were obtained using a copper substrate versus a steel substrate.
(7) Coated substrates reduced the capability of heat extraction but stabilized the dimensions of the strip and gave a good surface quality.
(8) The black layer on the strips cast in air is composed of small particles of MgO (in the 100 nm range).
---
*Source: 101872-2014-01-19.xml* | 2014 |
# Hopf Bifurcation of an SIQR Computer Virus Model with Time Delay
**Authors:** Zizhen Zhang; Huizhong Yang
**Journal:** Discrete Dynamics in Nature and Society
(2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/101874
---
## Abstract
A delayed SIQR computer virus model is considered. By choosing the delay as a bifurcation parameter, it is shown that there exists a critical value of the delay for the stability of virus prevalence. Furthermore, the properties of the Hopf bifurcation, such as its direction and stability, are investigated by using the normal form method and center manifold theory. Finally, some numerical simulations supporting our theoretical results are also performed.
---
## Body
## 1. Introduction
Recently, many scholars have been studying the prevalence of computer viruses by establishing reasonable mathematical models [1–5]. In [1], Piqueira and Araujo established a modified version of the SIR model for computer viruses in a network and obtained the stability and bifurcation conditions of the model. In [3], Gan et al. proposed an epidemic model of computer viruses by incorporating a vaccination probability into an SIRS model with a generalized nonlinear incidence rate.

As is known, many computer viruses exhibit different kinds of delays when they spread, such as a latent period delay [6, 7], a temporary immunity period delay [8], and other types [9–11]. In [6], Yang proposed the following SIQR computer virus model with time delay:
$$\begin{aligned}
\frac{dS(t)}{dt} &= (1-p)b - \beta S(t-\tau) I(t-\tau) - d S(t),\\
\frac{dI(t)}{dt} &= \beta S(t-\tau) I(t-\tau) - \left(\delta + d + \alpha_1 + \gamma\right) I(t),\\
\frac{dQ(t)}{dt} &= \delta I(t) - \left(\varepsilon + d + \alpha_2\right) Q(t),\\
\frac{dR(t)}{dt} &= \gamma I(t) + p b + \varepsilon Q(t) - d R(t),
\end{aligned} \tag{1}$$
where S(t), I(t), Q(t), and R(t) denote the numbers of nodes in the susceptible, infectious, quarantined, and recovered states at time t, respectively; b is the number of new nodes; p is the proportion of new nodes that are immunized directly; β is the probability for a susceptible node to be infected; d is the natural death rate of nodes; α1 and α2 are the death rates due to the virus for nodes in the infectious and quarantined states, respectively; γ, δ, and ɛ are the coefficients of state transmission; and τ is the latent period of the virus.

Gan et al. investigated the global attractivity and sustainability of system (1) in [3]. However, studies of dynamical systems involve not only attractivity and sustainability but also many other dynamical behaviors, such as stability, bifurcation, and chaos. In particular, the existence and properties of the Hopf bifurcation for delayed dynamical systems have been studied by many authors [8–10, 12]. In [8], Feng et al. investigated the Hopf bifurcation of a delayed SIRS viral infection model in computer networks by regarding the delay as a bifurcation parameter. In [12], Zhuang and Zhu investigated the Hopf bifurcation of an improved HIV model with time delay and cure rate. It is well known that the occurrence of a Hopf bifurcation means that the state of virus prevalence changes from an equilibrium point to a limit cycle, which is not welcome in networks. To the best of our knowledge, few papers deal with the Hopf bifurcation of system (1). Stimulated by this reason and motivated by the work above, we consider the Hopf bifurcation of system (1) in this paper.

This paper is organized as follows. In Section 2, we show that the Hopf bifurcation phenomenon at the positive equilibrium of system (1) can occur as the delay crosses a critical value, by choosing the delay as a bifurcation parameter. In Section 3, explicit formulae for the direction and stability of the Hopf bifurcation are derived by using the normal form theory and the center manifold theorem. In Section 4, some numerical simulations are carried out to verify the theoretical results. A brief discussion concludes this work in Section 5.
## 2. Stability and Existence of Local Hopf Bifurcation
In this section, we mainly focus on the local stability of the positive equilibrium and the existence of a local Hopf bifurcation. It is not difficult to verify that if the basic reproduction number $R_0 = (1-p)b\beta / \left(d\left(\delta + d + \alpha_1 + \gamma\right)\right) > 1$, system (1) has a unique positive equilibrium $D^*(S^*, I^*, Q^*, R^*)$, where
$$S^* = \frac{\delta + d + \alpha_1 + \gamma}{\beta}, \quad I^* = \frac{(1-p)b\beta - d\left(\delta + d + \alpha_1 + \gamma\right)}{\beta\left(\delta + d + \alpha_1 + \gamma\right)}, \quad Q^* = \frac{\delta I^*}{\varepsilon + d + \alpha_2}, \quad R^* = \frac{pb + \gamma I^* + \varepsilon Q^*}{d}. \tag{2}$$
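As a quick worked check of (2), the short Python sketch below evaluates $R_0$ and the positive equilibrium using the parameter values later chosen in Section 4; the variable names are ours.

```python
# Worked check of (2) and R0, using the Section 4 parameter values:
# p=0.2, b=10, beta=0.1, d=0.01, alpha1=0.01, gamma=0.5, delta=0.1,
# eps=0.1, alpha2=0.02.
p, b, beta, d = 0.2, 10.0, 0.1, 0.01
alpha1, gamma, delta, eps, alpha2 = 0.01, 0.5, 0.1, 0.1, 0.02

R0 = (1 - p) * b * beta / (d * (delta + d + alpha1 + gamma))
S = (delta + d + alpha1 + gamma) / beta
I = ((1 - p) * b * beta - d * (delta + d + alpha1 + gamma)) / (beta * (delta + d + alpha1 + gamma))
Q = delta * I / (eps + d + alpha2)
R = (p * b + gamma * I + eps * Q) / d

print(R0)          # 129.03...
print(S, I, Q, R)  # approximately (6.2, 12.8032, 9.8486, 938.646)
```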
The linearization of system (1) about the positive equilibrium D* is
$$\begin{aligned}
\frac{dS(t)}{dt} &= a_1 S(t) + b_1 S(t-\tau) + b_2 I(t-\tau),\\
\frac{dI(t)}{dt} &= a_2 I(t) + b_3 S(t-\tau) + b_4 I(t-\tau),\\
\frac{dQ(t)}{dt} &= a_3 I(t) + a_4 Q(t),\\
\frac{dR(t)}{dt} &= a_5 I(t) + a_6 Q(t) + a_7 R(t),
\end{aligned} \tag{3}$$
where
$$\begin{gathered}
a_1 = -d, \quad a_2 = -\left(\delta + d + \alpha_1 + \gamma\right), \quad a_3 = \delta, \quad a_4 = -\left(\varepsilon + d + \alpha_2\right), \quad a_5 = \gamma, \quad a_6 = \varepsilon, \quad a_7 = -d,\\
b_1 = -\beta I^*, \quad b_2 = -\beta S^*, \quad b_3 = \beta I^*, \quad b_4 = \beta S^*.
\end{gathered} \tag{4}$$
The characteristic equation of system (1) is
$$\lambda^4 + A_3\lambda^3 + A_2\lambda^2 + A_1\lambda + A_0 + \left(B_3\lambda^3 + B_2\lambda^2 + B_1\lambda + B_0\right)e^{-\lambda\tau} = 0, \tag{5}$$
where
$$\begin{gathered}
A_0 = a_1 a_2 a_4 a_7, \quad A_1 = -a_1 a_2\left(a_4 + a_7\right) - a_4 a_7\left(a_1 + a_2\right),\\
A_2 = a_1 a_2 + a_4 a_7 + \left(a_1 + a_2\right)\left(a_4 + a_7\right), \quad A_3 = -\left(a_1 + a_2 + a_4 + a_7\right),\\
B_0 = a_4 a_7\left(a_1 b_4 + a_2 b_1\right), \quad B_1 = -a_4 a_7\left(b_1 + b_4\right) - \left(a_4 + a_7\right)\left(a_1 b_4 + a_2 b_1\right),\\
B_2 = a_1 b_4 + a_2 b_1 + \left(a_4 + a_7\right)\left(b_1 + b_4\right), \quad B_3 = -\left(b_1 + b_4\right).
\end{gathered} \tag{6}$$
For τ = 0, (5) reduces to
$$\lambda^4 + A_3^*\lambda^3 + A_2^*\lambda^2 + A_1^*\lambda + A_0^* = 0, \tag{7}$$
where
$$A_0^* = A_0 + B_0, \quad A_1^* = A_1 + B_1, \quad A_2^* = A_2 + B_2, \quad A_3^* = A_3 + B_3. \tag{8}$$
By the Routh-Hurwitz criterion, if condition (H1), namely, (9)–(12), holds, then D* is locally asymptotically stable in the absence of delay:
$$\mathrm{Det}_1 = A_3^* > 0, \tag{9}$$
$$\mathrm{Det}_2 = \begin{vmatrix} A_3^* & 1 \\ A_1^* & A_2^* \end{vmatrix} > 0, \tag{10}$$
$$\mathrm{Det}_3 = \begin{vmatrix} A_3^* & 1 & 0 \\ A_1^* & A_2^* & A_3^* \\ 0 & A_0^* & A_1^* \end{vmatrix} > 0, \tag{11}$$
$$\mathrm{Det}_4 = \begin{vmatrix} A_3^* & 1 & 0 & 0 \\ A_1^* & A_2^* & A_3^* & 1 \\ 0 & A_0^* & A_1^* & A_2^* \\ 0 & 0 & 0 & A_0^* \end{vmatrix} > 0. \tag{12}$$
For τ > 0, let iω (ω > 0) be a root of (5). Then, we can get
$$\begin{aligned}
g_1(\omega)\cos\tau\omega - g_2(\omega)\sin\tau\omega &= g_3(\omega),\\
g_1(\omega)\sin\tau\omega + g_2(\omega)\cos\tau\omega &= g_4(\omega),
\end{aligned} \tag{13}$$
where
$$g_1(\omega) = B_1\omega - B_3\omega^3, \quad g_2(\omega) = B_0 - B_2\omega^2, \quad g_3(\omega) = A_3\omega^3 - A_1\omega, \quad g_4(\omega) = A_2\omega^2 - \omega^4 - A_0. \tag{14}$$
Then, we can obtain the following equation with respect to ω:
$$\omega^8 + c_3\omega^6 + c_2\omega^4 + c_1\omega^2 + c_0 = 0, \tag{15}$$
where
$$c_0 = A_0^2 - B_0^2, \quad c_1 = A_1^2 - B_1^2 - 2A_0A_2 + 2B_0B_2, \quad c_2 = A_2^2 - B_2^2 + 2A_0 - 2A_1A_3 + 2B_1B_3, \quad c_3 = A_3^2 - B_3^2 - 2A_2. \tag{16}$$
Let ω2=v; then (15) becomes
$$v^4 + c_3v^3 + c_2v^2 + c_1v + c_0 = 0. \tag{17}$$
If the coefficients of system (1) are given, then one can easily get the roots of (17) with the Matlab software package. In order to give the main results of this paper, we make the following assumption.

(H2) Equation (17) has at least one positive root.

If condition (H2) holds, we know that (15) has at least one positive root ω0 such that (5) has a pair of purely imaginary roots ±iω0. The corresponding critical value of the delay is
$$\tau_0 = \frac{1}{\omega_0}\arccos\frac{m_6\omega_0^6 + m_4\omega_0^4 + m_2\omega_0^2 + m_0}{n_6\omega_0^6 + n_4\omega_0^4 + n_2\omega_0^2 + n_0}, \tag{18}$$
where
$$\begin{gathered}
m_0 = -A_0B_0, \quad m_2 = A_0B_2 - A_1B_1 + A_2B_0, \quad m_4 = A_1B_3 - A_2B_2 + A_3B_1 - B_0, \quad m_6 = B_2 - A_3B_3,\\
n_0 = B_0^2, \quad n_2 = B_1^2 - 2B_0B_2, \quad n_4 = B_2^2 - 2B_1B_3, \quad n_6 = B_3^2.
\end{gathered} \tag{19}$$
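The computation of ω0 and τ0 from (15)–(19) can be sketched numerically as below, assuming the coefficients of (5) are known and condition (H2) holds; the helper name `critical_delay` is ours, not from the paper.

```python
# Hedged numerical sketch of the steps around (15)-(18): given the coefficients
# A0..A3 and B0..B3 of the characteristic equation (5), find the positive real
# roots v* of the quartic (17), take omega0 = sqrt(v*), and evaluate the
# corresponding critical delay from (18).
import numpy as np

def critical_delay(A, B):
    A0, A1, A2, A3 = A
    B0, B1, B2, B3 = B
    # Coefficients of (17), from (16).
    c0 = A0**2 - B0**2
    c1 = A1**2 - B1**2 - 2*A0*A2 + 2*B0*B2
    c2 = A2**2 - B2**2 + 2*A0 - 2*A1*A3 + 2*B1*B3
    c3 = A3**2 - B3**2 - 2*A2
    # Positive real roots v of v^4 + c3 v^3 + c2 v^2 + c1 v + c0 = 0.
    roots = np.roots([1.0, c3, c2, c1, c0])
    vs = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
    taus = []
    for v in vs:
        w = np.sqrt(v)
        # Numerator and denominator of the arccos argument in (18), from (19).
        m = (-A0*B0, A0*B2 - A1*B1 + A2*B0, A1*B3 - A2*B2 + A3*B1 - B0, B2 - A3*B3)
        n = (B0**2, B1**2 - 2*B0*B2, B2**2 - 2*B1*B3, B3**2)
        num = m[3]*w**6 + m[2]*w**4 + m[1]*w**2 + m[0]
        den = n[3]*w**6 + n[2]*w**4 + n[1]*w**2 + n[0]
        taus.append(np.arccos(num / den) / w)
    return min(taus) if taus else None  # tau0 is the smallest critical delay
```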
Substituting λ(τ) into the left side of (5) and taking the derivative with respect to τ, one can obtain
$$\left(\frac{d\lambda}{d\tau}\right)^{-1} = -\frac{4\lambda^3 + 3A_3\lambda^2 + 2A_2\lambda + A_1}{\lambda\left(\lambda^4 + A_3\lambda^3 + A_2\lambda^2 + A_1\lambda + A_0\right)} + \frac{3B_3\lambda^2 + 2B_2\lambda + B_1}{\lambda\left(B_3\lambda^3 + B_2\lambda^2 + B_1\lambda + B_0\right)}. \tag{20}$$
Thus,
$$\mathrm{Re}\left[\frac{d\lambda}{d\tau}\right]^{-1}_{\tau=\tau_0} = \frac{f'(v^*)}{g_1^2(\omega_0) + g_2^2(\omega_0)}, \tag{21}$$
where $v^* = \omega_0^2$ and $f(v) = v^4 + c_3v^3 + c_2v^2 + c_1v + c_0$. Thus, if condition (H3), $f'(v^*) \ne 0$, holds, then $\mathrm{Re}[d\lambda/d\tau]^{-1}_{\tau=\tau_0} \ne 0$. According to the Hopf bifurcation theorem in [13], we have the following results.

Theorem 1.
If conditions (H1)–(H3) hold, then the positive equilibrium D*(S*, I*, Q*, R*) of system (1) is asymptotically stable for τ ∈ [0, τ0); system (1) undergoes a Hopf bifurcation at the positive equilibrium D*(S*, I*, Q*, R*) when τ = τ0, and a family of periodic solutions bifurcates from the positive equilibrium D*(S*, I*, Q*, R*) near τ = τ0.
## 3. Direction and Stability of the Hopf Bifurcation
In this section, we investigate the direction of the Hopf bifurcation and the stability of the bifurcating periodic solutions by using the normal form theory and the center manifold theorem in [13].

Let u1(t) = S(t) − S*, u2(t) = I(t) − I*, u3(t) = Q(t) − Q*, u4(t) = R(t) − R*, and τ = τ0 + μ, μ ∈ ℝ, and normalize the time delay by t → (t/τ). Then system (1) can be transformed into an FDE as
$$\dot{u}(t) = L_\mu u_t + F(\mu, u_t), \tag{22}$$
where
$$\begin{gathered}
u_t = \left(u_1(t), u_2(t), u_3(t), u_4(t)\right)^T \in C\left([-1,0], \mathbb{R}^4\right),\\
L_\mu\phi = \left(\tau_0 + \mu\right)\left(A_{\mathrm{trix}}\,\phi(0) + B_{\mathrm{trix}}\,\phi(-1)\right),\\
F(\mu,\phi) = \left(\tau_0 + \mu\right)\begin{pmatrix} -\beta\phi_1(-1)\phi_2(-1) \\ \beta\phi_1(-1)\phi_2(-1) \\ 0 \\ 0 \end{pmatrix},
\end{gathered} \tag{23}$$
where
$$A_{\mathrm{trix}} = \begin{pmatrix} a_1 & 0 & 0 & 0 \\ 0 & a_2 & 0 & 0 \\ 0 & a_3 & a_4 & 0 \\ 0 & a_5 & a_6 & a_7 \end{pmatrix}, \quad B_{\mathrm{trix}} = \begin{pmatrix} b_1 & b_2 & 0 & 0 \\ b_3 & b_4 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}. \tag{24}$$
By the Riesz representation theorem, there is a 4×4 matrix function with bounded variation components η(θ,μ), θ∈[-1,0] such that
$$L_\mu\phi = \int_{-1}^{0} d\eta(\theta,\mu)\,\phi(\theta), \quad \phi \in C\left([-1,0], \mathbb{R}^4\right). \tag{25}$$
In fact, we choose
$$\eta(\theta,\mu) = \left(\tau_0 + \mu\right)\left(A_{\mathrm{trix}}\,\delta(\theta) + B_{\mathrm{trix}}\,\delta(\theta+1)\right), \tag{26}$$
where δ is the Dirac delta function.

For φ ∈ C([-1,0], ℝ⁴), we define
$$A(\mu)\phi = \begin{cases} \dfrac{d\phi(\theta)}{d\theta}, & -1 \le \theta < 0,\\[2mm] \displaystyle\int_{-1}^{0} d\eta(\theta,\mu)\,\phi(\theta), & \theta = 0, \end{cases} \qquad R(\mu)\phi = \begin{cases} 0, & -1 \le \theta < 0,\\ F(\mu,\phi), & \theta = 0. \end{cases} \tag{27}$$
Then system (22) is equivalent to the following operator equation:
$$\dot{u}(t) = A(\mu)u_t + R(\mu)u_t. \tag{28}$$
Next, we define the adjoint operator $A^*$ of $A$:

$$A^*(\varphi) = \begin{cases} -\dfrac{d\varphi(s)}{ds}, & 0 < s \le 1,\\[2mm] \displaystyle\int_{-1}^{0} d\eta^T(s,0)\,\varphi(-s), & s = 0, \end{cases} \tag{29}$$
and a bilinear inner product
$$\langle\varphi(s), \phi(\theta)\rangle = \bar{\varphi}(0)\phi(0) - \int_{\theta=-1}^{0}\int_{\xi=0}^{\theta} \bar{\varphi}(\xi-\theta)\,d\eta(\theta)\,\phi(\xi)\,d\xi, \tag{30}$$
where η(θ) = η(θ, 0).

Let $\rho(\theta) = (1, \rho_2, \rho_3, \rho_4)^T e^{i\omega_0\tau_0\theta}$ be the eigenvector of A(0) corresponding to $+i\omega_0\tau_0$, and let $\rho^*(s) = D(1, \rho_2^*, \rho_3^*, \rho_4^*) e^{i\omega_0\tau_0 s}$ be the eigenvector of $A^*(0)$ corresponding to $-i\omega_0\tau_0$. From the definitions of A(0) and $A^*(0)$, by a simple computation we obtain
$$\begin{gathered}
\rho_2 = \frac{i\omega_0 - a_1 - b_1 e^{-i\omega_0\tau_0}}{b_2 e^{-i\omega_0\tau_0}}, \quad \rho_3 = \frac{a_3\left(i\omega_0 - a_1 - b_1 e^{-i\omega_0\tau_0}\right)}{b_2\left(i\omega_0 - a_4\right)e^{-i\omega_0\tau_0}},\\
\rho_4 = \frac{a_5\left(i\omega_0 - a_1 - b_1 e^{-i\omega_0\tau_0}\right)}{b_2\left(i\omega_0 - a_7\right)e^{-i\omega_0\tau_0}} + \frac{a_3 a_6\left(i\omega_0 - a_1 - b_1 e^{-i\omega_0\tau_0}\right)}{b_2\left(i\omega_0 - a_4\right)\left(i\omega_0 - a_7\right)e^{-i\omega_0\tau_0}},\\
\rho_2^* = -\frac{i\omega_0 + a_1 + b_1 e^{i\omega_0\tau_0}}{b_3 e^{i\omega_0\tau_0}}, \quad \rho_3^* = -\frac{a_6\left(i\omega_0 + a_2 + b_4 e^{i\omega_0\tau_0}\right)\rho_2^*}{i a_5\omega_0 + a_4 a_5 - a_3 a_6},\\
\rho_4^* = -\frac{\left(i\omega_0 + a_4\right)\left(i\omega_0 + a_2 + b_4 e^{i\omega_0\tau_0}\right)\rho_2^*}{i a_5\omega_0 + a_4 a_5 - a_3 a_6}.
\end{gathered} \tag{31}$$
From the definition of 〈φ(s),ϕ(θ)〉, we can obtain
$$\langle q^*(s), q(\theta)\rangle = \bar{D}\left[1 + \rho_2\bar{\rho}_2^* + \rho_3\bar{\rho}_3^* + \rho_4\bar{\rho}_4^* + \tau_0 e^{-i\omega_0\tau_0}\left(b_1 + b_3\bar{\rho}_2^* + \rho_2\left(b_2 + b_4\bar{\rho}_2^*\right)\right)\right]. \tag{32}$$
Then we choose
$$\bar{D} = \left[1 + \rho_2\bar{\rho}_2^* + \rho_3\bar{\rho}_3^* + \rho_4\bar{\rho}_4^* + \tau_0 e^{-i\omega_0\tau_0}\left(b_1 + b_3\bar{\rho}_2^* + \rho_2\left(b_2 + b_4\bar{\rho}_2^*\right)\right)\right]^{-1} \tag{33}$$
such that $\langle q^*, q\rangle = 1$ and $\langle q^*, \bar{q}\rangle = 0$.

Following the algorithms given in [13] and using a computation process similar to that in [14], we can get the coefficients which determine the direction and stability of the Hopf bifurcation:
$$\begin{aligned}
g_{20} &= 2\beta\tau_0\bar{D}\rho^{(1)}(-1)\rho^{(2)}(-1)\left(\bar{\rho}_2^* - 1\right),\\
g_{11} &= \beta\tau_0\bar{D}\left(\rho^{(1)}(-1)\bar{\rho}^{(2)}(-1) + \bar{\rho}^{(1)}(-1)\rho^{(2)}(-1)\right)\left(\bar{\rho}_2^* - 1\right),\\
g_{02} &= 2\beta\tau_0\bar{D}\bar{\rho}^{(1)}(-1)\bar{\rho}^{(2)}(-1)\left(\bar{\rho}_2^* - 1\right),\\
g_{21} &= 2\beta\tau_0\bar{D}\left(\bar{\rho}_2^* - 1\right)\Big(W_{11}^{(1)}(-1)\rho^{(2)}(-1) + \tfrac{1}{2}W_{20}^{(1)}(-1)\bar{\rho}^{(2)}(-1)\\
&\qquad\qquad + W_{11}^{(2)}(-1)\rho^{(1)}(-1) + \tfrac{1}{2}W_{20}^{(2)}(-1)\bar{\rho}^{(1)}(-1)\Big),
\end{aligned} \tag{34}$$
with
$$\begin{aligned}
W_{20}(\theta) &= \frac{i g_{20} q(0)}{\omega_0\tau_0} e^{i\omega_0\tau_0\theta} + \frac{i\bar{g}_{02}\bar{q}(0)}{3\omega_0\tau_0} e^{-i\omega_0\tau_0\theta} + E_1 e^{2i\omega_0\tau_0\theta},\\
W_{11}(\theta) &= -\frac{i g_{11} q(0)}{\omega_0\tau_0} e^{i\omega_0\tau_0\theta} + \frac{i\bar{g}_{11}\bar{q}(0)}{\omega_0\tau_0} e^{-i\omega_0\tau_0\theta} + E_2,
\end{aligned} \tag{35}$$
where E1 and E2 can be determined by the following equations, respectively:
$$E_1 = 2\begin{pmatrix} a_{11}' & a_{12}' & 0 & 0 \\ a_{21}' & a_{22}' & 0 & 0 \\ 0 & -a_3 & a_{33}' & 0 \\ 0 & -a_5 & -a_6 & a_{44}' \end{pmatrix}^{-1} \begin{pmatrix} E_1^{(1)} \\ E_1^{(2)} \\ 0 \\ 0 \end{pmatrix}, \quad E_2 = -\begin{pmatrix} b_{11}' & b_2 & 0 & 0 \\ b_3 & b_{22}' & 0 & 0 \\ 0 & a_3 & a_4 & 0 \\ 0 & a_5 & a_6 & a_7 \end{pmatrix}^{-1} \begin{pmatrix} E_2^{(1)} \\ E_2^{(2)} \\ 0 \\ 0 \end{pmatrix}, \tag{36}$$
with
$$\begin{gathered}
a_{11}' = 2i\omega_0 - a_1 - b_1 e^{-2i\omega_0\tau_0}, \quad a_{12}' = -b_2 e^{-2i\omega_0\tau_0}, \quad a_{21}' = -b_3 e^{-2i\omega_0\tau_0}, \quad a_{22}' = 2i\omega_0 - a_2 - b_4 e^{-2i\omega_0\tau_0},\\
a_{33}' = 2i\omega_0 - a_4, \quad a_{44}' = 2i\omega_0 - a_7, \quad b_{11}' = a_1 + b_1, \quad b_{22}' = a_2 + b_4,\\
E_1^{(1)} = -\beta\rho^{(1)}(-1)\rho^{(2)}(-1), \quad E_1^{(2)} = \beta\rho^{(1)}(-1)\rho^{(2)}(-1),\\
E_2^{(1)} = -\beta\left(\rho^{(1)}(-1)\bar{\rho}^{(2)}(-1) + \bar{\rho}^{(1)}(-1)\rho^{(2)}(-1)\right), \quad E_2^{(2)} = \beta\left(\rho^{(1)}(-1)\bar{\rho}^{(2)}(-1) + \bar{\rho}^{(1)}(-1)\rho^{(2)}(-1)\right).
\end{gathered} \tag{37}$$
Then, we can get the following coefficients:
$$\begin{gathered}
C_1(0) = \frac{i}{2\omega_0\tau_0}\left(g_{11}g_{20} - 2\left|g_{11}\right|^2 - \frac{\left|g_{02}\right|^2}{3}\right) + \frac{g_{21}}{2},\\
\mu_2 = -\frac{\mathrm{Re}\left\{C_1(0)\right\}}{\mathrm{Re}\left\{\lambda'(\tau_0)\right\}}, \quad \beta_2 = 2\,\mathrm{Re}\left\{C_1(0)\right\}, \quad T_2 = -\frac{\mathrm{Im}\left\{C_1(0)\right\} + \mu_2\,\mathrm{Im}\left\{\lambda'(\tau_0)\right\}}{\omega_0\tau_0}.
\end{gathered} \tag{38}$$
In conclusion, we have the following results.

Theorem 2.
For system (1), if μ2 > 0 (μ2 < 0), then the Hopf bifurcation is supercritical (subcritical). If β2 < 0 (β2 > 0), then the bifurcating periodic solutions are stable (unstable). If T2 > 0 (T2 < 0), then the period of the bifurcating periodic solutions increases (decreases).
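As a small illustration of how Theorem 2 is applied, the following sketch classifies the bifurcation from $C_1(0)$ and $\lambda'(\tau_0)$ via (38); the two complex inputs are the values reported for system (39) in Section 4, and small differences from the paper's μ2, β2, and T2 come from rounding in those reported inputs.

```python
# Hedged sketch: evaluate (38) and read off the bifurcation properties as in
# Theorem 2. The complex inputs below are the values reported in Section 4.
c1_0 = complex(-11.0407, 1.3002)   # C1(0)
dlam = complex(1.6465, -0.8380)    # lambda'(tau0)
omega0, tau0 = 0.4107, 1.3265

mu2 = -c1_0.real / dlam.real
beta2 = 2 * c1_0.real
t2 = -(c1_0.imag + mu2 * dlam.imag) / (omega0 * tau0)

print("supercritical" if mu2 > 0 else "subcritical")       # direction
print("stable" if beta2 < 0 else "unstable")                # periodic solutions
print("period increases" if t2 > 0 else "period decreases")
```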
## 4. Numerical Simulation and Discussion
In this section, a numerical example is given to support the theoretical results in Sections 2 and 3. Let p = 0.2, β = 0.1, d = 0.01, α1 = 0.01, γ = 0.5, δ = 0.1, ɛ = 0.1, α2 = 0.02, and b = 10. Then, we get the following particular case of system (1):
$$\begin{aligned}
\frac{dS(t)}{dt} &= 8 - 0.1 S(t-\tau) I(t-\tau) - 0.01 S(t),\\
\frac{dI(t)}{dt} &= 0.1 S(t-\tau) I(t-\tau) - 0.62 I(t),\\
\frac{dQ(t)}{dt} &= 0.1 I(t) - 0.13 Q(t),\\
\frac{dR(t)}{dt} &= 0.5 I(t) + 2 + 0.1 Q(t) - 0.01 R(t).
\end{aligned} \tag{39}$$
Then, we can get R0 = 129.0323 > 1 and the unique positive equilibrium D*(6.2, 12.8032, 9.8486, 938.646) of system (39). By some complex computation, it can be verified that condition (H1) is satisfied for system (39). Further, we obtain ω0 = 0.4107 and τ0 = 1.3265. By Theorem 1, we can conclude that when τ ∈ [0, 1.3265) the positive equilibrium D*(6.2, 12.8032, 9.8486, 938.646) is asymptotically stable. This property is illustrated by Figures 1, 2, and 3. However, if we choose τ = 1.3625 > τ0 = 1.3265, the positive equilibrium D*(6.2, 12.8032, 9.8486, 938.646) becomes unstable and a Hopf bifurcation occurs, which is illustrated by Figures 4, 5, and 6. Furthermore, we obtain λ′(τ0) = 1.6465 − 0.8380i and C1(0) = −11.0407 + 1.3002i. Then, from (38) we get μ2 = 6.7482 > 0, β2 = −22.1048 < 0, and T2 = 7.9935 > 0. Therefore, according to Theorem 2, the Hopf bifurcation of system (39) is supercritical, the bifurcated periodic solutions are stable, and the period of the periodic solutions increases.

Figure 1
The track of the states S, I, Q, and R for τ = 1.3025 < 1.3265 = τ0.

Figure 2
The phase plot of the states S, I, and R for τ = 1.3025 < 1.3265 = τ0.

Figure 3
The phase plot of the states I, Q, and R for τ = 1.3025 < 1.3265 = τ0.

Figure 4
The track of the states S, I, Q, and R for τ = 1.3625 > 1.3265 = τ0.

Figure 5
The phase plot of the states S, I, and R for τ = 1.3625 > 1.3265 = τ0.

Figure 6
The phase plot of the states I, Q, and R for τ = 1.3625 > 1.3265 = τ0.

In addition, according to the numerical simulations, we find that the onset of the Hopf bifurcation can be delayed by decreasing the number of new nodes connected to the network or by increasing the immunization rate of the new nodes. Therefore, the managers of a real network should control the number of new nodes connected to the network and strengthen the immunization of the new nodes in order to delay and control the onset of the Hopf bifurcation, so that the propagation of computer viruses can be predicted and controlled more easily.
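The qualitative behavior shown in the figures can be reproduced with a simple fixed-step Euler scheme that keeps a history buffer for the delayed terms; the step size, horizon, and constant initial history below are our illustrative choices, not taken from the paper.

```python
# Hedged sketch: simulate the delayed system (39) with fixed-step Euler and a
# history buffer for S(t - tau) and I(t - tau).
import numpy as np

def simulate(tau, h=0.01, t_end=1000.0, init=(6.0, 13.0, 10.0, 930.0)):
    n = int(t_end / h)
    lag = int(round(tau / h))  # delay expressed in steps
    S, I, Q, R = (np.empty(n + 1) for _ in range(4))
    S[:lag + 1], I[:lag + 1], Q[:lag + 1], R[:lag + 1] = init  # constant history
    for k in range(lag, n):
        Sd, Id = S[k - lag], I[k - lag]  # delayed states
        S[k + 1] = S[k] + h * (8 - 0.1 * Sd * Id - 0.01 * S[k])
        I[k + 1] = I[k] + h * (0.1 * Sd * Id - 0.62 * I[k])
        Q[k + 1] = Q[k] + h * (0.1 * I[k] - 0.13 * Q[k])
        R[k + 1] = R[k] + h * (0.5 * I[k] + 2 + 0.1 * Q[k] - 0.01 * R[k])
    return S, I, Q, R

# tau below tau0 = 1.3265 should settle near D*; tau above it should oscillate.
for tau in (1.3025, 1.3625):
    S, I, Q, R = simulate(tau)
    print(tau, I[-2000:].min(), I[-2000:].max())  # a persistent gap suggests a limit cycle
```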
## 5. Conclusion
In this paper, the problem of the Hopf bifurcation for a delayed SIQR computer virus model has been studied. The stability of the positive equilibrium and the existence of the Hopf bifurcation under this model are analyzed. It has been found that when the delay is suitably small (τ < τ0), the computer virus model is asymptotically stable. In this case, the characteristics of the propagation of computer viruses can be easily predicted and controlled. However, if the delay passes through the critical value τ0, a Hopf bifurcation occurs. Then, the propagation of computer viruses becomes unstable and out of control. Furthermore, the properties of the Hopf bifurcation, such as its direction and stability, have also been investigated in detail. Finally, numerical results have been presented to verify the analytical predictions.
---
*Source: 101874-2015-01-15.xml* | 2015 |
# A Multiple Kernel Learning Model Based on p-Norm
**Authors:** Jinshan Qi; Xun Liang; Rui Xu
**Journal:** Computational Intelligence and Neuroscience
(2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1018789
---
## Abstract
By utilizing kernel functions, support vector machines (SVMs) successfully solve linearly inseparable problems, and their applicable areas have consequently been greatly extended. Using multiple kernels (MKs) to improve the SVM classification accuracy has been a hot topic in the SVM research community for several years. However, most MK learning (MKL) methods employ an L1-norm constraint on the kernel combination weights, which forms a sparse yet nonsmooth solution for the kernel weights. Alternatively, the Lp-norm constraint on the kernel weights keeps all information in the base kernels, but the solution of Lp-norm constrained MKL is nonsparse and sensitive to noise. Recently, some scholars presented an efficient sparse generalized MKL (L1- and L2-norm based GMKL) method, in which the L1- and L2-norms together establish an elastic constraint on the kernel weights. In this paper, we further extend the GMKL to a more generalized MKL method based on the p-norm, by joining the L1- and Lp-norms. Consequently, the L1- and L2-norm based GMKL is a special case of our method when p = 2. Experiments demonstrate that our L1- and Lp-norm based MKL offers higher classification accuracy than the L1- and L2-norm based GMKL, while keeping the properties of the L1- and L2-norm based GMKL.
---
## Body
## 1. Introduction of MKL
The support vector machine (SVM) is a classification and regression tool based on statistical machine learning [1]. By utilizing a kernel function, the SVM maps the data into a high-dimensional space, builds an optimal separating hyperplane, and consequently solves nonlinear problems. In solving an SVM problem, it is critical to choose an adequate kernel function; widely used kernel functions are the radial basis functions and polynomial functions. Selecting an effective kernel is very important, as different kernels and parameters produce different classification and regression results. In this paper, we try to use the features of different kernels to improve the classification accuracy of the SVM.

The multiple kernel learning (MKL) model [2] is a flexible learning model. Recent research shows that MKL can obtain higher classification accuracy than a single kernel. As MKL uses different combinations of kernel functions and has greater flexibility, its performance is normally better. Constructing the MK model is, in fact, the process of seeking the combination of M kernels that gives the best classification accuracy. Thus, in the MK framework, seeking the weights of the different kernels is the central problem of MKL [3, 4]. The simplest form of MKL is the L1-norm MKL [5]. The L1-norm MKL finds the kernel weights on a simplex and thus yields a sparse solution [6, 7]. The sparsity of the selected kernels is helpful in identifying an appropriate combination of data sources or feature subsets in real-world applications. However, the method may discard useful information and thus result in suboptimal generalization.

Alternatively, the L2-norm MKL was proposed by another group of researchers, and it improves on the L1-norm MKL in some scenarios. Unfortunately, the solution of the L2-norm MKL is nonsparse, which means it uses all kernels in the forecasting stage. The L2-norm MKL is also sensitive to noise: when there are noisy data in the training set, the classification accuracy decreases greatly. Furthermore, it suffers from poor interpretability and can lead to high computational and storage costs.

Thus, there has been research intending to combine the L1-norm MKL and the L2-norm MKL. The resulting algorithm is called the generalized MKL (GMKL) [8]; it combines the advantages of both the L1- and L2-norms and is able to achieve higher classification accuracy. Nonetheless, the GMKL algorithm is specialized to the combination of the sparse MKL method with one particular nonsparse kernel learning method, the L2-norm MKL. That research contributed the merging of the L1- and L2-norm MKLs into the GMKL as a general model [9]. In this paper, we extend the algorithm to a more general form, which combines the sparse MKL with all nonsparse MKL algorithms; that is, we generalize the L2-norm MKL to the Lp-norm.

In this paper, we combine the L1- and Lp-norms [10] by extending the constraint on the kernel weights to $v\sum_{j=1}^{M}u_j + (1-v)\sum_{j=1}^{M}u_j^p \le 1$. We call our algorithm MKL based on the p-norm (MKL-BP). In particular, when p = 2 the MKL-BP algorithm degenerates into the GMKL algorithm. In our experiments, as p → ∞, the accuracy of our algorithm tends to be stable and is higher than the results with p = 2. Meanwhile, compared with the L1- and L2-norm MKL method, MKL-BP also shows higher classification accuracy. The advantage of using Lp norms is that more flexibility can be achieved during the experiments.
As p changes, the generalization and precision vary accordingly.

The paper is organized as follows: Section 2 describes the MKL-BP model in detail. Section 3 analyzes and verifies the relevant definitions and theorems of the MKL-BP model. The implementation of the MKL-BP model is described in Section 4. Section 5 uses the MKL-BP model to carry out experiments on the UCI datasets and compares its accuracy, running time, and other properties with those of other MKL models. Section 6 concludes this research with directions for future work.
## 2. Base Framework of MKL-BP
Based on statistical machine learning, a classification problem can be written in the general form

$$f = \arg\min_f \left( C_{\mathrm{emp}}(f) + \Omega(f) \right). \tag{1}$$

The empirical risk is $C_{\mathrm{emp}}(f) = \frac{1}{N}\sum_{m=1}^{N}R\left(f(x_m), y_m\right)$, while the regularization risk is $\Omega(f) = \frac{1}{2}\left\|w\right\|^2$. The parameter C is a preset constant used for balancing the empirical and regularization risks.

The C-SVM model can be written as

$$\begin{aligned}
\min_{w,b}\quad & \frac{1}{2}\left\|w\right\|^2 + C\sum_{m=1}^{N}\xi_m\\
\text{s.t.}\quad & y_m\left(w^T\phi(x_m) + b\right) \ge 1 - \xi_m,\quad m = 1,\ldots,N,\\
& \xi_m \ge 0.
\end{aligned} \tag{2}$$

By optimizing problem (2), the classifier is obtained as

$$f(x) = w^T\phi(x) + b,\quad w \in \mathbb{R}^{d_H},\; b \in \mathbb{R}. \tag{3}$$

Using the Lagrange function and the kernel $K(x_m, x_n) = \langle \phi(x_m), \phi(x_n)\rangle$, we can get the dual form of problem (2):

$$\begin{aligned}
\max_{\alpha}\quad & \sum_{m=1}^{N}\alpha_m - \sum_{m=1}^{N}\sum_{n=1}^{N}\alpha_m\alpha_n K(x_m, x_n)\\
\text{s.t.}\quad & \sum_{m=1}^{N}\alpha_m = 0,\quad 0 \le \alpha_m \le C.
\end{aligned} \tag{4}$$

Problem (4) is the simplest form of the SVM. In the MKL model, the kernel K is a linear combination of a series of base kernels:

$$K = \sum_{j=1}^{M}u_j K_j. \tag{5}$$

In (5), $u_j$ is the weight of kernel $K_j$, and M is the number of kernels. Substituting (5) for $K(x_m, x_n)$ in (4), we can get the standard form of MKL:

$$\begin{aligned}
\min_{u \in A}\;\max_{\alpha}\quad & \sum_{m=1}^{N}\alpha_m - \sum_{m=1}^{N}\sum_{n=1}^{N}\alpha_m\alpha_n\sum_{j=1}^{M}u_j K_j(x_m, x_n)\\
\text{s.t.}\quad & \sum_{m=1}^{N}\alpha_m = 0,\quad 0 \le \alpha_m \le C,
\end{aligned} \tag{6}$$

where $u = (u_1, \ldots, u_M)^T$ and A is the constraint domain of u. In the MKL model, the simplest domain is that of the L1-norm MKL, where $A = \{u_j \mid u_j \ge 0,\ \sum_{j=1}^{M}u_j \le 1\}$. Research shows that the L2- and Lp-norm MKLs, where $A = \{u_j \mid u_j \ge 0,\ \sum_{j=1}^{M}u_j^p \le 1\}$, have better classification characteristics in some respects.

Previous research combined the L1- and L2-norm MKLs into the GMKL model and showed that the resulting model keeps the sparsity of the L1-norm MKL while its classification accuracy does not decrease when facing noisy data. The domain A in the GMKL model is $\{u_j \mid u_j \ge 0,\ v\sum_{j=1}^{M}u_j + (1-v)\sum_{j=1}^{M}u_j^2 \le 1\}$. The preset constant v, with $0 \le v \le 1$, is used to balance the L1- and L2-norm MKLs; the experiments showed that when v = 0.5 the model achieves the best classification accuracy.

However, that work only combined the sparse method with one specific nonsparse MKL. In this paper, we generalize the model: concretely, we generalize the domain A to $\{u_j \mid u_j \ge 0,\ v\sum_{j=1}^{M}u_j + (1-v)\sum_{j=1}^{M}u_j^p \le 1\}$. We call our model MKL based on the p-norm (MKL-BP).

We present the properties of our model in the next section, where we show that it keeps the character of the GMKL. We then give the algorithm for solving the resulting high-order constrained problem, and we carry out simulation experiments to compare the classification accuracy, running time, and number of used kernels of our model with those of other models.
## 3. Theorem of MKL-BP
Theorem 1.
Not all the kernels are selected in the MKL-BP model, and the weights $u_j$ of the selected kernels are unique.

Proof.
By fixing $\alpha = (\alpha_1, \ldots, \alpha_N)^T$ as $\alpha^*$, we can optimize u in (6) with $\alpha^*$ held fixed. Using the Lagrange function, we get

$$L(u) = \sum_{m=1}^{N}\alpha_m^* - \sum_{m=1}^{N}\sum_{n=1}^{N}\alpha_m^*\alpha_n^*\sum_{j=1}^{M}u_j K_j(x_m, x_n) + \lambda\left(v\sum_{j=1}^{M}u_j + (1-v)\sum_{j=1}^{M}u_j^p\right). \tag{7}$$
Taking the partial derivative with respect to $u_j$, we get
$$\frac{\partial L}{\partial u_j}=-\sum_{m=1}^{N}\sum_{n=1}^{N}\alpha_m^*\alpha_n^*K_j(x_m,x_n)+\lambda\left(v+p(1-v)u_j^{p-1}\right).\tag{8}$$
Setting $\partial L/\partial u_j=0$, we get
$$u_j=\left[\frac{1}{p(1-v)}\left(\frac{1}{\lambda}\sum_{m=1}^{N}\sum_{n=1}^{N}\alpha_m^*\alpha_n^*K_j(x_m,x_n)-v\right)\right]^{1/(p-1)}.\tag{9}$$
Since the bracketed quantity in (9) may be negative, we instead set
$$u_j=\max\left(\left[\frac{1}{p(1-v)}\left(\frac{1}{\lambda}\sum_{m=1}^{N}\sum_{n=1}^{N}\alpha_m^*\alpha_n^*K_j(x_m,x_n)-v\right)\right]^{1/(p-1)},\ 0\right).\tag{10}$$
From (10) we see that when $(1/\lambda)\sum_{m=1}^{N}\sum_{n=1}^{N}\alpha_m^*\alpha_n^*K_j(x_m,x_n)<v$ we get $u_j=0$, so not all kernels are selected when $0<v<1$; our model thus selects only the useful kernels during optimization. Also from (10), the optimal $u_j$ is unique in our model.
In the special case $v=0$, the algorithm degenerates into $L_p$-norm MKL, and we get
$$u_j=\left[\frac{1}{p\lambda}\sum_{m=1}^{N}\sum_{n=1}^{N}\alpha_m^*\alpha_n^*K_j(x_m,x_n)\right]^{1/(p-1)}.\tag{11}$$
We find that all $u_j>0$, which indicates that all kernels are selected in $L_p$-norm MKL, so no useful kernel is discarded during optimization. However, such a model does not reach high prediction accuracy when faced with noisy data, and in that scenario it may also incur higher computational complexity.

Definition 2 (similar kernels).

Given the optimizer $\alpha^*$ of (4), if two selected kernels $K_j$ and $K_q$ satisfy
$$\left|\sum_{m=1}^{N}\sum_{n=1}^{N}\alpha_m^*\alpha_n^*K_j(x_m,x_n)-\sum_{m=1}^{N}\sum_{n=1}^{N}\alpha_m^*\alpha_n^*K_q(x_m,x_n)\right|\le 1,\tag{12}$$
we call them similar kernels.

Theorem 3.
Similar kernels obtain the same kernel weights $u_j$ as $p$ approaches the limit.

Proof.
We bound $u_j-u_q$ as follows:
$$\begin{aligned}
u_j-u_q&=\left[\frac{1}{p(1-v)}\left(\frac{1}{\lambda}\sum_{m=1}^{N}\sum_{n=1}^{N}\alpha_m^*\alpha_n^*K_j(x_m,x_n)-v\right)\right]^{1/(p-1)}-\left[\frac{1}{p(1-v)}\left(\frac{1}{\lambda}\sum_{m=1}^{N}\sum_{n=1}^{N}\alpha_m^*\alpha_n^*K_q(x_m,x_n)-v\right)\right]^{1/(p-1)}\\
&\le\frac{1}{p(1-v)}\left(\frac{1}{\lambda}\sum_{m=1}^{N}\sum_{n=1}^{N}\alpha_m^*\alpha_n^*K_j(x_m,x_n)-v\right)-\frac{1}{p(1-v)}\left(\frac{1}{\lambda}\sum_{m=1}^{N}\sum_{n=1}^{N}\alpha_m^*\alpha_n^*K_q(x_m,x_n)-v\right)\\
&\le\frac{1}{p\lambda(1-v)},
\end{aligned}\tag{13}$$
where the last inequality uses (12).
As $p$ approaches the limit, $u_j-u_q\to 0$. Theorem 3 indicates that for large $p$ the differences between the weights $u_j$ of similar kernels become very small, so the classification accuracy no longer changes; for instance, with $\lambda=1$ and $v=0.5$ the bound in (13) equals $2/p$, which is below $0.04$ once $p\ge 64$.
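As a quick illustration of (10) (our own sketch, not code from the paper): given the quantities $S_j=\sum_{m,n}\alpha_m^*\alpha_n^*K_j(x_m,x_n)$ for a fixed $\alpha^*$, the kernel weights and the thresholding at $v$ take only a few lines to evaluate; the values of $S_j$, $\lambda$, $v$, and $p$ below are synthetic.

```python
import numpy as np

def mkl_bp_weights(S, lam, v, p):
    """Evaluate (10): u_j = max(((S_j/lam - v) / (p(1-v)))**(1/(p-1)), 0)."""
    base = np.maximum((S / lam - v) / (p * (1.0 - v)), 0.0)
    return base ** (1.0 / (p - 1.0))

# Synthetic S_j values for M = 4 base kernels; kernels with S_j/lam < v drop out.
S = np.array([0.9, 0.6, 0.3, 0.05])
for p in (2, 4, 16, 64):
    print(p, np.round(mkl_bp_weights(S, lam=1.0, v=0.5, p=p), 4))
```

Note how, as $p$ grows, the nonzero weights move toward one another, which is exactly the behavior described by Theorem 3.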
## 4. Solution of MKL-BP
Although we have presented the MKL-BP model, problem (6) is still hard to optimize: it is a quadratic program with a high-order constraint. In the GMKL algorithm, [11] used the level method to solve the problem; in our model, however, the constraint is of degree $p$, and the method in [11] does not apply directly, so we resort to a Taylor expansion to solve the problem approximately. We use the coordinate descent method and solve the problem iteratively: we fix $u$ or $\alpha$, solve the resulting subproblem, and then update the other variable.

Process 1.
Update $\alpha$ with $u$ fixed. In the first iteration, each $u_j$ is initialized as the approximate solution of $vu_j+(1-v)u_j^p=1/M$. Problem (6) then turns into the standard SVM problem
$$\max_{\alpha}\ \sum_{m=1}^{N}\alpha_m-\sum_{m=1}^{N}\sum_{n=1}^{N}\alpha_m\alpha_n\sum_{j=1}^{M}u_j^{t}K_j(x_m,x_n)\quad\text{s.t.}\ \sum_{m=1}^{N}\alpha_m=0,\ 0\le\alpha_m\le C,\tag{14}$$
where $t$ denotes the iteration index. We employ the SMO algorithm to solve this standard problem.
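The initialization needs the root of $vu+(1-v)u^p=1/M$ on $[0,1]$; since the left-hand side increases monotonically from $0$ to $1$, a simple bisection suffices (a sketch of one possible implementation; the paper does not specify how this approximate solution is obtained):

```python
def init_weight(M, v, p, tol=1e-10):
    """Solve v*u + (1-v)*u**p = 1/M for u in [0, 1] by bisection.

    g(u) = v*u + (1-v)*u**p is increasing on [0, 1] with g(0) = 0 and
    g(1) = 1 >= 1/M, so the root is unique and bracketed.
    """
    target = 1.0 / M
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if v * mid + (1.0 - v) * mid ** p < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

u0 = init_weight(M=130, v=0.5, p=4)   # e.g., 13*(d+1) kernels with d = 9
print(u0, 0.5 * u0 + 0.5 * u0 ** 4)   # the second value should be ~1/130
```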
Process 2.

Update $u$ with $\alpha$ fixed; (6) then turns into a quadratic program with a high-order constraint. We use a second-order Taylor expansion to decrease the degree:
$$u^p\approx u_t^p+pu_t^{p-1}(u-u_t)+\frac{p(p-1)}{2}u_t^{p-2}(u-u_t)^2=\frac{p(p-1)}{2}u_t^{p-2}u^2+(2p-p^2)u_t^{p-1}u+\frac{p^2-3p+2}{2}u_t^p.\tag{15}$$
Using the transformation in (15), the constraint turns into
$$v\sum_{j=1}^{M}u_j+(1-v)\sum_{j=1}^{M}u_j^p\approx\sum_{j=1}^{M}(1-v)\frac{p(p-1)}{2}u_{j,t}^{p-2}u_j^2+\sum_{j=1}^{M}\left(v+(1-v)(2p-p^2)u_{j,t}^{p-1}\right)u_j+(1-v)\sum_{j=1}^{M}\frac{p^2-3p+2}{2}u_{j,t}^p.\tag{16}$$
With the Taylor expansion we have thus reduced the high-order constraint to a quadratic one. Next, as in GMKL, we use the level method and the CVX toolbox to solve the problem of Process 2; CVX is a MATLAB toolbox for solving convex optimization problems.
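As a quick numerical sanity check of (15) (ours, not from the paper), the quadratic surrogate can be compared with $u^p$ near the expansion point $u_t$:

```python
def taylor_up(u, ut, p):
    """Right-hand side of (15): second-order Taylor surrogate of u**p around ut."""
    return (p * (p - 1) / 2 * ut ** (p - 2) * u ** 2
            + (2 * p - p ** 2) * ut ** (p - 1) * u
            + (p ** 2 - 3 * p + 2) / 2 * ut ** p)

p, ut = 4, 0.3
for u in (0.25, 0.30, 0.35):
    print(u, u ** p, taylor_up(u, ut, p))  # exact vs. surrogate; equal at u = ut
```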
Process 3.

Alternate the updates of $u$ and $\alpha$ until the stopping criterion is satisfied: the program stops when it reaches the maximum number of iterations or when the change in the objective function falls below a threshold.

Note that for $p>2$ we have reduced the problem to the same form as GMKL, so the complexity is the same as that of GMKL; according to [8], the complexity of GMKL is $O(\delta^{-2})$, where $\delta$ is the solution tolerance.
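Putting Processes 1-3 together, the overall loop has the following shape. This is a structural sketch only: scikit-learn's `SVC` stands in for the LibSVM/SMO solver of Process 1, and, instead of the level-method/CVX solve of Process 2, the stationarity formula (10) with a fixed $\lambda$ is used as a crude stand-in for the weight update; the data and all parameter values are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def init_weight(M, v, p, tol=1e-10):
    """Bisection for v*u + (1-v)*u**p = 1/M (see the earlier sketch)."""
    lo, hi, target = 0.0, 1.0, 1.0 / M
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if v * mid + (1 - v) * mid ** p < target else (lo, mid)
    return 0.5 * (lo + hi)

def update_u(kernels, alpha, sv, lam, v, p):
    """Stand-in for Process 2: stationarity formula (10) with a fixed lambda."""
    S = np.array([alpha @ Kj[np.ix_(sv, sv)] @ alpha for Kj in kernels])
    base = np.maximum((S / lam - v) / (p * (1 - v)), 0.0)
    return base ** (1.0 / (p - 1.0))

def mkl_bp(kernels, y, C=100.0, v=0.5, p=4, lam=1.0, n_iter=10):
    M = len(kernels)
    u = np.full(M, init_weight(M, v, p))
    # n_iter is fixed here; Process 3 would also stop when the change in the
    # objective function falls below a threshold.
    for t in range(n_iter):
        K = sum(uj * Kj for uj, Kj in zip(u, kernels))  # combined kernel (5)
        svc = SVC(kernel="precomputed", C=C).fit(K, y)  # Process 1 (SMO/LibSVM)
        sv, alpha = svc.support_, np.abs(svc.dual_coef_.ravel())
        u = update_u(kernels, alpha, sv, lam, v, p)     # Process 2 (stand-in)
    return u

X, y = make_classification(n_samples=120, n_features=8, random_state=1)
kernels = [rbf_kernel(X, gamma=g) for g in (0.01, 0.1, 1.0)]
print(mkl_bp(kernels, y))
```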
## 5. Experiments
In this section we use UCI data to evaluate the classification accuracy of different algorithms. We evaluate the following algorithms:

(1) *Ave-Kernel*. A baseline combination of the kernels with uniform weights $u_j=1/M$, solved with a standard SVM solver.

(2) *Simple-MKL*. A traditional $L_1$-MKL model, widely used as a comparison algorithm.

(3) *$L_p$-MKL*. The constraint on the kernel weights is $\sum_{j=1}^{M}u_j^p\le 1$; in our paper we obtain it by setting $v=0$.

(4) *GMKL*. The constraint on the kernel weights is $\{u_j\mid u_j\ge 0,\ v\sum_{j=1}^{M}u_j+(1-v)\sum_{j=1}^{M}u_j^2\le 1\}$; in our paper we obtain it by setting $p=2$.

To be consistent with past work, all SVM QP subproblems are solved with the LibSVM QP solver, and the kernel weights are updated with the CVX toolbox. The SVM parameter $C$ is set to 100. For the MKL-BP algorithm, $p$ takes the values 2, 3, 4, 5, 6, 7, 8, 16, 32, and 64 (when $p=2$ the algorithm degenerates to GMKL), and $v$ is set to 0.5. We analyze our MKL-BP algorithm on the UCI database; the experiments use 5 UCI datasets, whose format is given in Table 1.
Table 1. Datasets, where Number is the number of instances and Dim is the number of features.

| Data name | Number | Dim |
|---|---|---|
| Diabetes | 768 | 9 |
| Heart | 270 | 13 |
| Ionosphere | 351 | 33 |
| Liver-disorders | 345 | 6 |
| Sonar | 208 | 60 |

The kernels are set as follows:

(1) *Gaussian kernel*: $K(x_i,x_j)=e^{-\|x_i-x_j\|^2/\sigma^2}$, with 10 parameters $\sigma^2\in\{2^{-3},\dots,2^{6}\}$.

(2) *Polynomial kernel*: $K(x_i,x_j)=(x_ix_j^T+1)^d$, with parameters $d\in\{1,2,3\}$.

The Gaussian and polynomial kernels are the most popular kernels in SVMs, and combining them in one model combines their characteristics in classification. Following the simple-MKL and GMKL setups, we normalize each kernel matrix and construct $13(d+1)$ kernels ($d$ is the dimension of the data, and 13 is the total number of Gaussian and polynomial parameter settings). We randomly divide each dataset into two halves, one for training and one for testing, and repeat the split 50 times to obtain the effect of cross-validation. For every UCI dataset we run the experiments 5 times and report the average accuracy.
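The construction of the kernel bank can be sketched as follows. This is our reading of the $13(d+1)$ count, namely that each of the 13 parameter settings is applied to the full feature vector and to each single feature, which is the usual simple-MKL protocol; the trace normalization is likewise one common choice, as the paper only states that the matrices are normalized.

```python
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel

def kernel_bank(X):
    """Build 13*(d+1) Gram matrices: 10 Gaussian widths and 3 polynomial
    degrees, on the full data and on each single feature, trace-normalized."""
    d = X.shape[1]
    views = [X] + [X[:, [j]] for j in range(d)]           # d + 1 feature views
    bank = []
    for V in views:
        for s in (2.0 ** k for k in range(-3, 7)):        # sigma^2 in {2^-3,...,2^6}
            bank.append(rbf_kernel(V, gamma=1.0 / s))
        for deg in (1, 2, 3):
            bank.append(polynomial_kernel(V, degree=deg, gamma=1.0, coef0=1))
    n = X.shape[0]
    return [K * (n / np.trace(K)) for K in bank]

X = np.random.RandomState(0).randn(40, 13)                # Heart has d = 13
print(len(kernel_bank(X)))                                # 13 * (13 + 1) = 182
```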
(1) *Effect of $p$ on accuracy.* Table 2 shows that when $p>2$ the accuracy of the MKL-BP model increases on a small scale: compared with $p=2$, the accuracy increases by 1.21%, 1.81%, 1.11%, 2.27%, and 0.58% on the five datasets, respectively. However, Figure 1 shows that as $p$ varies the accuracy does not change on a large scale.
Table 2. The accuracy (%) of the MKL-BP model for different values of $p$.

| Dataset | p=2 | p=3 | p=4 | p=5 | p=6 | p=7 | p=8 | p=16 | p=32 | p=64 |
|---|---|---|---|---|---|---|---|---|---|---|
| Diabetes | 76.2917 | 77.1875 | 77.2396 | 77.5 | 77.4479 | 77.5 | 77.5 | 77.5 | 77.5 | 77.5 |
| Heart | 77.1481 | 79.8519 | 80.1481 | 79.2593 | 79.2593 | 79.4074 | 79.4074 | 78.963 | 78.963 | 78.963 |
| Ionosphere | 90.9318 | 92.0455 | 91.9318 | 92.0455 | 92.0455 | 92.0455 | 91.9318 | 92.0455 | 92.0455 | 92.0455 |
| Liver-disorder | 69.5202 | 72.4855 | 72.3699 | 71.9075 | 71.7919 | 72.0231 | 71.9075 | 71.7919 | 71.7919 | 71.7919 |
| Sonar | 79.8077 | 81.7308 | 80.3846 | 80.3846 | 80.3846 | 80.3846 | 80.3846 | 80.3846 | 80.3846 | 80.3846 |

Figure 1. The accuracy of the MKL-BP model for different values of $p$.
(2) *Accuracy compared with other models.* From Table 3 we find that when $p\to\infty$ the MKL-BP model gets better classification accuracy. Except on the Heart data, the MKL-BP model shows the highest accuracy; the GMKL model (MKL-BP with $p=2$), the simple-MKL model, and the $L_p$-MKL model all have similar accuracies slightly below MKL-BP, while the Ave-Kernel model shows the lowest accuracy. The accuracy comparison of the different algorithms is also shown in Figure 2.
Table 3. The accuracy (%) comparison of the different algorithms (the numbers in brackets are ranks: 1 means the highest of the five models and 5 the lowest).

| Dataset | Ave-Kernel | Simple-MKL | $L_p$-MKL | GMKL ($p=2$) | MKL-BP |
|---|---|---|---|---|---|
| Diabetes | 75.1224 (5) | 75.44 (4) | 76.5625 (2) | 76.2917 (3) | 77.5 (1) |
| Heart | 76.1523 (5) | 80.98 (1) | 79.4074 (2) | 77.1481 (4) | 78.963 (3) |
| Ionosphere | 90.5432 (5) | 91.48 (3) | 92.0455 (1) | 90.9318 (4) | 92.0455 (1) |
| Liver-disorder | 64.2538 (4) | 63.35 (5) | 70.4855 (2) | 69.5202 (3) | 71.7919 (1) |
| Sonar | 75.1146 (5) | 76.71 (4) | 79.7063 (3) | 79.8077 (2) | 80.3846 (1) |
| Average rank | 4.8 | 3.4 | 2 | 3.2 | 1.4 |

Figure 2. The accuracy comparison of the different algorithms.
(3) *Running time.* Table 4 demonstrates that the Ave-Kernel model is the fastest, since it only needs to solve a single SVM problem. The $L_p$-MKL model runs somewhat faster than the simple-MKL and GMKL models, which show similar running times. The MKL-BP model is the slowest: using the Taylor expansion to compute $u_{j,t}^p$ takes slightly more time, so improving the running speed of our algorithm is a problem to be solved in future research. For a clear comparison, Figure 3 shows the running times of the different models.
Table 4. The running time, in seconds, of the different models (the numbers in brackets are ranks).

| Dataset | Ave-Kernel | Simple-MKL | $L_p$-MKL | GMKL ($p=2$) | MKL-BP |
|---|---|---|---|---|---|
| Diabetes | 10.75 (1) | 21.83 (3) | 17.46 (2) | 25.09 (4) | 43.44 (5) |
| Heart | 1.96 (1) | 14.44 (3) | 10.06 (2) | 20.08 (4) | 47.63 (5) |
| Ionosphere | 7.81 (1) | 28.95 (4) | 27.14 (3) | 26.38 (2) | 56.11 (5) |
| Liver-disorder | 5.69 (1) | 10.95 (2) | 15.30 (3) | 20.02 (4) | 46.94 (5) |
| Sonar | 1.37 (1) | 53.41 (5) | 30.76 (2) | 33.99 (3) | 43.14 (4) |
| Average rank | 1 | 3.4 | 2.4 | 3.4 | 4.8 |

Figure 3. The running time of the different models.
(4) *Number of selected kernels.* Table 5 shows the variation in the number of selected kernel functions: the Ave-Kernel and $L_p$-MKL models select all the kernels, while the simple-MKL, GMKL, and MKL-BP models select only a portion of them, which indicates that MKL-BP keeps the kernel-selection sparsity of the $L_1$-MKL and GMKL models. MKL-BP and GMKL select more kernels than simple-MKL: simple-MKL may discard some useful kernels, whereas MKL-BP, like GMKL, retains them. Figure 4 compares the number of kernels used by the different models.
Table 5. The number of kernels used by the different models.

| Dataset | Ave-Kernel | Simple-MKL | $L_p$-MKL | GMKL ($p=2$) | MKL-BP |
|---|---|---|---|---|---|
| Diabetes | 117 | 27 | 117 | 30 | 35.8 |
| Heart | 182 | 25 | 182 | 36 | 35.4 |
| Ionosphere | 442 | 45 | 442 | 64.6 | 70 |
| Liver-disorder | 91 | 21 | 91 | 29.6 | 29.8 |
| Sonar | 793 | 90 | 793 | 104.8 | 129.2 |

Figure 4. The number of kernels used by the different models.
(5) *Choice of $p$.* When $p>2$, the accuracy of the MKL-BP model increases on a small scale. However, the accuracy of MKL-BP is not linear in $p$; for example, the best classification accuracy on Heart is obtained at $p=4$. Therefore, when $p$ changes, the advantage of MKL-BP is that we can obtain higher accuracy than GMKL, but the disadvantage is that we cannot guarantee the optimal $p$ for the model in advance.

In summary, multiple kernels improve generalization and precision in all the experiments, although the running speed of our model still leaves room for improvement.
## 6. Conclusion
In this paper we presented a novel MKL model, the MKL-BP model, based on the $p$-norm. The model combines the $L_1$-MKL and $L_p$-MKL models and generalizes the GMKL model from $p=2$ to $p\ge 2$. The MKL-BP model keeps the sparsity of the $L_1$-MKL and GMKL models: it selects only useful kernels and obtains relatively high classification accuracy when faced with noisy data. We use a Taylor expansion to make the optimization problem tractable.

From the experiments we found that, compared with the other MKL models, our MKL-BP model obtains a higher classification accuracy, and the number of kernels it selects is much smaller than for the Ave-Kernel and $L_p$-MKL models. Nevertheless, how to increase the classification speed of the MKL-BP model is still a problem to be solved in future research. In future work, the convergence rates observed in the experiments may be improved by combining our method with the coordinate descent method of [12].
---
*Source: 1018789-2018-01-23.xml*
# Polymorphisms Associated with Age at Onset in Patients with Moderate-to-Severe Plaque Psoriasis
**Authors:** Rocío Prieto-Pérez; Guillermo Solano-López; Teresa Cabaleiro; Manuel Román; Dolores Ochoa; María Talegón; Ofelia Baniandrés; José Luis López-Estebaranz; Pablo de la Cueva; Esteban Daudén; Francisco Abad-Santos
**Journal:** Journal of Immunology Research
(2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/101879
---
## Abstract
Psoriasis is a chronic skin disease in which genetics play a major role. Although many genome-wide association studies have been performed in psoriasis, knowledge of the age at onset remains limited. Therefore, we analyzed 173 single-nucleotide polymorphisms in genes associated with psoriasis and other autoimmune diseases in patients with moderate-to-severe plaque psoriasis type I (early-onset, <40 years) or type II (late-onset, ≥40 years) and healthy controls. Moreover, we performed a comparison between patients with type I psoriasis and patients with type II psoriasis. Our comparison of a stratified population with type I psoriasis (N=155) and healthy controls (N=197) is the first to reveal a relationship between the CLMN, FBXL19, CCL4L, C17orf51, TYK2, IL13, SLC22A4, CDKAL1, and HLA-B/MICA genes and type I psoriasis. When we compared type I psoriasis with type II psoriasis (N=36), we found a significant association between age at onset and the genes PSORS6, TNF-α, FCGR2A, TNFR1, CD226, HLA-C, TNFAIP3, and CCHCR1. Moreover, we replicated the association of rs12191877 (HLA-C) with type I psoriasis and with type I versus type II psoriasis. Our findings highlight the role of genetics in the age at onset of psoriasis.
---
## Body
## 1. Introduction
Psoriasis is a chronic inflammatory skin disorder with a major genetic component. The prevalence of chronic plaque psoriasis is around 2% in the general population [1]. The many genetic studies performed in recent years showed that genes such as interleukin 23 receptor (IL23R), IL12B, and tumor necrosis factor alpha (TNFα) are closely associated with psoriasis and related diseases such as rheumatoid arthritis, psoriatic arthritis, and Crohn's disease [2]. Human leukocyte antigen C (HLA-C) ∗0602 is the allele most closely associated with this disease [3]. The age at onset of psoriasis follows a bimodal distribution [4]: type I psoriasis appears before the age of 40 years (early-onset), with a peak at 16–22 years; type II psoriasis appears after the age of 40 years (late-onset), with a peak at 57–60 years [5]. Type I psoriasis has been associated with several single-nucleotide polymorphisms (SNPs) in genes associated with the immune response (Table 1). For example, HLA-C∗0602 is more strongly associated with type I psoriasis than with type II psoriasis [5]. Although several association studies have already been performed in psoriasis in both populations (type I or type II psoriasis patients), knowledge of age at onset remains limited and controversial (Table 1) [6]. Therefore, we performed a candidate gene study in which we evaluated genetic susceptibility to type I or type II psoriasis in patients with moderate-to-severe chronic plaque psoriasis. This approach may help us to identify SNPs previously associated with psoriasis or other autoimmune diseases [2] that are specific to type I or type II psoriasis. Furthermore, our genetic study could improve our understanding of psoriasis and of its etiology and pathogenesis.

Table 1
SNPs associated with type I (early-onset) and type II (late-onset) psoriasis: an update.
SNP
Gene
Function†
Association with
References
Ps type I
Ps type II
Ps type I versus type II
—
HLA-C
∗0602
X
X
[6–10]
—
HLA-C
∗12:02
X
[8]
rs1265181
HLA-C
Encodes a class I molecule which plays a central role in the immune system by presenting peptides derived from endoplasmic reticulum lumen
X
X
[11]
rs12191877
X
∗
X
∗
[7, 11]
rs4406273
X
X
[11]
rs2395029
X
[11]
rs10484554
X
X
X
[7, 10, 12]
rs13191099
X
[4]
rs10876882
HLA-A
X
[4]
rs33980500
TRAF3IP2
Encodes a protein involved in regulating responses to cytokines by members of the Rel/NF-kappa-B transcription factor family
X
[11]
rs71562288
X
[4]
rs2233278
TNIP1
Encodes A20-binding protein which plays a role in autoimmunity and tissue homeostasis through the regulation of nuclear factor kappa-B activation
X
[11]
rs17728338
TNIP1
X
[11]
rs1295685
IL13
Encodes a cytokine involved in several stages of B cell maturation and differentiation
X
[11]
rs17716942
IFIH1
Encodes an Asp-Glu-Ala-Asp box protein (putative RNA helicases)
X
[7]
rs1990760
X
[4]
rs27524
ERAP1
Encodes an aminopeptidase involved in trimming HLA class I-binding precursors
X
[6]#
rs11209026
IL23R
Encodes a subunit of the receptor for IL23A/IL23
X
[6]#
rs72676067
X
[4]
rs10876882
IL23A
Encodes a subunit of IL23 involved in immune responses
X
[4]
—
LCE3B/LCE3C-del
Encodes precursors of the cornified envelope of the stratum corneum
X
[6, 13]#
rs2546890
IL12B
Encodes a subunit of IL12 that acts on T and natural killer cells
X
X
[4, 14]
rs60813083
RNF114
Encodes a protein that may play a role in spermatogenesis
X
[4]
rs887998
IL1R1
Encodes a receptor for IL1 involved in inflammatory responses
X
[4]
rs16944
IL1B
Encodes a cytokine produced by activated macrophages and is involved in immune responses, cell proliferation, differentiation, and apoptosis
X
[4, 15]
rs2853550
X
[4]
rs26653
ERAP1
Encodes an aminopeptidase involved in trimming HLA class I-binding precursors so that they can be presented on the MHC class I molecule
X
[10]
rs30187
X
[10]
rs2227473
IL22
Encodes an interleukin 22 that contributes to the inflammatory response
X
[16]
rs2227483
X
[16]
INDEL rs35774195/rs10784699
X
[16]
rs6822844
IL2/IL21
Encode cytokines that are important in the innate and adaptive immune responses by inducing differentiation, proliferation, and activity of multiple target cells including macrophages, natural killer cells, B cells, and cytotoxic T cells
X
[17]
rs2069778
X
[17]
rs6311
HTR2A
Encodes a receptor for neurotransmitter serotonin
X
[18]
rs12459358
PSORS6
Encodes genetic locus associated with susceptibility to psoriasis
X
∗
[19]
rs1800629
TNF-α
Encodes a cytokine secreted by macrophages and involved in the regulation of cell proliferation, differentiation, and apoptosis, as well as in lipid metabolism and coagulation
X
[20, 21]
rs361525
X
∗
[15, 20–24]
rs3733197
BANK1
Encodes a protein involved in B cell receptor-induced calcium mobilization from intracellular stores
X
[25]
rs755622
MIF
Encodes a lymphokine involved in cell-mediated immunity, immunoregulation, and inflammation
X
X
[26]
rs6693899
IL10
Encodes a cytokine produced by monocytes and lymphocytes and involved in immunoregulation and inflammation
X
X
[27]
rs1800896
X
[28]
rs4341
ACE
Encodes an enzyme involved in catalyzing the conversion of angiotensin I into a physiologically active peptide angiotensin II
X
[29]
SNPs at positions -1540, -1512, -1451, -460, and -152
VEGFA
Encodes a protein involved in angiogenesis, vasculogenesis, endothelial cell growth, promotion of cell migration, and inhibition of apoptosis
X
[30]
SNPs at positions -386 and -404
CCHCR1
Encodes a protein that may be a regulator of keratinocyte proliferation or differentiation
X
[31]
—
CCHCR1
∗WW allele
X
[9]
SNP: single-nucleotide polymorphism; Ps: psoriasis;†Information available at NCBI (http://www.ncbi.nlm.nih.gov/gene) or GeneCards (http://www.genecards.org/); #Study performed in pediatric-onset psoriasis (patients <18 years); ∗Association found in our study.
## 2. Material and Methods
### 2.1. Experimental Design
We recruited 198 Caucasian patients with moderate-to-severe plaque type psoriasis (psoriasis area and severity index > 10) who attended the department of dermatology in four university hospitals in Madrid between 16/10/2007 and 17/12/2012. Five samples did not fulfill the quality criteria of the Human Genotyping Unit-CeGen (CEGEN, Spanish National Cancer Research Centre, Madrid, Spain), and 2 samples had insufficient volume. We also included 197 healthy volunteers (controls) recruited between 10/01/2011 and 14/12/2012 from the Clinical Pharmacology Service (Hospital Universitario de la Princesa, Madrid, Spain). All the volunteers were Caucasian and had no personal or family history of psoriasis (at least 2 generations).The protocol fulfilled Spanish law on biomedical research and was approved by the Ethics Committee for Clinical Investigation of Hospital Universitario de la Princesa. All controls and patients gave their written informed consent to donate a sample for investigation. The samples are kept in the Clinical Pharmacology Service.
### 2.2. Selection of the Polymorphisms
We preselected 320 SNPs based on an extensive review of 449 articles describing associations between polymorphisms and psoriasis, response to biological drugs in psoriasis, and related inflammatory diseases (rheumatoid arthritis, psoriatic arthritis, and Crohn's disease) [2]. We finally selected 192 SNPs based on minor allele frequency (≥0.05) and on the results of studies performed in Caucasians and psoriatic patients. Information on the 173 SNPs analyzed can be found in supplementary Table S1, which is published in [3].
### 2.3. Sample Processing
A 3-mL peripheral blood sample was extracted from each subject into EDTA tubes. DNA was obtained from the samples using an automatic DNA extractor (MagNa Pure System, Roche Applied Science, USA), and its concentration was quantified on a NanoDrop ND-1000 spectrophotometer (Wilmington, USA). The extracted DNA was stored at −80°C in the Clinical Pharmacology Service until use.
### 2.4. Genotyping
A total of 196 samples from patients (2 samples of 198 cases had insufficient volume) and 197 samples from controls were sent to the Human Genotyping Unit-CeGen to genotype 192 SNPs. The analysis was performed using the Illumina Veracode genotyping platform. If fluorescence was low or the genotype clusters were undifferentiated, the SNPs were removed. In addition, if the call rate was less than 95% of the average of the 192 SNPs analyzed, the samples were removed. Since CEGEN quality criteria were not met in 19 SNPs and 5 patients, we finally analyzed 173 SNPs in 191 patients and 197 controls.
### 2.5. Statistical Analysis
The statistical analysis compared the following stratified populations: patients with type I psoriasis (N=155) or type II psoriasis (N=36) versus controls (N=197), and patients with type I psoriasis versus patients with type II psoriasis. Hardy-Weinberg equilibrium was tested for all the SNPs analyzed using the SNPStats program [32], which was also used to calculate allele and genotype frequencies. SNPs that were not in Hardy-Weinberg equilibrium in controls were removed from the subsequent analysis [33]. The univariate analysis was performed using R 3.0.2 (SNPassoc) [34]. We constructed logistic regression models for the main types of inheritance (codominant, dominant, recessive, and additive); in the additive model, the presence of 2 mutant alleles confers double the risk of 1 mutant allele [33]. The results of the univariate analysis were adjusted for rs12191877 (an SNP that is strongly associated with the HLA-C∗0602 allele and is highly prevalent in our population) [3, 35], except when we compared patients with type I psoriasis and patients with type II psoriasis, where the influence of rs12191877 was not very relevant. The optimal model was selected using the lowest Akaike Information Criterion (AIC). Subsequently, SNPs with p<0.1 in the univariate analysis (adjusted for rs12191877) were included in a multivariate logistic regression model to adjust for relevant confounding factors (SPSS 15.0). We express the results as the odds ratio (OR), 95% confidence interval, and p value.
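To make the inheritance-model coding concrete, here is a minimal sketch (ours; the study itself used SNPStats, SNPassoc in R, and SPSS) of how a genotype, counted as copies of the minor allele, is coded under the additive, dominant, and recessive models and fed to a logistic regression. The codominant model would instead use two indicator variables, and adjustment for rs12191877 would add its coding as a covariate; the data below are simulated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
g = rng.integers(0, 3, size=300)        # copies of the minor allele: 0, 1, or 2
case = rng.integers(0, 2, size=300)     # 1 = psoriasis case, 0 = control

codings = {
    "additive":  g.astype(float),            # 0/1/2 (2 copies = double the risk of 1)
    "dominant":  (g >= 1).astype(float),     # at least one minor allele
    "recessive": (g == 2).astype(float),     # two minor alleles
}

for name, x in codings.items():
    fit = sm.Logit(case, sm.add_constant(x)).fit(disp=0)
    or_ = np.exp(fit.params[1])
    lo, hi = np.exp(fit.conf_int()[1])
    print(f"{name:9s} OR={or_:.2f} (95% CI {lo:.2f}-{hi:.2f}) p={fit.pvalues[1]:.3f}")
```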
## 3. Results
### 3.1. Study Population
The study population included 155 patients with moderate-to-severe chronic plaque type I psoriasis (92 men and 63 women), 36 patients with type II psoriasis (19 men and 17 women), and 197 controls (98 men and 99 women). The mean age was 46.01±13.11 years in patients with type I psoriasis (45.72±11.69 in men and 46.43±15.04 in women), 67.72±11.85 years in patients with type II psoriasis (65.95±11.18 in men and 69.71±12.59 in women), and 24.51±4.29 years in the controls (25.07±4.94 in men and 23.95±3.46 in women). The mean age at onset of psoriasis was 23.31±8.52 years in patients with type I psoriasis and 52.58±10.45 years in patients with type II psoriasis. Analysis of the effect of sex on our results revealed no significant association.
### 3.2. Genotyping Results
A total of 192 SNPs were analyzed (see supplementary Table S1 published in [3]), but only 173 SNPs fulfilled the quality criteria. One SNP was monomorphic (rs165161 in the JUNB gene) and was excluded from the statistical analysis. The genotyping success rate was 89.82%, and the reproducibility rate was 100%. All the SNPs were in Hardy-Weinberg equilibrium except 9 in the controls and 12 in the patients (see supplementary Table S1 published in [3]); the 9 SNPs that were not in Hardy-Weinberg equilibrium in controls were removed from the statistical analysis [33].
### 3.3. Association with Type I or Type II Psoriasis
Our findings showed an association between type I psoriasis and 10 SNPs (N=155 versus N=197 controls): rs1634517 (CCL4L), rs1975974 (C17orf51), rs12720356 (TYK2), rs1800925 (IL13), and rs6908425 (CDKAL1) decreased the risk of psoriasis 2.94-fold, 2.08-fold, 10-fold, 100-fold, and 2.44-fold, respectively, and rs2282276 (CLMN), rs10782001 (FBXL19), rs3792876 (SLC22A4), rs12191877 (HLA-C), and rs13437088 (HLA-B/MICA) increased the risk of psoriasis 3.90-fold, 2.10-fold, 3.75-fold, 30.54-fold, and 2.52-fold, respectively (Table 2). However, comparison of the 36 patients with type II psoriasis and the 197 controls revealed no significant association (results not shown).
Table 2. Results of the univariate logistic regression analysis (unadjusted and adjusted for rs12191877 in HLA-C) and the multivariate logistic regression analysis (155 patients with type I psoriasis versus 197 controls). The multivariate analysis included the SNPs with p<0.1 in the univariate analysis adjusted for HLA-C; only polymorphisms that were significant in the multivariate analysis are shown.

| SNP | Gene | Model | Risk genotype | Univariate unadjusted OR (95% CI) | p value | Adjusted for HLA-C OR (95% CI) | p value | Multivariate OR (95% CI) | p value |
|---|---|---|---|---|---|---|---|---|---|
| rs2282276 | CLMN | A | CC/CT | 1.74 (0.96–3.15) | 0.066 | 1.95 (1.04–3.65) | 0.037 | 3.90 (1.13–13.38) | 0.031 |
| rs10782001 | FBXL19 | A | GG/AG | 1.58 (1.13–2.21) | 0.007 | 1.59 (1.09–2.32) | 0.016 | 2.10 (1.05–4.17) | 0.035 |
| rs1634517 | CCL4L | D | AA/AC | 0.89 (0.58–1.36) | 0.590 | 0.64 (0.39–1.05) | 0.073 | 0.34 (0.14–0.84) | 0.019 |
| rs1975974 | C17orf51 | A | GG/AG | 0.80 (0.57–1.14) | 0.220 | 0.66 (0.44–0.99) | 0.040 | 0.48 (0.23–0.99) | 0.048 |
| rs12720356 | TYK2 | A | GG/GT | 0.42 (0.21–0.81) | 0.019 | 0.27 (0.13–0.58) | 0.0003 | 0.10 (0.03–0.39) | 0.001 |
| rs1800925 | IL13 | R | TT | 0.18 (0.02–1.45) | 0.051 | 0.17 (0.02–1.49) | 0.061 | 0.01 (0.00–0.73) | 0.034 |
| rs3792876 | SLC22A4 | A | TT/CT | 1.57 (0.89–2.76) | 0.110 | 1.87 (0.98–3.55) | 0.057 | 3.75 (1.19–11.83) | 0.024 |
| rs6908425 | CDKAL1 | A | TT/CT | 0.67 (0.47–0.97) | 0.029 | 0.58 (0.39–0.89) | 0.01 | 0.41 (0.20–0.85) | 0.017 |
| rs12191877 | HLA-C | A | TT/CT | 5.92 (3.83–9.15) | 2.50E−19 | — | — | 30.54 (10.62–87.85) | 0.000 |
| rs13437088 | HLA-B/MICA | D | TT/CT | 2.17 (1.42–3.34) | 3.00E−04 | 1.93 (1.19–3.13) | 0.007 | 2.52 (1.01–6.31) | 0.048 |
CLMN: calponin-like transmembrane gene; FBXL19: F-box and leucine-rich repeat protein 19; CCL4L: chemokine (C-C motif) ligand 4-like; C17orf51: chromosome 17 open reading frame 51; TYK2: nonreceptor tyrosine-protein kinase; IL13: interleukin 13; SLC22A4: solute carrier family 22 member 4; CDKAL1: cyclin-dependent kinase 5 regulatory subunit associated protein 1-like 1; HLA: major histocompatibility complex; MICA: major histocompatibility complex class I polypeptide-related sequence A; SNPs: single-nucleotide polymorphisms; OR: odds ratio of presenting type I psoriasis; CI: confidence interval; A: additive; R: recessive; D: dominant; —: no data.

Four SNPs were associated with significant decreases in the risk of type I psoriasis (N=155) compared with type II psoriasis (N=36), namely, rs191190 (TNFR1; 126.08-fold), rs361525 (TNF-α; 190.76-fold), and rs10499194 and rs6920220 (TNFAIP3; 155.02-fold and 19.14-fold, resp.). We also found 5 SNPs that were associated with a significant increase in the risk of type I psoriasis, namely, rs1801274 (FCGR2A; 5.26-fold), rs763361 (CD226; 33.3-fold), rs12459358 (PSORS6; 11.11-fold), rs12191877 (HLA-C; 12.5-fold), and rs1576 (CCHCR1; 166.66-fold) (Table 3).
Table 3. Results of the univariate and multivariate logistic regression analyses (155 patients with type I psoriasis versus 36 patients with type II psoriasis). SNPs with p<0.1 in the univariate analysis were included in the multivariate analysis; only polymorphisms that were significant in the multivariate analysis are shown.

| SNP | Gene | Model | Risk genotype | Univariate OR (95% CI) | p value | Multivariate OR (95% CI) | p value |
|---|---|---|---|---|---|---|---|
| rs1801274 | FCGR2A | A | CC/CT | 1.96 (1.12–3.45) | 0.016 | 5.26 (1.11–25) | 0.037 |
| rs191190 | TNFR1 | D | CC/CT | 0.43 (0.17–1.11) | 0.065 | 0.01 (1.44E−04–0.44) | 0.018 |
| rs763361 | CD226 | D | TT/CT | 2.08 (0.99–4.35) | 0.056 | 33.33 (1.11–1000) | 0.043 |
| rs12459358 | PSORS6 | A | TT/CT | 2.44 (1.32–4.55) | 0.002 | 11.11 (1.32–100) | 0.026 |
| rs10499194 | TNFAIP3 | D | TT/CT | 0.38 (0.17–0.90) | 0.02 | 0.01 (6.77E−05–0.61) | 0.030 |
| rs12191877 | HLA-C | A | TT/CT | 2.33 (1.23–4.35) | 0.006 | 12.50 (1.06–100) | 0.045 |
| rs6920220 | TNFAIP3 | A | AA/AG | 0.55 (0.30–1.03) | 0.068 | 0.05 (0.003–0.90) | 0.042 |
| rs361525 | TNF-α | C | AG | 2.17 (0.62–7.69) | 0.087 | 0.01 (5.48E−05–0.50) | 0.024 |
| rs1576 | CCHCR1 | D | GG/GC | 2.56 (1.22–5.26) | 0.012 | 166.67 (2.32–1000) | 0.019 |

FCGR2A: Fc fragment of IgG low affinity IIa receptor; TNFR1: tumor necrosis factor receptor 1; CD226: CD226 antigen; PSORS6: psoriasis susceptibility 6; TNFAIP3: tumor necrosis factor alpha-induced protein 3; HLA-C: major histocompatibility complex; TNF-α: tumor necrosis factor alpha; CCHCR1: coiled-coil alpha-helical rod protein 1; SNPs: single-nucleotide polymorphisms; OR: odds ratio of presenting type I psoriasis; CI: confidence interval; A: additive; D: dominant; C: codominant.
TT/CT
2.33 (1.23–4.35)
0.006
12.50 (1.06–100)
0.045
rs6920220
TNFAIP3
A
AA/AG
0.55 (0.30–1.03)
0.068
0.05 (0.003–0.90)
0.042
rs361525
TNF-α
C
AG
2.17 (0.62–7.69)
0.087
0.01 (5.48E − 05–0.50)
0.024
rs1576
CCHCR1
D
GG/GC
2.56 (1.22–5.26)
0.012
166.67 (2.32–1000)
0.019
FCGR2A: Fc fragment of IgG low affinity IIa receptor; TNFR1: tumor necrosis factor receptor 1; CD226: CD226 antigen; PSORS6: psoriasis susceptibility 6; TNFAIP3: tumor necrosis factor alpha-induced protein 3; HLA-C: major histocompatibility complex; TNFAIP3: tumor necrosis factor alpha-induced protein 3; TNF-α: tumor necrosis factor alpha; CCHCR1: coiled-coil alpha-helical rod protein 1; SNPs: single-nucleotide polymorphisms; OR: odds ratio of presenting type I psoriasis; CI: confidence interval; A: additive; D: dominant; C: codominant.
## 4. Discussion
About 75% of patients with chronic plaque psoriasis have type I psoriasis before age 40 [4], whereas a lower number of patients develop psoriasis at around 50–60 years [11]. Our results are consistent with these findings, since 79.06% of our patients developed psoriasis before the age of 40.

When we compared patients with type I psoriasis and controls, we found 10 significant SNPs in CLMN, FBXL19, CCL4L, C17orf51, TYK2, IL13, SLC22A4, CDKAL1, HLA-C, and HLA-B/MICA. The HLA-C∗0602 allele is a risk factor for psoriasis [35] and has been associated with both type I [6–9] and type II psoriasis [10]. In one study, 85.3% of patients with type I psoriasis had this allele [5], whereas only 14.7% of patients with type II psoriasis were carriers [5]. Other authors found an association between rs10484554 (HLA-C) and type I psoriasis compared with type II psoriasis (OR=3.24 in type I) [12]. rs10484554 has also been associated with type II psoriasis [10]. In a recent GWAS, the HLA-C gene was associated with type I psoriasis (p=2.97E-18 for rs1265181, p=2.58E-15 for rs12191877, p=1.84E-15 for rs4406273, and p=1.10E-07 for rs2395029), but not with type II psoriasis after application of the Bonferroni correction [11]. In addition, our results showed significant differences in rs12191877 (HLA-C) in patients with type I psoriasis (p=2.50E-19). However, we did not find this association in patients with type II psoriasis, probably owing to the small sample size in this group (N=36).

Munir et al. found an association between rs1295685 in the IL13 gene and type I psoriasis (p=2.47E-03) [11]. Our results showed an association between another SNP in IL13 (rs1800925) and type I psoriasis (p=0.034). In addition, Munir et al. did not obtain significant results when they compared controls with type II psoriasis or type I psoriasis with type II psoriasis [11]. Both SNPs in IL13 have been associated with predisposition to psoriasis [36, 37].

Our comparison of patients with type I psoriasis and controls is the first to obtain significant results for a series of SNPs in type I psoriasis, although these SNPs have already been associated with the risk of psoriasis: rs10782001 in FBXL19 [38], rs1975974 in C17orf51 [38], rs12720356 in TYK2 [3, 39], rs3792876 in SLC22A4 [3], rs6908425 in CDKAL1 [40], and rs13437088 in HLA-B/MICA [35] have previously been associated with psoriasis, but not with type I psoriasis. Furthermore, SNPs in CLMN (rs2282276) and CCL4L (rs1634517) have not previously been associated with psoriasis or with age at onset.

We found no significant differences between patients with type II psoriasis and controls, owing to the small sample size (N=36).

Comparison between patients with type I psoriasis and patients with type II psoriasis revealed significant associations for the following genes: FCGR2A, TNFR1, CD226, PSORS6, TNFAIP3, HLA-C, TNF-α, and CCHCR1. Polymorphisms in CCHCR1 (−386 and −404, CCHCR1∗WW allele) have been associated with type I psoriasis [9, 31]. We found a significant association between rs1576 in CCHCR1 and age at onset. In a study comparing controls (54.8%) and patients with psoriasis type II (66.0%), Allen et al. showed a significant increase in the number of patients carrying rs1576 [41]. This SNP has been associated with psoriasis elsewhere [42].

Douroudis et al. analyzed rs763361 in CD226 in patients with early-onset psoriasis and patients with late-onset psoriasis, although they found no associations [43]. We performed the same analyses and found significant differences between the groups.
In addition, rs763361 in CD226 has been associated with severity of psoriasis [43].

rs12459358 in PSORS6 has been associated with type I psoriasis (G risk allele, OR=1.47 and p=0.005) [19]. In contrast, our data showed an association between the T allele and type I psoriasis (OR=11.11; p=0.026).

rs361525 (−238) in the TNFα gene has been associated with susceptibility to psoriasis [44], and the A allele was more frequent in male patients with type I psoriasis (p=2E-07) [15, 22]. We found significant results for rs361525 (TNF-α) when we compared patients with type I psoriasis and patients with type II psoriasis, although we found no gender differences. Other authors confirmed our association with type I psoriasis in Caucasian [20, 23] and Mongolian patients [24]. A meta-analysis showed an association between rs361525 and type I psoriasis [21]. Baran et al. found no significant differences between rs1800629 in the −308 promoter (TNFα) and type I or type II psoriasis [45].

Likewise, rs12191877 in HLA-C has been associated with an increased risk of psoriasis [35]. Munir et al. [11] compared patients with type I psoriasis and patients with type II psoriasis and obtained significant results for rs1265181, rs4406273, and rs12191877 in HLA-C. We replicated these results for rs12191877 (T risk allele; p=0.045).

rs191190 in TNFR1 [46] and rs10499194 in TNFAIP3 [3] have been associated with psoriasis, but not with age of onset. Moreover, rs1801274 in FCGR2A and rs6920220 in TNFAIP3 have not been studied in patients with psoriasis according to age of onset. Given the small sample size in the group with type II psoriasis in our study, our results should be interpreted with caution.

Our results highlight the role of the immune system in psoriasis and enhance our understanding of its pathogenic mechanisms. Such knowledge can help to optimize treatment.

Our study is subject to a series of limitations. First, mean age varied between the cases and the controls. Second, the sample size was limited by the number of study patients treated in the dermatology department, thus making it difficult to detect SNPs with a low probability of causing psoriasis. Third, since the SNPs were selected based on a literature review, several major SNPs may not yet have been investigated.

In conclusion, our study confirmed the association between rs12191877 (HLA-C) and type I psoriasis, both against controls and between type I and type II psoriasis patients. Ours is the first study to show an association between CLMN, FBXL19, CCL4L, C17orf51, TYK2, IL13, SLC22A4, CDKAL1, and HLA-B/MICA and type I psoriasis. Moreover, CLMN and CCL4L have not been previously described in psoriasis. In addition, PSORS6 and TNFα have been described as more prevalent genes in type I psoriasis, and we showed a significant association when we compared type I psoriasis and type II psoriasis. Ours is the first study to identify an association between FCGR2A, TNFR1, CD226, TNFAIP3, and CCHCR1 and age at onset of psoriasis. Our results suggest that genetics could play a role in age at onset. However, further studies are needed to confirm our findings.
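Because the discussion leans on multivariate ORs and on caution about multiple comparisons, a hedged sketch of such an analysis may be useful. The data below are simulated and the effect sizes invented; this does not reproduce the study's models.

```r
# Hedged sketch with simulated data (not the study dataset): a multivariate
# logistic model of the kind behind the adjusted ORs discussed above, plus a
# Bonferroni correction of the per-SNP p values.
set.seed(1)
n     <- 191                              # 155 type I + 36 type II patients
snp1  <- rbinom(n, 1, 0.4)                # carrier status, dominant coding
snp2  <- rbinom(n, 1, 0.3)
type1 <- rbinom(n, 1, plogis(-0.3 + 0.9 * snp1 - 0.7 * snp2))  # simulated outcome

fit <- glm(type1 ~ snp1 + snp2, family = binomial)
exp(cbind(OR = coef(fit), confint.default(fit)))[-1, ]      # adjusted ORs, 95% CIs
p.adjust(coef(summary(fit))[-1, "Pr(>|z|)"], "bonferroni")  # corrected p values
```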
---
*Source: 101879-2015-11-03.xml* | 101879-2015-11-03_101879-2015-11-03.md | 36,171 | Polymorphisms Associated with Age at Onset in Patients with | Rocío Prieto-Pérez; Guillermo Solano-López; Teresa Cabaleiro; Manuel Román; Dolores Ochoa; María Talegón; Ofelia Baniandrés; José Luis López-Estebaranz; Pablo de la Cueva; Esteban Daudén; Francisco Abad-Santos | Journal of Immunology Research
(2015) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2015/101879 | 101879-2015-11-03.xml | ---
*Source: 101879-2015-11-03.xml* | 2015 |
# Influence of Vertical Facial Growth Pattern on Herbst Appliance Effects in Prepubertal Patients: A Retrospective Controlled Study
**Authors:** Maria Rita Giuca; Marco Pasini; Sara Drago; Leonardo Del Corso; Arianna Vanni; Elisabetta Carli; Antonio Manni
**Journal:** International Journal of Dentistry
(2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1018793
---
## Abstract
Introduction. The Herbst device is widely used for the correction of class II malocclusions; however, most of the research on the Herbst appliance in the literature does not take into account patients with a different mandibular divergence. The aim of this study was to investigate the effects of the Herbst appliance on dental and skeletal structures and to evaluate the possible influence of the vertical facial growth pattern. Methods. A retrospective study was conducted on lateral cephalograms of 75 growing patients (mean age: 9.9 ± 1.9 years) with class II malocclusion treated with the Herbst appliance. Subjects were divided into 3 groups using the mandibular divergence index (the angle between SN and GoMe). Cephalometric parameters were evaluated using the modified SO (sagittal occlusion) Pancherz analysis. A statistical analysis was conducted to evaluate differences among groups using ANOVA. Results. Our study showed differences in response to treatment depending on the patient's vertical facial growth pattern. Cranial base angle and mandibular rotation were significantly different (p<0.05) between hypodivergent and normodivergent patients and between hypodivergent and hyperdivergent subjects. Conclusion. Hypodivergent patients increased their mandibular divergence during treatment to a greater extent than normodivergent patients; moreover, hyperdivergent patients exhibited a decreased mandibular divergence at the end of treatment.
---
## Body
## 1. Introduction
Bilateral class II malocclusion represents one of the main orthodontic problems affecting the world population: it has been observed that this condition affects 27.2% of English adolescents [1], 36.3% of Italian adolescents [2], about 15% of the total United States population [3], and 27.0% of Chinese children [4]. This sagittal malocclusion can be skeletal, dental, or combined; in the great majority of cases (about 75%), the skeletal component is affected [5].

The Herbst device is one of the most common appliances for the treatment of skeletal and dental class II. It consists of a piston and a tube anchored to orthodontic bands (or to splints or to cobalt/chrome fusions), which keeps the jaw in a protracted position 24 hours a day [6] through a bilateral telescopic mechanism. Its advantages include high treatment speed (average treatment time 6–8 months), a reduced need for patient compliance, and effectiveness on both the dental and the skeletal component [7].

The effects on the dental component include a distalisation of the upper dental arch and a mesialisation of the lower dental arch [8], while the effects on the skeletal component include a decreased growth of the maxilla [9] and a stimulation of mandibular growth, with an increase in average length at the end of treatment greater than 2-3 mm [10]. The mechanism permits vertical opening movements and affects the vertical tooth position [11], and the skeletal effect is most pronounced during puberty rather than before [12].

The main disadvantage of the Herbst appliance is a proclination of the lower incisors due to anchorage loss, in amounts that vary with the type of Herbst used [13]; various modifications of the original appliance have been proposed, but none has been able to completely prevent proclination of the mandibular incisors [13–15]. Most of the studies carried out on the Herbst appliance do not take into account patients with a different mandibular divergence [15], which affects chin position [16], the direction of condylar growth [17], and the shape of the jaw [18]. Variation in mandibular divergence with other orthodontic appliances for class II malocclusions has been investigated in a recent systematic review [19].

The aim of this study was to investigate the effect of the Herbst appliance at the dental and skeletal levels and to evaluate the differences between patients with different vertical growth patterns.
## 2. Materials and Methods
A retrospective study was conducted on lateral cephalograms of consecutive patients previously treated in a private office (Lecce, Italy) within the past 5 years, from January 2014 to January 2019.

A sample size calculation was performed; the estimate of the standard deviation was based on data from another 10 subjects followed in a preliminary study, with mandibular divergence as the primary outcome. In order to compare two means with a power of 80%, a size of the test of 5%, a standard deviation of 1.5, and a difference of 1.2, the required sample size was 25 patients in each group (a sketch of this calculation is given below).

A total of 75 lateral cephalograms of patients with skeletal class II treated with the Herbst appliance (35 males and 40 females; average age at the start of treatment 9.9 ± 1.9 years; average Herbst treatment duration 9.7 ± 1.6 months) were included in this study (test group). The test group was compared with a control group of 75 untreated subjects, obtained from the University of Michigan Growth Study Center, the Bolton-Brush Growth Study Center, the University of Toronto Burlington Growth Study, the University of Oklahoma Denver Growth Study, the Oregon Growth Study, the Iowa Facial Growth Study, and the UOP Mathews Growth Study, matched for similar vertical relationships, sex, and skeletal age.

All procedures were conducted according to the principles expressed in the Declaration of Helsinki (1964), and written consent to participate in the study (signed by parents or legal guardians) was obtained at the beginning of the orthodontic treatment.

The inclusion criteria were as follows: lateral cephalograms taken before and after Herbst treatment, presence of permanent dentition or late mixed dentition, presence of bilateral Angle class II division 1 malocclusion, and presence of mandibular deficiency with a normal upper jaw. The exclusion criteria were as follows: serious skeletal malformations, systemic disease, drug therapy that may cause skeletal abnormalities, and agenesis and/or premature loss of permanent teeth.

Lateral cephalograms were divided into 3 groups using the mandibular divergence index, measured on the lateral cephalogram at the beginning of treatment as the angle between the lines SN (Sella-Nasion) and GoMe (Gonion-Menton). All subjects with SN^GoMe values less than or equal to 26.5° were assigned to the hypodivergent group, subjects with SN^GoMe values between 26.5° and 36.5° to the normodivergent group, and subjects with SN^GoMe values greater than or equal to 36.5° to the hyperdivergent group.

The test group therefore consisted of three subgroups. Group 1 included 25 hypodivergent subjects (12 males and 13 females) with an average age at the start of treatment of 10.6 ± 2.0 years and a mean duration of Herbst treatment of 9.6 ± 1.9 months. Group 2 included 25 normodivergent subjects (11 males and 14 females) with an average age at the beginning of treatment of 9.8 ± 1.9 years and a mean duration of orthodontic treatment of 9.5 ± 1.7 months. Group 3 included 25 hyperdivergent subjects (12 males and 13 females) with a mean age at the start of treatment of 9.4 ± 1.8 years and an average duration of treatment of 9.9 ± 1.3 months.
Each test subgroup was compared with one of three control groups of 25 lateral cephalograms each, matched with the test subgroup for similar SN^GoMe value, sex, and skeletal age, the latter assessed with cervical vertebral maturation staging [20].
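The reported requirement of 25 patients per group can be checked with a short calculation. This is a sketch under the stated assumptions (power 80%, two-sided alpha 5%, SD 1.5, detectable difference 1.2); the authors do not say which formula or software they used.

```r
# Normal-approximation sample size for comparing two means:
# n per group = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * (sd / delta)^2
n <- 2 * (qnorm(0.975) + qnorm(0.80))^2 * (1.5 / 1.2)^2
ceiling(n)   # 25 subjects per group, matching the 25 patients recruited per group

# The exact t-test version gives a very similar figure:
power.t.test(delta = 1.2, sd = 1.5, sig.level = 0.05, power = 0.80)
```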
### 2.1. Cephalometric Parameters
The investigation of the Herbst appliance effects at the dental and skeletal levels was performed on lateral cephalograms using the modified SO (sagittal occlusion) Pancherz cephalometric analysis. This analysis was carried out by transferring the occlusal line (OL) and the occlusal line perpendicular (OLp) through Sella from the pretreatment lateral cephalogram to the posttreatment lateral cephalogram, superimposing the stable skeletal structures of the anterior cranial base. The modified SO Pancherz analysis includes the following parameters, which are not considered in the traditional SO Pancherz analysis: skeletal divergence, skeletal class, and lower incisor inclination (Figure 1).

Figure 1. Modified SO Pancherz analysis. Reference points and lines: Sella (S), Nasion (N), subnasal (A), pogonion (Pg), Gonion (Go), Menton (Me), articular (Ar), anterior nasal spine (ANS), maxillary incisal (Is), mandibular incisal (Ii), lower incisal apex (API), posterior occlusal (OCLP), maxillary molar (Ms), mandibular molar (Mi), occlusal line (OL), and occlusal line perpendicular (OLP).

Cephalograms were taken with the teeth in centric occlusion, with relaxed lips, and with the head oriented parallel to the floor according to the Frankfurt plane. For each patient of the test group, two lateral cephalograms were included: pretreatment (T1) and posttreatment (T2). Cephalometric analysis was performed by a single operator using Delta-Dent® software (Orthopiù SRL).
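To make the OL/OLp measurements concrete, here is an illustrative sketch of the underlying geometry. The landmark coordinates and the occlusal-line inclination are invented, and this is not the Delta-Dent implementation.

```r
# Landmarks digitised in mm, with Sella taken as the origin of the grid.
S  <- c(0, 0)                                    # Sella
ol <- c(cos(-8 * pi / 180), sin(-8 * pi / 180))  # assumed occlusal line (OL) direction

# A landmark's sagittal position is its distance to OLp (the perpendicular to
# OL through Sella), measured parallel to OL: the projection of P - S onto OL.
sagittal <- function(P) sum((P - S) * ol)

A  <- c(69.2, -35.0)   # hypothetical point A
Pg <- c(70.4, -78.5)   # hypothetical pogonion
sagittal(A)            # corresponds to the A/Olp measurement
sagittal(Pg)           # corresponds to Pg/Olp; T2 - T1 differences give the SO changes
```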
### 2.2. Statistical Analysis
All linear and angular measurements were approximated to the nearest 0.1 mm and 0.1°, respectively. Dahlberg's formula was adopted after measuring each lateral cephalogram twice, with 14 days between measurements; the method error was less than 0.5 mm and 1 degree (intraoperator reliability).

A blinded statistical analysis was performed. Data were checked for normality using the Shapiro–Wilk test. Continuous variables are given as means and standard deviations (SD), whereas categorical variables are given as numbers and/or percentages of subjects. The thirteen cephalometric parameters were considered the primary outcome measurements. Baseline differences in the outcomes among treatment groups were tested by one-way ANOVA. To investigate the associations of the outcome parameters with the divergence groups, the one-way ANOVA was performed again on the after-minus-before differences for each group. A paired t-test was performed to assess intragroup differences. Subsequently, an independent-samples t-test was adopted to evaluate the differences between each group and its controls.

The estimated p values were adjusted for multiple comparisons by the Bonferroni correction method, and differences with an adjusted p value below 0.05 were considered significant. Data were acquired and analysed in the R v3.4.4 software environment.
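The steps above can be summarized in a short R sketch on simulated data; the values, effect sizes, and example p values are invented, and this is not the authors' script.

```r
# Condensed sketch of the pipeline: normality check, ANOVA on the T2 - T1
# changes across divergence groups, paired t-test within a group, Bonferroni.
set.seed(42)
group  <- factor(rep(c("hypo", "normo", "hyper"), each = 25),
                 levels = c("hypo", "normo", "hyper"))
before <- rnorm(75, mean = 32, sd = 4)                                    # T1
after  <- before + rep(c(1.5, 0.5, -1.0), each = 25) + rnorm(75, 0, 1.5)  # T2
change <- after - before                              # after-before differences

shapiro.test(change)                                  # normality check
summary(aov(change ~ group))                          # one-way ANOVA across groups
t.test(after[group == "hypo"], before[group == "hypo"],
       paired = TRUE)                                 # intragroup paired t-test
p.adjust(c(0.004, 0.030, 0.210), method = "bonferroni")  # correcting a set of
                                                         # outcome p values (invented)
```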
## 3. Results
No significant differences between groups were detected at baseline except for SN^GoMe, lower incisor axis inclination, AN^NPg, and skeletal discrepancy (p<0.001, Table 1).

Table 1: Baseline characteristics in the whole test population (N=75).

| Outcome variable | Total mean (SD) | Hypodivergent mean (SD) | Normodivergent mean (SD) | Hyperdivergent mean (SD) | p value | Pairwise comparisons |
| --- | --- | --- | --- | --- | --- | --- |
| A/Olp | 69.91 (4.47) | 69.68 (5.25) | 70.38 (4.46) | 69.67 (3.71) | 0.817 | |
| Pg/Olp | 69.97 (5.87) | 70.89 (6.38) | 70.13 (6.19) | 68.88 (5.02) | 0.481 | |
| Is/Olp | 77.60 (4.86) | 76.94 (6.06) | 77.79 (4.43) | 78.08 (3.95) | 0.693 | |
| Ii/Olp | 70.29 (5.42) | 70.32 (6.23) | 70.19 (5.27) | 70.36 (4.90) | 0.993 | |
| Ms/Olp | 36.53 (4.54) | 36.78 (5.14) | 36.68 (4.79) | 36.13 (3.74) | 0.864 | |
| Mi/Olp | 34.84 (5.39) | 34.84 (6.35) | 34.78 (5.12) | 34.92 (4.80) | 0.996 | |
| SN^GoMe | 32.67 (5.97) | 25.68 (2.31) | 33.09 (2.02) | 39.24 (2.15) | <0.001 | hypo vs. hyper: <0.001; hypo vs. normo: <0.001; hyper vs. normo: <0.001 |
| Lower incisor axis inclination | 100.26 (6.57) | 104.62 (6.56) | 99.54 (4.46) | 96.62 (6.00) | <0.001 | hypo vs. hyper: <0.001; hypo vs. normo: <0.001; hyper vs. normo: 0.2315 |
| AN^NPg | 4.78 (2.30) | 3.35 (2.01) | 5.40 (1.93) | 5.59 (2.32) | <0.001 | hypo vs. hyper: <0.001; hypo vs. normo: <0.001; hyper vs. normo: 1.00 |
| Skeletal discrepancy | −0.07 (2.68) | −1.24 (2.67) | 0.24 (2.78) | 0.79 (2.21) | 0.019 | hypo vs. hyper: 0.02; hypo vs. normo: 0.13; hyper vs. normo: 1.00 |
| Overjet | 7.32 (2.47) | 6.70 (2.51) | 7.55 (2.64) | 7.72 (2.23) | 0.298 | |
| Molar relation | 1.69 (1.92) | 1.96 (2.14) | 1.90 (1.65) | 1.21 (1.93) | 0.312 | |

Results are expressed as mean (standard deviation); p value: one-way ANOVA; pairwise-comparison p values adjusted by the Bonferroni method.

Table 2 shows, for each treatment group, the changes over time in all measurements; the ANOVA detected a significant difference over time among groups for the parameters Ii/Olp and SN^GoMe.
Table 2: One-way ANOVA results for differences over time (T2 − T1) between groups.

| Outcome variable | Total mean (SD) | Hypodivergent mean (SD) | Normodivergent mean (SD) | Hyperdivergent mean (SD) | p value | Pairwise comparisons |
| --- | --- | --- | --- | --- | --- | --- |
| A/Olp | 0.18 (1.58) | 0.35 (1.92) | −0.23 (1.32) | 0.40 (1.42) | 0.296 | |
| Pg/Olp | 2.33 (2.19) | 2.11 (2.31) | 2.41 (2.02) | 2.46 (2.31) | 0.837 | |
| Is/Olp | −0.43 (2.04) | 0.17 (2.42) | −0.64 (1.80) | −0.82 (1.78) | 0.190 | |
| Ii/Olp | 0.36 (3.57) | −2.72 (2.74) | 4.00 (1.91) | −0.19 (2.03) | <0.001 | hypo vs. hyper: <0.001; hypo vs. normo: <0.001; hyper vs. normo: <0.001 |
| Ms/Olp | −1.36 (2.20) | −1.61 (2.41) | −1.44 (2.53) | −1.03 (1.61) | 0.640 | |
| Mi/Olp | 4.07 (2.52) | 3.56 (2.91) | 4.36 (2.42) | 4.28 (2.20) | 0.473 | |
| SN^GoMe | 0.28 (2.77) | 1.66 (2.53) | −0.01 (2.33) | −0.81 (2.91) | 0.004 | hypo vs. hyper: 0.004; hypo vs. normo: 0.077; hyper vs. normo: 0.842 |
| Lower incisor axis inclination | 5.58 (4.77) | 4.51 (3.84) | 5.47 (5.52) | 6.77 (4.72) | 0.246 | |
| AN^NPg | −1.49 (1.72) | −1.20 (1.27) | −2.05 (2.16) | −1.24 (1.52) | 0.142 | |
| Skeletal discrepancy | −2.52 (2.28) | −3.08 (2.66) | −2.43 (2.09) | −2.06 (1.99) | 0.281 | |
| Overjet | −4.38 (2.21) | −3.62 (2.07) | −4.64 (2.19) | −4.87 (2.26) | 0.102 | |
| Molar relation | −5.02 (5.54) | −5.53 (3.22) | −4.21 (8.76) | −5.31 (2.50) | 0.672 | |
Pairwise comparisons: p values adjusted by the Bonferroni method. The values given are the after-before (T2 − T1) differences for each group.

Significant intragroup variations from T1 to T2 in the total sample and in the three test subgroups are summarized in Table 3.

Table 3: Intragroup p values (test group, T2 − T1).

| Parameter | Total sample (n=75) | Hypodivergent (n=25) | Normodivergent (n=25) | Hyperdivergent (n=25) |
| --- | --- | --- | --- | --- |
| A/Olp | 0.90 | 0.81 | 0.88 | 0.85 |
| Pg/Olp | 0.02* | 0.50 | 0.14 | 0.11 |
| Is/Olp | 0.45 | 0.74 | 0.76 | 0.50 |
| Ii/Olp | 0.01* | 0.10 | 0.01* | 0.01* |
| Ms/Olp | 0.07 | 0.20 | 0.30 | 0.38 |
| Mi/Olp | 0.01* | 0.07 | 0.01* | 0.01* |
| SN^GoMe | 0.97 | 0.05 | 0.93 | 0.37 |
| Lower incisor axis inclination | 0.01* | 0.04* | 0.01* | 0.01* |
| AN^NPg | 0.01* | 0.06 | 0.01* | 0.07 |
| Skeletal discrepancy | 0.01* | 0.04* | 0.01* | 0.01* |
| Overjet | 0.01* | 0.01* | 0.01* | 0.01* |
| Molar relation | 0.01* | 0.01* | 0.01* | 0.01* |

*p < 0.05.

In the total sample, Herbst therapy produced a slight retreat of the upper maxilla, but the difference at the end of treatment was not significant. On the contrary, a significant advancement of the lower jaw, with a reduction of skeletal discrepancy and an improvement of the ANPg angle, was found after the Herbst treatment (p<0.05). Moreover, orthodontic treatment resulted in a slight, nonsignificant retreat of the upper central incisor (p>0.05), a marked advancement of the lower central incisor (p<0.05), and a marked reduction of overjet and molar relation (p<0.05). In the total sample, a loss of dental anchorage with an increased lower incisor inclination at the end of the treatment (p<0.05) and a mean increase in the cranial base-mandible angle (SN^GoMe) were observed.

Cephalometric changes (T2 − T1) in the three subgroups are reported in Table 4 (hypodivergent vs. controls), Table 5 (normodivergent vs. controls), and Table 6 (hyperdivergent vs. controls).
Table 4: Hypodivergent patients versus controls, mean difference between posttreatment (T2) and pretreatment (T1).

| Parameter | Hypodivergent test mean (SD) | Hypodivergent control mean (SD) | p value |
| --- | --- | --- | --- |
| A/Olp | 0.35 ± 1.92 | 0.1 ± 0.3 | 0.5 |
| Pg/Olp | 2.11 ± 2.31 | 0.2 ± 0.5 | <0.001 |
| Is/Olp | 0.17 ± 2.42 | 0.1 ± 0.4 | 0.89 |
| Ii/Olp | −2.72 ± 2.74 | 0.2 ± 0.5 | <0.001 |
| Ms/Olp | −1.61 ± 2.41 | 0.2 ± 0.4 | <0.001 |
| Mi/Olp | 3.56 ± 2.91 | 0.2 ± 0.3 | <0.001 |
| SN^GoMe | 1.66 ± 2.53 | −0.5 ± 0.9 | <0.001 |
| LII | 4.51 ± 3.84 | 0.1 ± 0.3 | <0.001 |
| AN^NPg | −1.20 ± 1.27 | 0.2 ± 0.3 | <0.001 |
| Skeletal discrepancy | −3.08 ± 2.66 | 0.1 ± 0.2 | <0.001 |
| Overjet | −3.62 ± 2.07 | 0.1 ± 0.3 | <0.001 |
| Molar relation | −5.53 ± 3.22 | 0.1 ± 0.4 | <0.001 |

LII: lower incisor inclination.
Table 5: Normodivergent patients versus controls, mean difference between posttreatment (T2) and pretreatment (T1).

| Parameter | Normodivergent test mean (SD) | Normodivergent control mean (SD) | p value |
| --- | --- | --- | --- |
| A/Olp | −0.23 ± 1.32 | 0.2 ± 0.5 | 0.13 |
| Pg/Olp | 2.41 ± 2.02 | 0.4 ± 0.7 | <0.001 |
| Is/Olp | −0.64 ± 1.80 | 0.3 ± 0.4 | 0.01 |
| Ii/Olp | 4.00 ± 1.91 | 0.2 ± 0.4 | <0.001 |
| Ms/Olp | −1.44 ± 2.53 | 0.3 ± 0.6 | <0.001 |
| Mi/Olp | 4.36 ± 2.42 | 0.2 ± 0.5 | <0.001 |
| SN^GoMe | −0.01 ± 2.33 | 0.2 ± 0.4 | 0.66 |
| LII | 5.47 ± 5.52 | 0.2 ± 0.2 | <0.001 |
| AN^NPg | −2.05 ± 2.16 | 0.1 ± 0.2 | <0.001 |
| Skeletal discrepancy | −2.43 ± 2.09 | 0.2 ± 0.3 | <0.001 |
| Overjet | −4.64 ± 2.19 | 0.1 ± 0.4 | <0.001 |
| Molar relation | −4.21 ± 8.76 | 0.1 ± 0.3 | <0.001 |
Table 6: Hyperdivergent patients versus controls, mean difference between posttreatment (T2) and pretreatment (T1).

| Parameter | Hyperdivergent test mean (SD) | Hyperdivergent control mean (SD) | p value |
| --- | --- | --- | --- |
| A/Olp | 0.40 ± 1.42 | 0.1 ± 0.9 | 0.38 |
| Pg/Olp | 2.46 ± 2.31 | −0.2 ± 1 | <0.001 |
| Is/Olp | −0.82 ± 1.78 | 0.1 ± 0.8 | 0.02 |
| Ii/Olp | −0.19 ± 2.03 | 0.1 ± 0.9 | 0.51 |
| Ms/Olp | −1.03 ± 1.61 | 0.2 ± 0.7 | 0.001 |
| Mi/Olp | 4.28 ± 2.20 | 0.2 ± 0.8 | <0.001 |
| SN^GoMe | −0.81 ± 2.91 | 0.5 ± 0.8 | 0.03 |
| LII | 6.77 ± 4.72 | 0 ± 0.1 | <0.001 |
| AN^NPg | −1.24 ± 1.52 | 0.2 ± 0.6 | <0.001 |
| Skeletal discrepancy | −2.06 ± 1.99 | 0.1 ± 0.5 | <0.001 |
| Overjet | −4.87 ± 2.26 | 0.1 ± 0.4 | <0.001 |
| Molar relation | −5.31 ± 2.50 | 0.1 ± 0.3 | <0.001 |

Hypodivergent patients showed an increased mandibular divergence at the end of the therapy in comparison to the control group (p<0.05), normodivergent subjects did not show significant changes in divergence in comparison to the controls (p>0.05), and hyperdivergent patients showed a decrease in the SN^GoMe angle in comparison to the control group (p<0.05).
## 4. Discussion
Based on the results obtained in this study, it is possible to conclude that the Herbst treatment was effective for the resolution of class II malocclusion in all groups. In fact, correction of the sagittal dental class was obtained in all treated patients, with a decrease in overjet, skeletal class angle, skeletal discrepancy, and molar relation. These results were obtained in all patients through a distalisation of the upper arch and a mesialisation of the lower arch, and they are consistent with those of previous studies [8, 10, 17, 21–23].

A slight high-pull headgear effect on the maxillary complex was found in the total sample, while a significant advancement of the mandible was observed (hypodivergent patients exhibited a slightly smaller mandibular advancement in comparison to the normodivergent and hyperdivergent groups); these results are in accordance with previous studies, and Pancherz and Anehus-Pancherz found that the sagittal maxillary jaw base position seemed unaffected by therapy [24]. An increase in the inclination of the lower incisor with respect to the mandibular base was recorded in all groups; the greatest mandibular incisor anchorage loss was observed in the hyperdivergent group, while the hypodivergent group exhibited the smallest. However, no significant differences in mandibular advancement were found among groups.

Normodivergent patients did not show changes in divergence; hypodivergent patients slightly increased their mandibular divergence during orthodontic treatment, while hyperdivergent patients showed a slight decrease in mandibular divergence. In the literature, a significant alteration of mandibular divergence at the end of Herbst treatment was found in a limited number of studies [25–27], while other previous studies showed a significant change of SN^GoMe at the end of orthodontic therapy [28]. It was reported that, after Herbst treatment, the upper molars moved mesially, the occlusal plane slightly closed, and the palatal plane tipped downward [24]. Ruf and Pancherz stated that the mandibular plane angle was only slightly affected by Herbst appliance treatment and that, at the end of the orthodontic therapy, a continuous decrease in the mandibular plane angle was found [25].

In a previous study, a significant difference in the cranial base-mandibular angle was found between hypodivergent, normodivergent, and hyperdivergent patients [28]; the results showed that hypodivergent subjects tend to decrease this angle, while hyperdivergent subjects tend to increase it. In fact, these authors observed that hypo- and hyperdivergent patients benefit from the Herbst headgear effect in the upper maxilla, while hyperdivergent patients exhibited a deleterious backward mandibular rotation. A possible explanation could be that a cantilever Herbst appliance with full-coverage stainless steel crowns on the upper and lower first molars was used by Rogers et al. [28], while in the present study a total acrylic splint extending from the first lower molar to the contralateral first molar was used to reinforce the anchorage. Furthermore, another possible explanation for the rotational differences between subjects with different vertical growth patterns could be orofacial muscle function, as patients with weak jaw musculature could exhibit a backward mandibular rotation.

Further studies conducted on a larger number of lateral cephalograms will be necessary to confirm the results of the present study.
## 5. Conclusion
Our study showed differences in response to treatment with the Herbst appliance depending on the patient's vertical growth pattern. In particular, the changes in Ii/Olp over time were significantly different among groups (p<0.001). Moreover, the results showed that hypodivergent patients increased their mandibular divergence during treatment, normodivergent patients showed only very slight, nonsignificant changes in mandibular divergence, and hyperdivergent patients exhibited a decrease in mandibular divergence at the end of the Herbst treatment, with a significant difference among groups (p<0.05).
---
*Source: 1018793-2020-01-11.xml*
# Insulin Resistance Predicts Atherogenic Lipoprotein Profile in Nondiabetic Subjects
**Authors:** Flávia De C. Cartolano; Gabriela D. Dias; Maria C. P. de Freitas; Antônio M. Figueiredo Neto; Nágila R. T. Damasceno
**Journal:** Journal of Diabetes Research
(2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1018796
---
## Abstract
Background. Atherogenic diabetes is associated with increased cardiovascular risk and mortality in diabetic individuals; however, the impact of insulin resistance (IR) on lipid metabolism at preclinical stages is generally underreported. We therefore evaluated the capacity of IR to predict an atherogenic lipid subfraction profile. Methods. Complete clinical evaluation and biochemical analyses (lipid and glucose profile, LDL and HDL subfractions, and LDL phenotype and size) were performed in 181 patients. The impact of IR as a predictor of atherogenic lipoproteins was tested by logistic regression analysis in raw and adjusted models. Results. HDL-C and Apo AI were significantly lower in individuals with IR. Individuals with IR had a higher percentage of small HDL particles, a lower percentage of the larger ones, and a reduced frequency of phenotype A (IR = 62%; non-IR = 83%). IR individuals had a reduced probability of having large HDL (OR = 0.213; CI = 0.099–0.457) and more than twice the odds of increased small HDL (OR = 2.486; CI = 1.115–5.543). IR was a significant predictor of small LDL (OR = 3.075; CI = 1.341–7.051) and of the atherogenic phenotype (OR = 3.176; CI = 1.469–6.867). Conclusion. IR, prior to a DM2 diagnosis, is a strong predictor of quantitative and qualitative features of lipoproteins directly associated with an increased atherogenic risk.
---
## Body
## 1. Introduction
The negative impact of diabetes on cardiovascular risk factors, atherogenesis, and cardiovascular events is well established in the literature; however, the role of insulin resistance (IR) prior to the diagnosis of type 2 diabetes mellitus (DM2) is not totally clear. Clinically, IR is defined as the inability of insulin-dependent tissues to take up and utilize glucose, together with reduced insulin sensitivity, and it is the basis of DM2 [1–3]. Almost 415 million people around the world suffer from DM2, recent data estimate that 318 million adults have impaired glucose tolerance, and IR is prevalent in 20 to 30% of the general population and in 90% of patients with DM2 [4]. In 2014, 11.9 million Brazilians lived with DM2 and, by 2035, it is estimated that this prevalence will increase to 19.2 million [5].

IR is linked with hypertriglyceridemia and reduced high-density lipoprotein cholesterol (HDL-C) [6–8]; however, its relationship with low-density lipoprotein cholesterol (LDL-C) is contradictory [3]. These findings can be explained by the compensatory hyperinsulinemia due to IR, which induces an increased free fatty acid (FFA) efflux from adipose tissue, thus raising VLDL production in the liver and, consequently, plasma triacylglycerol (TAG), and also reducing HDL-C through activation of cholesterol ester transfer protein (CETP) and increased clearance by the kidneys [9, 10].

Recently, Li et al. [10] described the complex bidirectional relationship between lipoprotein homeostasis and IR. HDL acts on both IR and β-cell function, improving insulin secretion, increasing insulin sensitivity in the target tissues (adipose and muscle cells), and promoting positive effects on β-cell survival. This relation was confirmed by the association of qualitative and quantitative lipoprotein parameters with DM2 [11, 12]. Up to now, few studies have analyzed lipoprotein subfractions in IR individuals before the development of DM2 [13, 14].

Against this background, the aim of this study was to evaluate the impact of IR on lipid metabolism and to assess whether IR is a predictor of an atherogenic lipoprotein profile in Brazilian individuals with IR and without DM2.
## 2. Methods
### 2.1. Subjects and Study Design
One hundred eighty-one adults of both genders were selected. Individuals were recruited from the University Hospital of the University of Sao Paulo. Subjects included in the study were 30 to 74 years old, without cardiovascular events (assessed by ECG and clinical evaluation) and without a diagnosis of DM1 or DM2. The presence of DM was first evaluated in a direct interview using a structured questionnaire in which a medical diagnosis of DM2 and the current use of insulin and/or hypoglycemic drugs were self-reported by the individuals. Afterwards, fasting glucose and insulin were analyzed to confirm the DM2 diagnosis. If the fasting glucose level was close to the cut-off point (≥7.0 mmol/L), a second analysis was performed to confirm DM2, as recommended by the Brazilian Diabetes Society [15]. Pregnant or lactating women, individuals participating in other studies, illicit drug users, and alcoholics were not enrolled in this protocol. This study was approved by the Research Ethics Committee of the University Hospital (n° 1126/11) and of the School of Public Health, University of Sao Paulo (n° 2264). All subjects gave their written informed consent to participate and to have their data published.
### 2.2. Demographic, Clinical, and Anthropometric Features
The demographic and clinical profile was evaluated using a structured questionnaire addressing gender, age, clinical information, family history of chronic diseases (father and mother), smoking status, blood pressure, and regular medication use. From the weight and height measurements, body mass index (BMI) was calculated as weight (kg) divided by the square of the standing height (m²). Waist circumference (WC) and body composition, assessed by bioelectrical impedance analysis (BIA) (Analyzer® model Quantum II, RJL Systems, Michigan, USA), were also evaluated.
### 2.3. Biochemical Analysis
After 12 h of fasting, blood samples were collected in vacutainer tubes containing ethylenediaminetetraacetic acid (EDTA) (1.0 μg/mL). The protease inhibitors aprotinin (10.0 μg/mL), benzamidine (10.0 μM), and phenylmethylsulfonyl fluoride (PMSF) (5.0 μM), plus the antioxidant butylated hydroxytoluene (BHT) (100.0 μM), were added to the samples. Plasma and serum were separated by centrifugation at 3000 rpm for 10 minutes at 4°C, and samples were kept frozen at −80°C until analysis.

Plasma triacylglycerol (TAG), total cholesterol (TC), and HDL-C levels were measured using commercial kits from Labtest® (Minas Gerais, Brazil). The low-density lipoprotein cholesterol (LDL-C) level was calculated using the Friedewald equation [16]. Non-HDL cholesterol was calculated as TC minus HDL-C. Apolipoprotein B (APO B) and apolipoprotein AI (APO AI) were determined by standard methods, using the autokit APO A1 and APO B® (Wako Chemicals USA Inc., Richmond, VA, USA). The glucose level was analyzed with an enzymatic colorimetric kit (Glucose PAP Liquiform®, Labtest, Minas Gerais, Brazil). Insulin was measured with a commercial kit, the Insulin Human Direct ELISA Kit® (Life Technologies, Grand Island, NY). Insulin resistance was estimated with the homeostasis model assessment of insulin resistance (HOMA-IR), calculated as HOMA-IR = [fasting insulin (μU/mL) × fasting glucose (mmol/L)] / 22.5 [17]. IR classification was performed according to Stern et al. [18], which takes into account the HOMA-IR and BMI values: HOMA-IR > 4.65, or BMI > 28.90 kg/m², or HOMA-IR > 3.60 and BMI > 27.50 kg/m². Based on these criteria, individuals were divided into the IR group and the non-IR group.

The lipoprotein fractions (VLDL and IDL) and subfractions (HDL and LDL) were determined by the Lipoprint® system. The LDL1 and LDL2 subfractions were classified as LDLLARGE, and the LDL3 to LDL7 subfractions as smaller and denser particles (LDLSMALL). For HDL, ten subfractions were identified: HDLLARGE (HDL1 to HDL3), HDLINTERMEDIATE (HDL4 to HDL7), and HDLSMALL (HDL8 to HDL10). The LDL phenotypes were based on cut-off points for the mean LDL size (phenotype A ≥ 268 Å and phenotype non-A < 268 Å). All analyses were conducted in duplicate, and the intra- and interassay coefficients of variation were 1–15%.
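The formulas used in this section are compact enough to restate directly. The sketch below is ours and illustrative only (the study's analyses were not scripted in Python); it implements the Friedewald estimate for mmol/L units, the HOMA-IR index, and the Stern et al. [18] classification rule exactly as written above.

```python
def friedewald_ldl(tc: float, hdl_c: float, tag: float) -> float:
    """Friedewald estimate with all lipids in mmol/L:
    LDL-C = TC - HDL-C - TAG / 2.2 (the divisor is 5 for mg/dL)."""
    return tc - hdl_c - tag / 2.2

def homa_ir(insulin: float, glucose: float) -> float:
    """HOMA-IR = fasting insulin (uU/mL) * fasting glucose (mmol/L) / 22.5."""
    return insulin * glucose / 22.5

def is_insulin_resistant(homa: float, bmi: float) -> bool:
    """Stern et al. criteria as used in the text: HOMA-IR > 4.65,
    or BMI > 28.90 kg/m2, or (HOMA-IR > 3.60 and BMI > 27.50 kg/m2)."""
    return homa > 4.65 or bmi > 28.90 or (homa > 3.60 and bmi > 27.50)

# Example with the IR-group means reported later in Tables 1 and 2.
h = homa_ir(insulin=20.0, glucose=5.4)
print(round(h, 1), is_insulin_resistant(h, bmi=32.9))  # -> 4.8 True
```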
### 2.4. Statistical Analysis
Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS®), version 20.0. A two-sided p value <0.05 was considered statistically significant. The Kolmogorov-Smirnov test (p>0.05) was applied to assess the normality of the data. Continuous variables with normally distributed data are presented as means and standard deviations (SD), while nonnormally distributed data are presented as medians and 25th and 75th percentiles. Categorical variables are presented as absolute values (n) and percentages (%). The comparison between groups was performed using Student's t-test for normally distributed data; nonnormally distributed data were analyzed using the nonparametric Mann–Whitney U test, and categorical variables were compared using Pearson's chi-square test. To identify the effect of IR on the lipoprotein subfraction profile, univariate logistic regression analysis was performed using IR as the independent factor. Afterwards, variables associated with IR at p<0.20 were included in a multivariate logistic regression analysis; HDL1, HDL4, and HDL10 did not meet this criterion. Model A included age and gender as covariates, and model B was additionally adjusted for smoking and statin and/or fibrate use. Adjusted odds ratios (AOR) and 95% confidence intervals (CI) were determined.
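A minimal sketch of this regression step, assuming a data frame with one row per subject and hypothetical column names (the original analysis was run in SPSS 20.0, so this Python/statsmodels version is illustrative only):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input: one row per subject, with a binary outcome (e.g., an
# elevated small-HDL fraction), the IR indicator, and the covariates.
df = pd.read_csv("subfractions.csv")  # hypothetical file name

models = {
    "raw":     "hdl_small_high ~ ir_group",
    "model_A": "hdl_small_high ~ ir_group + age + gender",
    "model_B": "hdl_small_high ~ ir_group + age + gender + smoking + lipid_drug",
}

for name, formula in models.items():
    fit = smf.logit(formula, data=df).fit(disp=0)
    odds_ratio = np.exp(fit.params["ir_group"])           # (A)OR for IR
    ci_low, ci_high = np.exp(fit.conf_int().loc["ir_group"])
    print(f"{name}: OR = {odds_ratio:.3f} [{ci_low:.3f}; {ci_high:.3f}]")
```

Exponentiating the fitted coefficient and its confidence bounds is what turns the logit estimates into the odds ratios reported in Table 4.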
## 3. Results
The average age was 51.3 (12.0) years for the non-IR group and 52.5 (9.3) years for the IR group. About 60% of the subjects were women, and both groups showed a similar frequency of smoking (p=0.113). As expected, the IR group presented higher values for weight, BMI, and WC (Table 1).

Table 1: Demographic and clinical profile and anthropometry of the subjects according to the presence of insulin resistance.

| Variable | Total (n=181) | Non-IR group (n=64) | IR group (n=117) | p |
| --- | --- | --- | --- | --- |
| Women (n, %) | 110.0 (60.8) | 40.0 (62.5) | 70.0 (59.8) | 0.725 |
| Age, years (mean, SD) | 52.1 (10.3) | 51.3 (12.0) | 52.5 (9.3) | 0.476 |
| Weight, kg (mean, SD) | 80.1 (16.9) | 67.4 (10.6) | 87.0 (15.7) | <0.001 |
| BMI, kg/m² (mean, SD) | 30.1 (5.5) | 25.0 (2.6) | 32.9 (4.6) | <0.001 |
| WC, cm (mean, SD) | 98.3 (13.0) | 86.1 (7.4) | 105.0 (10.2) | <0.001 |
| FM, % (mean, SD) | 35.2 (11.9) | 29.5 (9.8) | 38.3 (11.9) | <0.001 |
| SBP, mmHg (mean, SD) | 134.3 (19.5) | 130.2 (21.5) | 136.6 (18.0) | 0.004 |
| DBP, mmHg (mean, SD) | 82.0 (10.2) | 78.5 (10.2) | 83.9 (9.7) | <0.001 |
| Smoking (n, %) | 34.0 (18.8) | 16.0 (25.0) | 18.0 (15.4) | 0.113 |
| Statin (n, %) | 45.0 (24.9) | 15.0 (23.4) | 30.0 (25.6) | 0.743 |
| Fibrate (n, %) | 4.0 (2.2) | 1.0 (1.6) | 3.0 (2.6) | 0.661 |

Data presented as mean (standard deviation) or absolute value (frequency). Categorical variables were compared by Pearson's chi-square test and continuous variables by Student's t-test (p<0.05). BMI: body mass index; WC: waist circumference; FM: fat mass; SBP: systolic blood pressure; DBP: diastolic blood pressure; non-IR group: individuals without insulin resistance; IR group: individuals with insulin resistance.

HDL-C, APO AI, and LDL size were significantly lower in the IR group, which also showed a higher TAG/HDL-C ratio. Over 80% of non-IR subjects exhibited phenotype A versus 62% in the IR group (p=0.003) (Table 2).
Table 2: Biochemical profile of the subjects according to the presence of insulin resistance.

| Variable | Total (n=181) | Non-IR group (n=64) | IR group (n=117) | p |
| --- | --- | --- | --- | --- |
| TC (mmol/L) | 5.38 (1.01) | 5.43 (0.91) | 5.35 (1.06) | 0.591 |
| HDL-C (mmol/L) | 0.98 (0.28) | 1.09 (0.26) | 0.93 (0.28) | 0.001 |
| LDL-C (mmol/L) | 3.67 (0.91) | 3.72 (0.80) | 3.62 (1.04) | 0.495 |
| TAG (mmol/L) | 1.37 (1.03; 1.99) | 1.15 (0.97; 1.50) | 1.52 (1.14; 2.16) | <0.001 |
| Glucose (mmol/L) | 5.33 (8; 5.7) | 5.1 (4.8; 5.3) | 5.4 (5.1; 5.8) | <0.001 |
| Insulin (μU/mL) | 17.0 (7.0) | 12.0 (3.0) | 20.0 (7.0) | <0.001 |
| HOMA-IR | 4.2 (1.9) | 2.8 (0.7) | 5.0 (2.0) | <0.001 |
| TAG/HDL-C | 1.5 (1.0; 2.3) | 1.9 (0.8; 1.5) | 1.6 (1.2; 2.6) | <0.001 |
| Non-HDL-C (mmol/L) | 4.42 (0.95) | 4.38 (0.91) | 4.42 (0.98) | 0.686 |
| APO AI (g/L) | 1.33 (0.27) | 1.42 (0.24) | 1.29 (0.28) | 0.001 |
| APO B (g/L) | 1.05 (0.23) | 1.04 (0.23) | 1.05 (0.23) | 0.752 |
| LDL size (Å) | 270.0 (267.0; 272.0) | 271.0 (269.0; 272.0) | 270.0 (265.0; 272.0) | 0.045 |
| Phenotype A (n, %) | 125.0 (69.0) | 53.0 (83.0) | 72.0 (62.0) | 0.003 |
Data presented as mean (standard deviation) and median (p25-p75). Comparative analysis was performed by Student’st-tests or Mann–Whitney U test (p<0.05). TC: total cholesterol; HDL-C: high-density lipoprotein cholesterol; LDL-C: low-density lipoprotein cholesterol; TAG: triacylglycerol; TAG/HDL-C: ratio between TAG and HDL-C; APO AI: apolipoprotein AI; APO B: apolipoprotein B; non-IR group: individuals without insulin resistance; IR group: individuals with insulin resistance.The distribution of lipoprotein subfractions (Table3) shows that the IR group had a higher percentage of intermediate (HDL5, HDL6, and HDL7) and small HDL particles (HDL 8) and lower percentage of large particles (HDL2 and HDL3), contributing to decrease in HDLLARGE in the IR group. VLDL and LDL2 subfractions in the IR group showed higher percentages than those observed in non-IR subjects.Table 3
Table 3: Distribution of lipoprotein subfractions of the subjects according to the presence of insulin resistance.

| Variables (%) | Total (n=181) | Non-IR group (n=64) | IR group (n=117) | p |
| --- | --- | --- | --- | --- |
| VLDL | 17.85 (3.93) | 16.80 (3.29) | 18.42 (4.15) | 0.017 |
| IDL | 28.64 (4.09) | 29.08 (3.54) | 28.39 (4.36) | 0.277 |
| LDL1 | 16.88 (3.99) | 17.56 (3.61) | 16.51 (4.16) | 0.089 |
| LDL2 | 9.84 (4.01) | 8.84 (3.65) | 10.39 (4.10) | 0.013 |
| LDL3 | 2.48 (2.75) | 1.78 (2.00) | 2.87 (3.02) | 0.062 |
| LDL4 | 0.42 (1.09) | 0.28 (1.09) | 0.50 (0.36) | 0.063 |
| LDL5 | 0.06 (0.40) | 0.07 (0.46) | 0.05 (0.36) | 0.931 |
| LDL6 | 0.01 (0.14) | 0.03 (0.24) | 0.00 (0.00) | 0.176 |
| LDL7 | 0.00 (0.00) | 0.00 (0.00) | 0.00 (0.00) | 1.000 |
| LDLLARGE | 26.72 (4.81) | 26.40 (4.92) | 26.89 (4.77) | 0.515 |
| LDLSMALL | 2.97 (3.83) | 2.16 (3.20) | 3.42 (4.07) | 0.081 |
| HDL1 | 10.66 (3.61) | 11.00 (3.57) | 10.47 (3.64) | 0.346 |
| HDL2 | 12.32 (4.36) | 14.33 (4.15) | 11.23 (4.08) | <0.001 |
| HDL3 | 7.34 (2.11) | 8.32 (2.08) | 6.81 (1.94) | <0.001 |
| HDL4 | 9.37 (1.60) | 9.45 (1.39) | 9.33 (1.70) | 0.619 |
| HDL5 | 11.06 (1.61) | 10.58 (1.55) | 11.32 (1.59) | 0.003 |
| HDL6 | 21.66 (3.01) | 20.31 (2.76) | 22.40 (2.88) | <0.001 |
| HDL7 | 7.72 (1.43) | 7.37 (1.33) | 7.92 (1.45) | 0.013 |
| HDL8 | 7.64 (1.73) | 7.23 (1.58) | 7.87 (1.78) | 0.017 |
| HDL9 | 5.98 (1.77) | 5.64 (1.61) | 6.16 (1.83) | 0.056 |
| HDL10 | 6.22 (3.98) | 5.75 (3.62) | 6.48 (4.15) | 0.239 |
| HDLLARGE | 30.32 (8.45) | 33.64 (8.66) | 28.51 (7.80) | <0.001 |
| HDLINTERMEDIATE | 49.81 (4.87) | 47.72 (4.72) | 50.95 (4.58) | <0.001 |
| HDLSMALL | 19.84 (6.50) | 18.63 (6.10) | 20.51 (6.64) | 0.063 |

Data presented as mean (standard deviation). Comparative analysis was performed by Student’s t-test or the Mann–Whitney U test (p<0.05). LDLLARGE: sum of LDL1 and LDL2; LDLSMALL: sum of LDL3 to LDL7; VLDL: very low-density lipoprotein; IDL: intermediate-density lipoprotein; HDL: high-density lipoprotein; LDL: low-density lipoprotein; non-IR group: individuals without insulin resistance; IR group: individuals with insulin resistance.

The presence of IR was associated with reduced chances of having large HDL particles (HDL2, HDL3, and HDLLARGE) and with increased chances of having small particles (HDL8 and HDLSMALL). Regarding LDL subfractions, IR was a significant predictor of increased LDLSMALL (OR = 2.826; CI = 1.263–6.324) and of phenotype non-A (OR = 3.011; CI = 1.424–6.366) (Table 4).
Table 4: Multivariate logistic regression for the effect of insulin resistance on the lipoprotein subfraction profile.

| Variables (%) | OR (raw data) | 95% CI | AOR (Model A) | 95% CI | AOR (Model B) | 95% CI |
| --- | --- | --- | --- | --- | --- | --- |
| VLDL | 1.974 | [0.921; 4.230] | 2.205 | [0.965; 5.038] | 2.332 | [1.007; 5.404] |
| HDL2 | 0.283 | [0.141; 0.571] | 0.267 | [0.129; 0.552] | 0.230 | [0.108; 0.493] |
| HDL3 | 0.322 | [0.160; 0.645] | 0.302 | [0.146; 0.625] | 0.298 | [0.142; 0.625] |
| HDL5 | 1.850 | [0.881; 3.884] | 1.843 | [0.873; 3.890] | 1.897 | [0.890; 4.044] |
| HDL6 | 3.367 | [1.460; 7.766] | 3.726 | [1.579; 8.793] | 3.936 | [1.631; 9.498] |
| HDL7 | 1.926 | [0.919; 4.038] | 1.927 | [0.917; 4.047] | 1.980 | [0.930; 4.216] |
| HDL8 | 2.305 | [1.504; 5.039] | 2.305 | [1.042; 5.098] | 2.363 | [1.025; 5.445] |
| HDL9 | 1.631 | [0.772; 3.446] | 1.367 | [0.771; 3.474] | 1.755 | [0.806; 3.822] |
| HDLLARGE | 0.249 | [0.123; 0.505] | 0.232 | [0.111; 0.484] | 0.213 | [0.099; 0.457] |
| HDLINTERMEDIATE | 3.367 | [1.460; 7.766] | 3.843 | [1.606; 9.197] | 4.698 | [1.855; 11.901] |
| HDLSMALL | 2.400 | [1.099; 5.239] | 2.429 | [1.110; 5.319] | 2.486 | [1.115; 5.543] |
| LDLSMALL | 2.826 | [1.263; 6.324] | 2.794 | [1.237; 6.310] | 3.075 | [1.341; 7.051] |
| Phenotype non-A | 3.011 | [1.424; 6.366] | 3.001 | [1.404; 6.416] | 3.176 | [1.469; 6.867] |

n = 181. Odds ratios are for the IR group; the non-IR group is the reference category (OR = 1.000) for every variable. Model A: adjusted for gender and age. Model B: adjusted for gender, age, smoking, statin, and fibrate. OR: odds ratio; AOR: adjusted odds ratio; CI: confidence interval; VLDL: very low-density lipoprotein; HDL: high-density lipoprotein; LDL: low-density lipoprotein; non-IR group: individuals without insulin resistance; IR group: individuals with insulin resistance.
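For readers who wish to reproduce this type of analysis, the following is a minimal sketch, not the authors' code, of a Model-B-style adjusted logistic regression using statsmodels; the DataFrame and all column names are hypothetical placeholders, and the randomly generated data merely stand in for the per-subject observations.

```python
# Sketch of a Model-B-style adjusted logistic regression (hypothetical column names).
import numpy as np
import pandas as pd
import statsmodels.api as sm

# One row per subject: a 0/1 outcome (e.g., elevated small LDL) and covariates.
df = pd.DataFrame({
    "ldl_small": np.random.binomial(1, 0.4, 181),
    "ir":        np.random.binomial(1, 117 / 181, 181),
    "age":       np.random.normal(52, 10, 181),
    "gender":    np.random.binomial(1, 0.6, 181),
    "smoking":   np.random.binomial(1, 0.19, 181),
    "statin":    np.random.binomial(1, 0.25, 181),
    "fibrate":   np.random.binomial(1, 0.02, 181),
})

# Model B adjusts the IR effect for gender, age, smoking, statin, and fibrate.
X = sm.add_constant(df[["ir", "age", "gender", "smoking", "statin", "fibrate"]])
fit = sm.Logit(df["ldl_small"], X).fit(disp=0)

aor = np.exp(fit.params["ir"])                       # adjusted odds ratio for IR
ci_low, ci_high = np.exp(fit.conf_int().loc["ir"])   # 95% CI on the OR scale
print(f"AOR = {aor:.3f} [{ci_low:.3f}; {ci_high:.3f}]")
```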
## 4. Discussion
This study showed that IR, without established DM2, is already a predictor of a more atherogenic lipoprotein profile, contributing to a worse cardiovascular risk in Brazilian individuals.

IR is a key factor in the development of atherosclerosis and DM2, and the most common associated metabolic abnormality is high TAG and low HDL-C levels, whereas TC and LDL-C are not consistently altered [19]. Lipoproteins are heterogeneous structures, which vary in size, density, and chemical composition and confer additional value to cholesterol content [20–22]. Few studies have sought to analyze the effects of IR on lipid metabolism and lipoprotein subfractions in individuals without clinically diagnosed DM [13, 14]. Recently, Shah et al. [23] described lipoprotein subclasses as a better marker for detecting lipoprotein abnormalities in normoglycemic and prediabetic subjects.

Hypertriglyceridemia is considered the principal lipid abnormality in IR and plays a pivotal role in diabetic dyslipidemia [24–27]. This causal and strict relationship between IR and dyslipidemia was recently reviewed by Li et al. [10]. Our results confirmed that IR has negative effects on lipid metabolism and reinforce the connection between DM and CVD. The lower values of HDL-C and APO AI and the higher TAG levels, TAG/HDL-C ratio, and percentage of VLDL observed in the IR group characterize a typical basis for atherogenic diabetes. This scenario evidences that changes in the physical structure of lipoproteins start before DM2 and reaffirms the negative impact of IR, highlighting the relevance of strategies focused on its prevention.

Elevated TAG levels result from increased production and decreased clearance of TAG-rich lipoproteins in both fasting and nonfasting states [28]. These events favor increased production of VLDL, a prominent IR feature [29]. In agreement, our data showed that the VLDL percentage in the non-IR group was lower than that in the IR group and that individuals with IR were four times more likely to have a lower HDLLARGE percentage, mainly due to differences in HDL2 and HDL3. Garvey et al. [12], using nuclear magnetic resonance (NMR), observed that progressive IR was associated with a decrease in HDL size, depletion of large HDL particles, and a modest rise in small HDL. They also described an increase in VLDL size and in large VLDL levels before and after multiple adjustment (age, BMI, and gender). Similarly, MacLean et al. [13] described, in a small sample of obese insulin-resistant women, that the concentration of large HDL was lower, while HDL size was negatively correlated with plasma insulin. Our results expand this previous study because they are based on a bigger sample and include both genders. In addition, our data show the concordance of results obtained by the Lipoprint system and NMR [30].

The relationship between HDL size and cardiovascular risk is still a controversial issue. Clinical and epidemiological studies have shown that low HDL-C is strongly and independently associated with cardiovascular disease (CVD) [31]. However, some studies have questioned this association, hypothesizing that it requires a more specific analysis of this lipoprotein [32]. There is an important heterogeneity among HDL subfractions, ranging from small to large HDL, and different HDL subfractions appear to be associated with different functions.
The HDL analysis performed in the JUPITER study suggested that the concentration of HDL particles, rather than the cholesterol content of the lipoprotein, was a more robust predictor of CVD events and a more appropriate target for therapeutic interventions [33]. According to Pirillo et al. [34], large HDL particles are more competent in reverse cholesterol transport (RCT), classically described as the primary physiological function of HDL [35], which represents the capacity to transfer excess cholesterol from peripheral cells to the liver for excretion, contributing to attenuating the atherosclerotic stimulus. However, recent studies have shown that small HDL particles have greater antioxidant and anti-inflammatory properties, preventing LDL oxidation by reactive oxygen species (ROS) in the subendothelial layer [36]. Although there are controversial results in the literature, the majority of studies have associated lower concentrations of large HDL with cardiovascular events, dyslipidemia, obesity, metabolic syndrome, and diabetes [37].

Plausible biological mechanisms supporting the role of IR in HDL have been recently reviewed [26, 27]. The decrease in HDL-C induced by IR is associated with increased catabolism of this lipoprotein. This event involves the elevated transfer of TAG to HDL owing to the hypertriglyceridemia caused by increased hepatic VLDL synthesis and decreased lipoprotein lipase (LPL) activity. This process is modulated by CETP, which leads to the formation of TAG-rich HDL, turning this lipoprotein into a good substrate for hepatic lipase (HL), the enzyme responsible for HDL catabolism [26]. Our results corroborate the negative influence of IR on the connection between TAG and HDL-C levels. In addition, the odds ratios also confirmed that IR was a predictor of higher VLDL and small HDL and of lower large HDL particles. Possibly, these changes in HDL subfractions modify HDL functionality, reducing its antiatherogenic role.

In addition to the significant effect of IR on the classical lipid profile, VLDL, and HDL subfractions, our data also showed that IR was related to increased chances of having smaller LDL. However, these structural changes were not related to modifications in total APO B and LDL-C levels, confirming that the distribution of LDL subfractions and particle size adds information to the classical evaluation of cholesterol and APO B content in this lipoprotein [38]. Small LDL particles are cited in several studies for their link with atherosclerosis [39].

Our results disclose that, before DM2 diagnosis, the presence of IR can already cause multiple changes in lipids and in the structure of lipoproteins, which is supported by the worse LDL phenotype in the IR group. Classically, two LDL phenotypes, named A and B, based on LDL density and size, are described [20, 21]. In our study, phenotype non-A (the more atherogenic profile) was present in 17% of non-IR subjects, whereas this percentage was 38% in the IR group. Previous studies did not describe the negative impact of IR on these phenotypes. Thus, the higher prevalence of phenotype non-A observed in our study adds information regarding a more atherogenic lipoprotein profile when associated with higher TAG levels and small HDL particles.
These results are particularly relevant because the phenotypes obtained by the Lipoprint system showed high concordance with other validated methods such as NMR (phenotype A = 100% and phenotype B = 75%) and Zaxis (phenotype A = 100% and phenotype B = 95%) [40].

Assessment of IR using the HOMA-IR equation is a simple and inexpensive tool suitable for clinical trials and for routine clinical practice. It has a high level of concordance with the hyperinsulinemic-euglycemic clamp technique, accepted as the gold standard method for IR diagnosis [41, 42]. However, some studies have demonstrated that HOMA-IR cut-off points are gender- and age-specific [43], while Gayoso-Diz et al. showed that metabolic syndrome components should be considered for IR classification [44]. These aspects suggest that HOMA-IR data should be analyzed with caution. Based on these interactions, our odds ratios were adjusted for age and sex; however, little influence of these confounders was detected.

The cut-off point used in our study was based on a previous large-population study that included Caucasian (European subjects from 17 cities), Mexican American, and Pima Indian individuals [18]. Stern et al. [18] demonstrated that HOMA-IR, isolated and/or combined with BMI, could identify individuals with IR before DM2 diagnosis. This model is adequate for clinical trials due to its high capacity to correctly identify individuals with IR (specificity of 92% for HOMA-IR > 4.65), but it can also be used in routine clinical practice when BMI is included (HOMA-IR > 3.6 and BMI > 27.5 kg/m2) or analyzed alone (BMI > 28.9 kg/m2). An additional advantage of this model is the ability to identify IR based on only one measure of BMI. This aspect is particularly relevant for individuals and populations in developing countries, where health systems are unable to diagnose dysglycemia early and the incidence of DM shows accelerated growth.

Though previous studies have proposed lower cut-off points (<2.0), the values adopted in our study were chosen because of the similar ethnicity of the individuals (Caucasian and Mexican American) and because, unlike in other studies, the cut-off points proposed by Stern et al. [18] were validated by the euglycemic insulin clamp technique in a large population-based study. Indeed, many studies have described cut-off points ranging from 1.55 to 3.8 when different populations are analyzed, with different health statuses and distinct statistical approaches (ROC and percentiles), frequently based on a cross-sectional design. Recently, Lee et al., based on a large Chinese cross-sectional and prospective study with 15 years of follow-up, proposed cut-off points for dysglycemia (1.4) and DM2 (2.0) [45]. Despite the relevant results described, these cut-off points were not validated by the euglycemic clamp technique and were based on only one glucose analysis. In addition, ethnicity influences IR, as previously described [46, 47], explaining, in part, the different cut-off points described in the literature [44]. In our study, most individuals were Caucasian and only 1.7% (n=3) were Japanese Brazilian. Despite that, we decided not to exclude these individuals because they did not change the profile of our results and this ethnic distribution is representative of the Brazilian population. Altogether, clinical evaluation of IR can predict future changes in lipid metabolism and their impact on the development of CVD.
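To make this screening rule concrete, the following is a minimal sketch of the HOMA-IR formula (fasting insulin in μU/mL multiplied by fasting glucose in mmol/L, divided by 22.5) combined with the Stern et al. [18] decision rule quoted above; the function names are our own, not from the study protocol.

```python
# Sketch of HOMA-IR and the Stern et al. IR decision rule (function names ours).
def homa_ir(insulin_uU_mL: float, glucose_mmol_L: float) -> float:
    """Homeostasis model assessment of insulin resistance."""
    return insulin_uU_mL * glucose_mmol_L / 22.5

def is_insulin_resistant(insulin_uU_mL: float, glucose_mmol_L: float, bmi: float) -> bool:
    """Stern et al. rule: HOMA-IR > 4.65, or BMI > 28.9 kg/m2,
    or HOMA-IR > 3.60 combined with BMI > 27.5 kg/m2."""
    homa = homa_ir(insulin_uU_mL, glucose_mmol_L)
    return homa > 4.65 or bmi > 28.9 or (homa > 3.60 and bmi > 27.5)

# Example with the IR-group means from Tables 1 and 2.
print(homa_ir(20.0, 5.4))                     # ~4.8
print(is_insulin_resistant(20.0, 5.4, 32.9))  # True
```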
These results are particularly relevant because they highlight the negative impact of IR, before DM2 diagnosis, on qualitative aspects of lipid metabolism and on cardiovascular risk, which had not been described previously in the literature.

Finally, despite the robust predictive role of IR on the atherogenic lipoprotein profile observed in our study, we acknowledge potential limitations of these results. First, the criteria used to define DM could have included additional analyses such as a glucose tolerance test; however, to avoid this costly and time-consuming technique, we performed a rigorous and direct interview addressing specific questions about DM, and the diagnosis was confirmed by fasting glucose, repeated in cases where the diagnosis was not clear. Second, some individuals included in our study were under statin/fibrate treatment; in this case, all groups were matched and the individuals enrolled had to be under the same drug protocol for at least 30 days prior to data collection. Third, the criterion used to determine IR was not an accepted gold standard such as the hyperinsulinemic-euglycemic clamp, the glucose tolerance test with Bergman's minimal model, the hyperglycemic clamp, or the oral glucose/meal tolerance test. However, our goal was to identify the capacity of IR to predict an atherogenic lipoprotein profile using a fast, simple, and low-cost tool applicable to clinical practice, such as HOMA-IR.

In conclusion, our results showed that IR is associated with significant changes in quantitative and qualitative aspects of lipoproteins and is a robust predictor of an atherogenic lipoprotein profile in nondiabetic subjects.
---
*Source: 1018796-2017-08-22.xml*
# Escape Path Obstacle-Based Mobility Model (EPOM) for Campus Delay-Tolerant Network
**Authors:** Sirajo Abdullahi Bakura; Alain Lambert; Thomas Nowak
**Journal:** Journal of Advanced Transportation
(2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1018904
---
## Abstract
In Delay-Tolerant Networks (DTNs), humans are the main carriers of mobile devices, meaning that human mobility can be exploited by extracting nodes’ interests, social behavior, and spatiotemporal features for the performance evaluation of DTN protocols. This paper presents a new mobility model that describes students’ daily activities in a campus environment. Unlike conventional random walk models, which assume a free-space environment, our model includes a collision-avoidance technique that generates an escape path upon encountering obstacles of different shapes and sizes that obstruct pedestrian movement. We evaluate the model’s usefulness by comparing the distributions of its synthetic traces with realistic traces in terms of the spatial, temporal, and connectivity features of human mobility. Similarly, we analyze the concept of dynamic movement clusters observed in the location-based trajectories of the studied real traces. The model synthetically generates traces whose distributions of intercluster travel distance, intracluster travel distance, direction of movement, contact duration, intercontact time, and pause time are similar to those of the real traces.
---
## Body
## 1. Introduction
Mobility patterns play an essential role in the performance of wireless networks with intermittent connections such as Delay-Tolerant Networks (DTNs). Features associated with these networks include persistent disconnections, the absence of simultaneous end-to-end communication routes, sparse topology, long delays among nodes due to mobility, and sparse deployment of nodes. However, a weak form of connectivity can be achieved in DTNs by exploiting the temporal dimension and node mobility [1]. Considerable research effort has recently been devoted to enabling communication between network entities with intermittent connectivity [2].

Moreover, the forwarding opportunities in DTNs depend on the mobility patterns that dictate contact opportunities between nodes for reliable information forwarding. Interestingly, humans are the main carriers of mobile devices. There is therefore a need to understand the underlying behavior of pedestrian mobility, the driving forces that shape a pedestrian's motivation to move, and the repulsive forces that describe its interaction with environmental constraints. These are essential for designing a realistic mobility model to be used as a tool for evaluating wireless network protocols; hence the need for a model based on an empirical study of pedestrians' mobility and of their interaction with other objects in the environment, paving the way for better event management, emergency rescue operations, and congestion prediction in narrow bottlenecks.

This study investigated the mobility characteristics of pedestrians using real traces and proposes an obstacle-based mobility model for DTNs that closely replicates the empirical features observed in the analyzed traces. The model generates spatial, temporal, and connectivity features similar to those produced by realistic human mobility, enhancing opportunistic forwarding in DTNs and supporting pedestrian collision avoidance in crowds or emergency rescue operations.

Several mobility models have been proposed in [3–10]; they can be categorized as synthetic or trace-based. Synthetic mobility models are less realistic than trace-based ones; conversely, trace-based models are much more difficult to develop than synthetic ones. In addition to models that describe pure human mobility, the models presented in [11–13] have explored cognitive-science modeling, using the driving forces that influence a pedestrian's internal motivation to move in a given direction and at a given speed, and the repulsive forces that describe pedestrian interaction with other pedestrians and with environmental constraints such as obstacles, based on empirical data obtained from laboratory-controlled experiments. In this regard, we concentrate on the pedestrians' interaction with static and moving obstacles.

Starting with the conventional mobility models, random walks are the most widely used synthetic models for the analysis of node mobility [3, 4]. Random walk models generate mobility patterns in which mobile nodes display completely random behavior. Only a few wireless networks (e.g., sensor networks for animal tracking [14, 15]) display this kind of randomness; the majority of wireless networks obey certain mobility rules.
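For concreteness, the following is a minimal sketch, ours rather than taken from any of the cited models, of the kind of conventional two-dimensional random walk criticized above: each step draws a direction and a flight length uniformly at random, with no intention, memory, or obstacles.

```python
# Minimal 2D random-walk mobility step: direction and flight length are
# drawn uniformly at random, with no intention, memory, or obstacles.
import math
import random

def random_walk_trace(steps: int, area: float = 1000.0, max_flight: float = 50.0):
    x, y = area / 2, area / 2          # start at the center of a square area
    trace = [(x, y)]
    for _ in range(steps):
        theta = random.uniform(0.0, 2.0 * math.pi)   # random direction
        flight = random.uniform(0.0, max_flight)     # random flight length
        # Clamp at the simulation-area boundary (free space, no obstacles).
        x = min(max(x + flight * math.cos(theta), 0.0), area)
        y = min(max(y + flight * math.sin(theta), 0.0), area)
        trace.append((x, y))
    return trace

print(random_walk_trace(5))
```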
Pedestrian mobility is not completely random but is influenced by features specific to humans, resembling intentional mobility toward points of attraction. The random waypoint model [8, 16, 17], by contrast, is considered the first synthetic model that attempts to capture intentional human movement, which random walk models do not. Nevertheless, the model was shown to be unrealistic in [9] due to its failure to reach a steady state, resulting in a persistent decrease of the average node speed over time. This property can lead to unreliable results. Some simple fixes and modifications to the random waypoint model presented in [9] still fail to capture the realistic behavior of intentional human mobility toward certain locations driven by the strength of a social relationship or connection. For instance, a student might go to class for a lecture, to the cafeteria to eat, or to a nearby dormitory to visit a friend.

Node movement is not restricted to pathways in the random walk and random waypoint models. The Manhattan mobility model [18], on the contrary, restricts the movement of a node to the pathways in the simulation area.

A generalization of several classical models that attempt to develop synthetic mobility models for mobile networks while satisfying certain statistics was presented in [6, 7, 19, 20]. Some of the statistical features studied are the flight distribution (i.e., the straight-line distance covered between two consecutive waypoints), the pause time distribution (i.e., the amount of time a node pauses at a waypoint), and the intercontact time distribution (i.e., the amount of time between two contacts of the same pair of nodes), at different scales of time and space; a small computation sketch for these connectivity features is given after the contribution list below.

To develop mobility models using empirical data, which is the approach adopted in our study, the works in [21–23] extract detailed mobility data from real traces and calibrate the uncovered mobility features in their models. The studies in [21, 22] recognize contact opportunities when users are associated with the same Wi-Fi access point; the idea was extended by Kim et al. [23] by considering the situation in which users are within communication range of each other. Kim et al. [23] proposed a synthetic mobility model from user mobility characteristics extracted from wireless network syslog traces, but it only considers Wi-Fi access points, which provide only access-point-level granularity of mobility trajectories. In [11], a controlled laboratory experiment was conducted to study the behavioral effects of interactions between pedestrians; the study extracts individual behavioral laws from the statistical features observed in the empirical data.

The contributions of this paper are threefold:
(1) We characterize the spatial, temporal, and connectivity features of human mobility using real traces.
(2) We conduct an in-depth study of the movement displacements and directions within movement clusters, which are characterized by short walks within a confined area.
(3) We propose an Escape Path Obstacle-based Mobility Model (EPOM) for campus DTNs and show that the model is generic enough to be fine-tuned with a few parameters to match the spatiotemporal and connectivity features observed in the real traces.

This paper is organized as follows: Section 2 presents a review of the related works. Section 3 explains the characterization of human mobility. Section 4 describes the Escape Path Obstacle-based Mobility Model and its submodels. Section 5 describes the implementation of the proposed model. Section 6 presents the simulation settings and results. Finally, conclusions and future perspectives are given in Section 7.
## 2. Related Work
The increasing interest of the research community in DTNs and the impact of mobility on their performance have led to the development of several mobility models focusing on different mobility features [6, 7, 19, 20, 24]. Nevertheless, the conventional synthetic stochastic models [3, 4, 9, 25, 26], meant for the performance analysis of network protocols in early ad-hoc networks, are insufficient to capture users' intentional behaviors and social attractions. Several works have investigated the adaptability of the conventional models to next-generation mobile networks such as DTNs, Vehicular Ad-hoc Networks (VANETs), and Wireless Sensor Networks (WSNs); these studies found that human mobility is intentional, as opposed to the random assumptions in the conventional models [7, 19, 20].

Although synthetic models that capture intentional human behavior are more realistic than the conventional models, trace-based models [22, 27] appear to be more realistic still because they are mostly generated for a specific scenario and only for a few nodes. In contrast, the nonconventional synthetic models [7, 19, 20] can generate synthetic mobility traces for a large number of nodes while considering mobility constraints such as obstacles and pathways; the generated traces are used to evaluate network protocols. In this regard, an in-depth understanding of the interaction between pedestrians and between pedestrians and other obstacles in a realistic domain aids in simulating emergency scenarios for pedestrian safety [11–13].

Lee et al. [6] employed the concept of fractal waypoints, a Least Action Trip Plan (LATP), and a walker model to generate regular patterns of daily human mobility. Their model is based on daily routine activities such as going to the office or attending a lecture. However, it did not capture an event's occurrence time or the repetitiveness observed in people's realistic daily activities.

Munjal et al. [7] presented a mobility model that mimics real human mobility patterns by relaxing the assumption of random mobility with a notion of mobility influence; that is, node mobility is influenced by factors such as cluster size. The model studies seven statistical mobility features: flights, intercontact time, pause time, long flights due to popularity, closest-mobile-node visits, community interaction, and mobile node distribution. However, the simulation space in Munjal et al. [7] is free space without restricting obstacles, which is not always realistic in a real environment such as a campus characterized by buildings of different shapes and sizes.

In an attempt to develop mobility models that capture people's agendas or activities, several models were developed [19, 20, 24, 28, 29]. Ekman et al. [19] presented a Working Day Model (WDM) that emulates workers' daily activities such as going to the office, going to evening activities, or returning home. The model uses map-based movement built on the concept of source and destination, and it uses a timescale to switch between different submodels. Ekman et al. [19] showed the similarity between the distribution of their model's synthetic traces and that of the iMotes traces from the Cambridge experiment.
However, they did not cover the impact of obstacles, such as floors, walls, and other constraints, which affect node mobility.

In [29], the characteristics of human mobility were described by constructing a multidimensional mobility space, divided into individuality metrics, pairwise encounter metrics, and group metrics. The model generates node trajectories that show more human mobility characteristics, but it was validated against a conventional model, namely, the random waypoint mobility model.

Students' daily activities on campus were studied by Zhu et al. [20] with a focus on the contact time, intercontact time, and contacts-per-hour distributions. This work did not consider the impact of obstacles in restricting the free movement of mobile nodes, or the possible signal obstruction by buildings of different shapes and sizes in a campus environment.

Social mobility models were presented in [30, 31]. Hrabcak et al. [24] presented a Students Social Based Mobility Model (SSBMM) inspired by the daily routine of student life. The model distinguishes between a student's free time and mandatory time, upon which social and school activities are simulated. They compare their model with the classical random walk model, even though the random walk model cannot capture the repetitiveness and the heterogeneity of time and space in human mobility.

Wang et al. [32] proposed an obstacle-based mobility model that generates a smooth Bezier-curve trajectory for escaping obstacles. In real scenarios, however, human trajectories for escaping obstacles such as buildings or road diversions are not always smooth curves. In addition, the model did not capture movement toward attraction factors such as points of interest, which represent human social behavior.

The Obstacle Mobility (OM) model developed by Jardosh et al. [33] models environmental obstructions that affect both movement and signal propagation. In this model, the node paths and points are constructed from a Voronoi diagram based on the obstacle positions in a campus-like simulation area. As an extension to [33], Papageorgiou et al. [34] proposed a model that allows nodes to move around an obstacle without being limited to a defined path. The model considers only rectangular obstacles, which limits its ability to capture the realistic features of an environment with obstacles of different shapes and sizes.

A random obstacle-based mobility model for DTNs was presented by Wu et al. [28]. In this model, a node moves from its initial location to the destination via the shortest path if there is no obstacle along the path; otherwise, the node recursively selects the location of the node closest to the obstacle and moves forward, repeating this operation until it reaches its destination. This model also considers only rectangular obstacles. Moreover, an unnecessary trip is made in the absence of a node close to the obstacle, especially when the destination is just behind the obstacle.

Moussaïd et al.
[11] conducted an experimental study of the behavioral mechanisms underlying self-organization in human crowds in order to study individual pedestrian behavior. In that study, an individual pedestrian's movement behavior is characterized by the triplet of the internal acceleration f_i^0, the wall interaction f_i^wall, and the pairwise interaction f_ij. To reduce the complexity of these assumptions, a study that uses simple rules to determine pedestrian behavior and crowd disasters was presented in [12]; it used simple heuristics to determine the movement direction and the choice of desired speed during static and moving obstacle encounters.

Some research works in the literature have studied mobility characteristics in real traces to develop synthetic mobility models that exhibit the observed mobility features. Kim et al. [23] analyzed mobility characteristics, including pause time, speed, and direction of movement, and developed a software model that generates realistic user mobility tracks; however, the trajectory granularity of the studied trace depends on the wireless Local Area Network (WLAN) access point locations and hence may not be applicable to higher-mobility DTNs.

Real mobility traces from Dartmouth College [35] and the Disney World theme park in Orlando [36] were analyzed in [37] to obtain movement characteristics. The authors obtained people's location-visiting probabilities and the distributions of movement speed and pause time from the traces, and their proposed model is configured with the derived distributions in the simulation.

Several works have also proposed techniques for mining mobility patterns or mobility behaviors from trajectory data. Ghosh et al. [38] proposed a mobility pattern mining framework to extract mobility association rules from taxi trips. The framework has three modules: the input module, the spatiotemporal analysis module, and the mobility association generation module. The input module processes the taxi GPS log, the road network, and points of interest, and generates transactions using application-specific mobility rule templates. The spatiotemporal analysis module analyzes travel demand data, partitions regions based on travel demand, and then generates mobility flows. Lastly, the mobility association generation module delineates how the rules can be used to understand urban dynamics.

Yue et al. [39] proposed a trajectory clustering technique for mobility-behavior analysis, formulating mobility analysis as a clustering task. They developed an unsupervised learning technique that resolves the lack of labeled trajectory data required for supervised learning, since their data does not need to be labeled.

Rahman et al. [40] presented a dynamic clustering technique based on processed COVID-19 infection data and mobility data, in which clusters can expand and shrink on the merits of the data.

This study presents an Escape Path Obstacle-based Mobility Model (EPOM) for a campus Delay-Tolerant Network. Our model covers aspects such as daily routines, heterogeneity of time and space, skewed location visiting, and the discovered dynamic cluster evolution. We also develop a novel strategy for collision avoidance between pedestrians and obstacles of different shapes and sizes. Our model mimics the realistic behavior observed in real traces. Table 1 compares existing works and the EPOM mobility model in terms of the most widely studied mobility features and the evaluation methods.
Symbols X and ✔ indicate that the existing work studied the mobility feature, while symbol x indicates the opposite.

Table 1

Comparison of existing works and the EPOM mobility model in terms of the most widely studied mobility features.

| Features | [20] (2012) | [19] (2008) | [7] (2011) | [9] RWP | [37] (2020) | [34] (2009) | [34] (2017) | EPOM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Obstacle-aware | x | X | X | x | X | ✔ | ✔ | ✔ |
| Obstacle shape | x | X | x | x | X | Rectangular shape | Irregular shape | Irregular shape |
| Travel distance | X | ✔ | ✔ | X | X | X | X | ✔ |
| Direction of movement | x | X | x | x | x | X | X | ✔ |
| Pause time | x | ✔ | ✔ | x | ✔ | X | X | ✔ |
| Contact time and intercontact time | ✔ | ✔ | ✔ | x | x | X | x | ✔ |
| Evaluation method | Real traces | Real traces | Real traces | Real traces | Real traces | Obstacle simulation | Obstacle simulation | Real traces and obstacle simulation |
## 3. Characterization of Human Mobility
Several efforts have been devoted to investigating the properties of human mobility and uncovering hidden patterns [23, 41, 42]. Due to the dynamic nature of human mobility, there is no consensus on its characteristic features. The features that require thorough investigation include, but are not limited to, fundamental features such as travel distance and pause time. There is also a need to understand the features of a movement cluster within the community, such as the intracluster travel distance and direction of movement. We explicitly study several fundamental features for the whole domain: the connectivity features (i.e., contact and intercontact time), the spatial feature (i.e., travel distance), and the temporal feature (i.e., pause time). The movement direction within clusters has, to the best of our knowledge, been assumed to be random [41] or reported only for the entire domain [23, 42].

In this study, we use daily GPS track logs collected from two university campuses (NCSU and KAIST) for the location-based traces [36]. Garmin GPS 60CSx handheld receivers were used for data collection; they are WAAS (Wide Area Augmentation System) capable, with a position accuracy better than three meters 95 percent of the time in North America. The GPS receivers take readings of their current positions every 10 seconds and record them in a daily track log. The data are available at [43]. We are interested in the stationary locations at which users stay.

For the contact-based trace, we use the Bluetooth encounters between mobile nodes from the Cambridge city students iMote experiment [44]. The data consist of 10641 contacts between iMote devices carried by students over a duration of about 11.43 days and are available at the repository in [43]. We are interested in the duration for which two devices are in contact with each other (contact duration) and the time between two consecutive contacts of the same pair of devices (intercontact time).

We emphasize that the cluster concept in our study refers to a location at which a person spends much of their time exploring the neighboring locations. It should therefore not be confused with the concept of a social community, which refers to people sharing a physical location, ideas, or common goals. A person can generate more than one cluster within their community, depending on their daily trip schedule. Figure 1 shows the trajectory of user 16 in the KAIST trace, creating four dynamic clusters in one day. In our clustering, we consider only clusters that have more than eight locations within a specified threshold.

Figure 1
The dynamic clusters of KAIST trace file 16. The blue points indicate the complete waypoints in a day, while the red points indicate the waypoint clusters. There are four clusters associated with the user.

Before clustering, we remove transit locations from our traces. This is reasonable because some of the coordinates in the GPS traces do not belong to stationary locations; they belong to transit locations at which a user stays briefly on the way to a destination. Algorithm 1 summarizes the procedure for removing transit locations: point Pi+1 is deleted if the distance between Pi and Pi+1 is greater than a distance threshold, and point Pi is removed from the original trace if the pause time at Pi is less than a time threshold. After Algorithm 1 is executed, the original trace contains only stationary locations.

Algorithm 1: Extracting dynamic clusters from the location-based traces.
Initially: distanceThreshold = Δd, waitingTimeThreshold = Δt, first point = Pi, second point = Pi+1
(1) if distance(Pi, Pi+1) ≥ Δd then
(2)     remove Pi+1
(3) if pauseTime(Pi) ≤ Δt then
(4)     remove Pi

Next, we run an agglomerative clustering technique [45] using the single-linkage method, sometimes called the connectedness or minimum method, and create location clusters based on the similarity of the closest pair of locations. The more we know about movement-cluster properties such as travel distance and direction of movement, the better we can predict pedestrian movement patterns for accurate design against possible crowd congestion or emergency scenarios. A small sketch of this preprocessing and clustering step is given below.
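The following Python sketch illustrates Algorithm 1 together with the single-linkage clustering step. It is an illustrative reconstruction, not the authors' code: the threshold values, the planar-coordinate simplification, and the 30 m merge distance are assumptions; only the single-linkage method and the minimum cluster size of eight come from the text.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def remove_transit_points(points, pause, d_thresh=50.0, t_thresh=60.0):
    """Algorithm 1: points is an (N, 2) array of planar coordinates,
    pause an (N,) array of pause times in seconds (thresholds assumed)."""
    keep = np.ones(len(points), dtype=bool)
    for i in range(len(points) - 1):
        if np.linalg.norm(points[i + 1] - points[i]) >= d_thresh:
            keep[i + 1] = False   # long jump: p_{i+1} is a transit point
        if pause[i] <= t_thresh:
            keep[i] = False       # short stay: p_i is a transit point
    return points[keep]

def dynamic_clusters(stationary, merge_dist=30.0, min_size=8):
    """Single-linkage agglomerative clustering of the stationary waypoints;
    only clusters with more than min_size locations are kept, as in the paper."""
    labels = fcluster(linkage(stationary, method="single"),
                      t=merge_dist, criterion="distance")
    return [stationary[labels == c] for c in np.unique(labels)
            if (labels == c).sum() > min_size]
```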
### 3.1. Intracluster Direction of Movement
The direction of movement within movement clusters has not been well studied despite its impact on mobility patterns. Some related works report an aggregate distribution of movement directions for the whole domain instead of for movement clusters [41, 42]. Our study takes a different approach: it studies the direction of movement within dynamic clusters to understand the properties of the direction angle.

Figure 2 shows a weighted Probability Density Function (PDF) of the movement directions within clusters from the NCSU trace with a bin size of 1°. We weight the direction of each movement by its movement duration. The direction of movement is biased, with symmetry, toward some preferred locations, which implies that the movement within a dynamic cluster is not random; it favors the directions of popular locations. The symmetric distribution was expected because nodes tend to return to their main locations after exploring nearby points of interest. We can also deduce that students visit common locations for their activities, which results in similar aggregated angle distributions with a symmetric bias toward angles between 90°–150° and 240°–330°, respectively. Nodes move in other directions as well, but with smaller frequencies than toward the points of interest; this implies that geographical restrictions, such as movement constrained to roads, are not the driving factor behind the biased symmetric angle distribution. In contrast, the aggregate weighted PDF for the whole domain, shown in Figure 3, has a symmetric shape but is almost uniformly distributed. A short sketch of how such a weighted direction histogram can be computed follows the figure captions below.

Figure 2

The biased symmetric distribution of direction angles for the dynamic clusters (NCSU traces). The x-axis represents the direction angle (in degrees), and the y-axis is the density of movement toward a given direction. The bin size is 1°. Each direction is weighted by the duration of its movement.

Figure 3

The uniform distribution of direction angles for the whole domain (NCSU traces). The x-axis represents the direction angle (in degrees), and the y-axis is the density of movement toward a given direction. The bin size is 1°. Each direction is weighted by the duration of its movement.
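As an illustration of how the weighted direction PDFs in Figures 2 and 3 can be computed, the sketch below bins movement directions into 1° bins and weights each movement by its duration. The input layout is an assumption; the 1° binning and duration weighting follow the paper.

```python
import numpy as np

def direction_pdf(points, durations):
    """points: (N, 2) consecutive waypoints of one cluster;
    durations: (N-1,) movement durations used as weights."""
    d = np.diff(points, axis=0)
    angles = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 360.0
    # 1-degree bins, each movement weighted by how long it took
    hist, edges = np.histogram(angles, bins=360, range=(0.0, 360.0),
                               weights=durations)
    return hist / hist.sum(), edges
```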
### 3.2. Intracluster Travel
We study the travel distances between consecutive locations within a cluster at which a node spends a long time exploring neighboring locations. We fit four parametric models to the empirical KAIST intracluster travel distances, as shown in Figure 4. The distribution that best fits the data is the lognormal distribution, with parameters 2.29989493 and 0.8685148 for the log mean and log standard deviation, respectively, as shown in Figure 5 and by the Kolmogorov-Smirnov (KS) test in Table 2. This shows that students take repeated short walks around popular locations such as classes, libraries, and dormitories. A sketch of this fitting step follows Table 2.

Figure 4

Four different distributions fitted to the KAIST intracluster distance trace; all distributions fit the empirical data, but the power law and lognormal show better matching features.

Figure 5

Lognormal fit to the KAIST intracluster travel distance.

Table 2

KAIST intracluster distance goodness-of-fit (gof) table.

| Dist | gof | Ntails | Crit. val | Remark |
| --- | --- | --- | --- | --- |
| Lognorm | 0.01564494 | 815 | 0.04764 | Accept |
| Power law | 0.03181678 | 1900 | 0.03120 | Reject |
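A minimal sketch of the fitting and KS goodness-of-fit step behind Figure 5 and Table 2, assuming SciPy. The input file name is hypothetical, and the exact fitting procedure used by the authors is not specified in the text.

```python
import numpy as np
from scipy import stats

# Hypothetical input file of intracluster travel distances (meters)
dists = np.loadtxt("kaist_intracluster_distance.txt")

# Fit a lognormal with the location pinned at zero (distances are positive);
# the paper reports a log mean of ~2.2999 and a log std of ~0.8685
shape, loc, scale = stats.lognorm.fit(dists, floc=0)
log_mean, log_std = np.log(scale), shape

# One-sample Kolmogorov-Smirnov statistic against the fitted lognormal
ks = stats.kstest(dists, "lognorm", args=(shape, loc, scale))
print(f"log mean={log_mean:.4f}, log std={log_std:.4f}, D={ks.statistic:.5f}")
```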
### 3.3. Pause Time Distribution
The pause time distribution is one of the temporal features of human mobility and plays a vital role in its diffusive nature. It dictates the amount of time a node spends at a location with zero or near-zero velocity. Figure 6 shows four different parametric models fitted to the empirical data of the KAIST trace. After the KS goodness-of-fit test, we found that the power-law distribution is plausible; there is not enough evidence to support its rejection, as shown in Table 3. Figure 7 shows that the power law has a threshold value (xmin) of four minutes (240 s) and a cut-off at about 16 hours. The power-law pause time distribution indicates a scale-free characteristic.

Figure 6

Four different distributions fitted to the KAIST trace pause time.

Table 3

KAIST pause time goodness-of-fit (gof) table.

| Dist | gof | Ntails | Crit. value | Remark |
| --- | --- | --- | --- | --- |
| Power law | 0.02367702 | 850 | 0.04665 | Accept |
| Exponential | 0.1315407 | 386 | 0.06922 | Reject |

Figure 7

A pause time distribution for the KAIST trace. The distribution exhibits power-law decay with an exponential cut-off.
### 3.4. Intercontact Time
In this section, we characterize the empirical data from the iMotes experiments at Cambridge [44]. The data include traces of Bluetooth sightings by groups of users carrying small devices (iMotes) for five days. Our goal is to extract the distribution of intercontact times from the dataset for further analysis. Figure 8 shows the aggregate CCDF of the intercontact durations of the empirical data. The distribution follows a power law with an exponent of approximately 1.4, but the power-law decay is outweighed by an exponential decay toward the end of the distribution; such a distribution is called a truncated power law, similar to the results presented in [6]. The power-law feature of the intercontact time distribution is interesting because it dictates the scale-free properties of an opportunistic network. A sketch of the empirical CCDF computation is given after the figure caption.

Figure 8

Power-law distributions with different values of lambda fitted to the Cambridge iMotes trace intercontact time distribution.
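For completeness, the empirical CCDF P[X > x] used throughout this section can be computed as in the short sketch below. This is a generic construction, not code from the paper.

```python
import numpy as np

def empirical_ccdf(samples):
    """Empirical CCDF P[X > x] of a 1-D sample, for log-log plotting."""
    x = np.sort(np.asarray(samples, dtype=float))
    p = 1.0 - np.arange(1, len(x) + 1) / len(x)
    return x, p

# Usage (hypothetical data): x, p = empirical_ccdf(intercontact_times),
# then plot on log-log axes, e.g., plt.loglog(x, p)
```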
## 4. EPOM Model
EPOM was developed by integrating several submodels, or communities, into one functional model. Each submodel captures a specific, realistic activity of a campus environment as observed in the empirical data of the studied traces [36, 44]. We categorize the realistic activities into home, study, eating, sports, and off-campus activities. In addition to the submodels, a switching model plays an important role in switching the status of nodes between the submodels; this is referred to as intercluster movement. It helps capture the realistic nature of student life on a campus, which is repetitive and heterogeneous in time and space. When a node changes its status (e.g., from home to study), it uses its preferred method of transport to move to the new destination. We adopt two methods of transport in the model: walking and bus. The EPOM model also captures personnel (e.g., faculty and departmental staff) movement, such as walks to the cafeteria, and off-campus activities such as shopping.

One of the features distinguishing EPOM from the previous works in [7, 8, 19, 20] is the consideration of static and moving obstacles along the movement path from source to destination. EPOM ensures collision-free movement along the trajectories. We model this by strategically placing objects of different shapes and sizes on the movement trajectories and developing an algorithm that generates detour paths. This is reasonable because a path is needed to escape a nonmoving pedestrian blocking the movement path. A pictorial description of the different submodels is depicted in Figure 9.

Figure 9
EPOM submodels: the large rectangles represent the submodels (i.e., home, study, cafeteria, sport, and off-campus). The line connecting them is a transport submodel. The red shapes are obstacles that affect both mobility and signal propagation.
### 4.1. Home Submodel
Home is the starting point of the simulation. Initially, a predefined location is assigned to each node in the home location file. These locations are used for sleeping or a node's free time. A node's daily routine activities start in the morning when it wakes from the sleeping state. Each node is assigned a wake-up time, which determines when the node wakes from sleeping; the wake-up time obeys a normal distribution with a mean of seven o'clock and a configurable standard deviation.

After waking up, a node checks its lecture schedule and decides whether to go to a lecture or do some in-home activities such as cooking, watching the morning news, doing laundry, or visiting a friend at a nearby dormitory. These short walks account for the possible evolution of the first dynamic cluster. Some nodes leave home without doing any internal activities. Depending on the time of day and the node's lecture schedule, a node can switch from home to other submodels. For example, a node may switch to the sport submodel in the evening to play games, to the eating submodel for dinner, or to the off-campus submodel for shopping or visiting a friend in another location. This flexibility of EPOM captures social influence and heterogeneity in time and space.
### 4.2. Study Submodel
We assign specific locations on the map as lecture rooms. When a node is in a lecture room, it walks within the room and pauses for the lecture duration. The pause time distribution is location-dependent in our model: the pause time for a lecture differs from the pause time at the cafeteria, while the pause time at nonspecific locations is drawn from the truncated power-law distribution observed in the empirical data. We turn off the pause time completely during the lecture period for 80 percent of the nodes; only 20 percent can make some movement within the lecture room. This captures the realistic behavior of students changing desks or forming discussion groups. At the end of the lecture, a node decides whether to walk to the laboratory or the library. This internal movement is modeled as an intracluster walk within the vicinity of the study area, with the libraries, laboratories, and other study-related locations as waypoints.
### 4.3. Eating Submodel
Some strategic locations on the map are defined as cafeterias. When it is time for lunch or dinner, a node may switch to the eating submodel and move to a cafeteria to eat. Eating times are uniformly distributed from 11:00 a.m. to 2:00 p.m. for lunch and from 6:00 p.m. to 8:00 p.m. for dinner. While in the cafeteria, a node waits, makes some intracluster walks, gets served, eats, and then switches to another submodel. During eating activities in a large cafeteria, we observed a large crowd of students within a confined location, hence the need for collision avoidance to allow a smooth flow of students.
### 4.4. Sport Submodel
We define some points on the map as playgrounds; the time for sport is also defined. A node in the sport submodel spends some time at the playground watching or doing some random intracluster movements around the vicinity of the playground.
### 4.5. Off-Campus Submodel
The off-campus submodel models all activities not included in the home, study, eating, and sport submodels. These activities include shopping, evening walks, or visiting friends. We define some points of interest (PoIs) on the map edges as meeting points. We have two types of PoIs: location-preference PoIs and Bus Normal PoIs with uniform preferences. Mobile nodes visit such locations in groups, to capture the group mobility characteristics and social influences of human mobility, and individually, to capture independent mobility freedom. The minimum and maximum group sizes are defined in the default settings file.
### 4.6. Transport Submodel
This submodel is used to move between the different submodels when a node switches mode. We define two means of transport in our model: walking and bus riding. Most nodes walk, while the bus is mostly used for off-campus activities; the probability of moving by bus is configured in the settings. The heterogeneity in the transport submodel has a great impact on the performance of routing protocols for DTNs, since high-speed nodes can deliver messages quickly to long-distance destinations.

Bus service is accessible to nodes at predefined bus stops. A node first walks to the nearest bus stop and waits for the bus; when the bus arrives, the node boards and alights at the bus stop closest to its destination, then switches to the walking submodel to complete its journey.

The nodes in our model move on a map, which is another aspect of realism. The map contains the homes, classes, cafeterias, playgrounds, shops, PoIs, and bus stops. The map data are essential for restricting the movement of the nodes to specific areas, which helps increase node localization, and they are used to distribute nodes uniformly in the simulation area.

The EPOM model generates mobility patterns through intercluster and intracluster movements. At each time instant, a node is in either intercluster or intracluster movement mode, controlled by the two-state Markov model in Figure 10. When a node is in intracluster movement mode, it explores the points of interest within its community and either walks to the preferred PoIs or generates a travel distance drawn from a lognormal distribution bounded by the community size.

Figure 10

Two-state Markov model for switching between the intercluster and intracluster movements.

The direction of movement is drawn from the biased symmetric direction distribution in the range [0, 2π); see Figure 2. The lognormal distribution of the intracluster travel distance means that nodes visit closer locations more frequently than distant ones. A minimal sketch of this switching logic is given below.
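The switching logic of Figure 10 reduces to a two-state Markov chain. The sketch below is a minimal illustration; the transition probabilities are assumptions, since the text does not report their values.

```python
import random

# Illustrative stay probabilities; the paper does not report the
# transition values of the two-state Markov model in Figure 10
P_STAY = {"intra": 0.8, "inter": 0.6}

def next_mode(mode):
    """One step of the two-state Markov switch between intracluster
    ('intra') and intercluster ('inter') movement."""
    if random.random() < P_STAY[mode]:
        return mode
    return "inter" if mode == "intra" else "intra"
```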
### 4.7. Obstacle Submodel
The obstacle submodel describes how the EPOM model handles collision avoidance between nodes and other obstructing objects along their movement trajectories. In the case of static obstacles with zero speed, such as pedestrians standing on the road, at the middle of the corridors, or any other stationary object, we define the location of different obstacles on the map using OpenJUMP (http://openjump.org/) geographic information system program as in Figure 11.Figure 11
An example of visualization of a mobility scenario on the ONE simulator. The red irregular polygons mimic random obstacles. The blue numbered icons stand for nodes. The gray line stands for normal trajectories without obstacles, and the green line represents a new trajectory created by node E96 using the Escape path mobility model.The transport submodel moves the node from the current location (e.g., home) to the destination (e.g., class).The Dijkstra shortest path algorithm calculates the shortest path from the current location to the destination. We have two scenarios here: in the first scenario, there is no obstacle on the path, while in the second scenario, an obstacle is encountered along the shortest path. In the first scenario, a node would follow the shortest path to its destination without obstruction, but in the second scenario, a node would explore the logic in Algorithm2 to generate an escape path using the following transitions:Algorithm 2: Escape path movement for node i.
Initially: escapeVertex = ∅, neighbors = ∅, distToDest = ∅
(1) get the obstacle's vertices
(2) escapeVertex := nearest vertex
(3) repeat
(4)     move to the escapeVertex
(5)     neighbors := neighbor vertices
(6)     escapeVertex := nearest neighbor
(7) until distToDest(escapeVertex) ≤ distToDest(all neighbors)
(8) move to the destination
(1) Move along the shortest-path trajectory until an obstacle is reached, keeping a minimal distance to the obstacle.
(2) Generate an escape path using Algorithm 2.
(3) Complete the movement to the next obstacle (if there is more than one) or to the destination.
(4) Repeat steps 2 and 3 until the final destination is reached.

Algorithm 2 avoids collision with an obstacle by generating an escape path, as shown in Figure 12. In Algorithm 2, line 1 gets the coordinates of the obstacle's vertices V = (A, C, D, F, E, B); note that the shape of an obstacle determines the number of vertices. In line 2, the node finds the nearest vertex A, and it moves to vertex A in line 4. It finds the neighbors of vertex A (i.e., B and C) in line 5 and sets the next escape vertex to the nearest neighbor B in line 6. It then checks the condition in line 7: if the distance from its current location A to the destination is less than the distances from its neighbors C and B to the destination, it moves directly to the destination (line 8); otherwise, it returns to line 3.

Figure 12
Escape path generated with Algorithm 2.

Considering the human behavior of walking along the edges of an obstructing body until passing the section of the obstacle that blocks the way, the algorithm behaves similarly by creating a path beside (but not on) the edges of the obstacle. Some existing works have proposed a Bezier curve [32] or branching to the closest neighbor node [28], which is not always realistic: a human path escaping an obstacle is not always a smooth curve, and an isolated obstacle may not have a close neighbor. A simplified sketch of the escape-path logic is given below.
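The core of Algorithm 2 can be sketched as follows. This is a simplified planar reconstruction, not the authors' ONE-simulator code: it assumes the obstacle's vertices are given in boundary order and omits the "keep a minimal distance" offset mentioned in transition (1).

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def escape_path(position, destination, vertices):
    """vertices: obstacle corner points (x, y) in boundary order (e.g., A..F)."""
    path = []
    current = min(vertices, key=lambda v: dist(position, v))   # line 2
    while True:
        path.append(current)                                   # line 4: move
        i = vertices.index(current)
        neighbors = [vertices[i - 1],
                     vertices[(i + 1) % len(vertices)]]        # line 5
        best = min(neighbors, key=lambda v: dist(v, destination))  # line 6
        if dist(current, destination) <= dist(best, destination):  # line 7
            break
        current = best
    path.append(destination)                                   # line 8
    return path
```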
## 5. Model Implementation
The Escape Path Obstacle-based Movement model was implemented on the Opportunistic Network Environment (ONE) simulator [46, 47] as a collection of different submodels. ONE supports several movement models, such as Random Waypoint Movement (RWP), Map-Based Movement (MBM), the Shortest Path Map-Based Movement model (SPMBM), and the Route-Based Movement model (RBM). MBM is a special type of random walk in which nodes move along the map paths defined in Well-Known Text (WKT) files. We used the OpenJUMP Geographic Information System (GIS) program to define the locations of obstacles, homes, classes, cafeterias, playgrounds, shops, and points of interest for off-campus activities. We created a main movement model that inherits from the extended movement model of ONE and controls the movement of nodes going to school, to the cafeteria, to sport, shopping or similar activities outside the campus, and finally returning home to sleep. The main model orders and switches between submodels, passes control to the submodels responsible for the different activities, facilitates movement to the destination by giving destination information to the transport submodels, and decides on the probability of walking or using the bus based on the settings configuration.

In a real scenario, obstacles such as floors, walls, buildings, or mountains exist and impact both mobility and signal attenuation. To reflect this impact, we modify the method "isWithinRange(DTNHost anotherHost)" of the ONE simulator's NetworkLayer class to reflect the signal's attenuation in the propagation model. When a node's signal propagates through an obstacle, it suffers attenuation due to the effects of diffraction, reflection, and scattering; some attenuation results [33] are presented in Table 4. The attenuation values are taken at random from a uniform distribution between 40 and 60 dB, which reflects the fact that obstacles have at least double walls. A connection is created only when the radio signal is greater than a fixed threshold (transmitting range). A hedged sketch of this check is given after Table 4.

Table 4
Power attenuation values.
| | Home | Office |
| --- | --- | --- |
| Single wall | 6–20 dB | 6–20 dB |
| Double wall | 40–50 dB | 50–60 dB |
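The connectivity check described above can be sketched as follows. The actual modification was made to the Java method isWithinRange(DTNHost anotherHost) in the ONE simulator; this Python version, with its parameter names and dB bookkeeping, is only an illustrative analogue.

```python
import random

def is_within_range(tx_power_db, threshold_db, crosses_obstacle):
    """Sketch of the modified range check: subtract a random 40-60 dB
    wall loss when the line of sight crosses an obstacle (cf. Table 4)."""
    attenuation = random.uniform(40.0, 60.0) if crosses_obstacle else 0.0
    # A connection is created only if the received signal clears the threshold
    return tx_power_db - attenuation > threshold_db
```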
## 6. Validation
Our goal is to show that our conceptual model (EPOM) is generic enough to be fine-tuned with a few parameters to match the NCSU GPS traces [36] in terms of the spatial features (intracluster travel distance and intracluster direction of movement) and the temporal feature (pause time). We also show that the EPOM connectivity features match those of the iMote real traces [44] in terms of the contact duration and intercontact time distributions. After wake-up, a node starts to walk using the current mobility model and switches to different locations from its current location using the five-state Markov model, depending on the time of day. See Table 5 for the list of simulation parameters.

Table 5
Summary of the simulation parameters.
| Parameter | Value |
| --- | --- |
| Number of nodes | 1000 |
| Simulation length | 500,000 sec |
| Transmit range | 10 m |
| Obstacle path transmit range | [5, 10] m |
| World size | 5000 × 3000 m² |
| Walking speed | [1, 3] m/s |
| Bus speed | [7, 10] m/s |
| Transmit speed | 250 kBps |
| Routing protocol | Epidemic |
| Interface type | Simple broadcast interface |
| Buffer size | 50 MB |
| Message size | [500 kB, 1 MB] |
| Message interval | [25, 35] sec |
| Message TTL | 1,430 sec |

We simulate the random waypoint model on a simulation area of the same size with 1000 nodes uniformly distributed. Each node randomly chooses a waypoint and moves with a speed of 0.5–5 m/s; when a node reaches the destination, it pauses for 1–3600 s. Both the speed and the pause time are uniformly distributed.

The simulation was run for a length of T = 5 × 10^5 s, which is approximately five days. We assume all events are uniformly distributed over a longer period of time and consider the probability p(x) of an event of length x. We record only events that begin and end within the observed interval. We create the Complementary Cumulative Distribution Function (CCDF, P[X > x]) for the distributions of contact duration, intercontact time, intercluster travel distance, intracluster travel distance, intracluster movement direction, and pause time.

Settings: our simulation environment is a map of parts of the Université Paris-Saclay campus, edited using the OpenJUMP geographic information system program, with 1000 nodes moving in an area of roughly 5000 × 3000 m². We created different WKT files for the map roads, homes, lecture rooms, cafeterias, sport and off-campus activity locations, PoIs, and obstacles. Each node is assigned a unique home located on the map as its starting point in the simulation, along with a wake-up time drawn from a normal distribution.
### 6.1. Spatiotemporal Features
We start with the intracluster features, which are among the most important aspects of our study. We divide the main simulation domain into a number of equal-size communities to account for the dynamic clusters, denoted c ∈ {1, …, Nc}, where Nc is the total number of communities in the domain. During our analysis, we found that each walker is associated with an average of three dynamic clusters per day, as shown in Figure 13, depending on the degree of repetitiveness of the user's schedule. This type of temporal mobility feature can be exploited to predict a possible user location; similarly, it can be used by an opportunistic routing protocol to schedule packet forwarding.

Figure 13
Number of dynamic clusters per trace file in KAIST traces.After tuning our model, it generates matching walking clusters with the KAIST data. Figure14 shows one-day dynamic clusters of Node 4 generated from the EPOM model. The generated clusters matched with that of the KAIST trace in Figure 1 for the trace file sixteen [36].Figure 14
The synthetic clusters generated by node four in the EPOM model. The red numbers represent mobile nodes, the green points indicate waypoints, and the green lines represent the node trajectories. We can see the waypoints as dynamic clusters.

Next, we focus on the intracluster travel distance to capture the neighborhood exploration observed in the real traces. In our model, at each time instant, a node is in either intercluster or intracluster movement mode, managed by the two-state Markov model in Figure 15. When a node is in intracluster movement mode, it explores the points of interest within its community and either walks to the preferred PoIs or generates a travel distance drawn at random from a lognormal distribution bounded by the community size. The direction of movement is drawn at random from the biased symmetric distribution of the empirical data shown in Figure 2. The lognormal distribution of the intracluster travel distance means that nodes visit closer locations more frequently than distant ones.

Figure 15

Two-state Markov model for intercluster and intracluster movement.

Figure 16 shows the distribution of direction angles generated from the synthetic traces of the EPOM model. The distribution is similar to that of the NCSU trace in Figure 2. The main take-home message from the two distributions is that movement within dynamic clusters is not random but biased toward some PoIs and popular locations within the community. A minimal sketch of the intracluster sampling step is given below.
Figure 16

The biased symmetric distribution of direction angles for the EPOM clusters. The x-axis represents the direction angle (in degrees), and the y-axis is the density of movement toward a given direction. The bin size is 1°. Each direction is weighted by the duration of its movement.

Figure 17 shows the intracluster travel distance generated from our model compared with that of the empirical data. The two distributions are similar over a long range but differ slightly at the tail.

Figure 17
Intracluster travel distance for the EPOM and KAIST traces. Both curves follow a lognormal distribution, meaning people visit preferred nearby locations more often than distant ones.
The intercluster travel distance for the EPOM, KAIST, and RWP models. The curves for the EPOM model and KAIST traces exhibit power-law decay for a long period, supporting the realistic nature of the human mobility pattern for taking short walks more than a long journey. The RWP curve is uniformly distributed and does not differentiate between short walks and long journeys.Mobility temporal characteristics analysis of user’s temporal locations at a certain period gives us an insight into the possibility of predicting users’ location, how long a user could stay at a given location, that is, pause time, when the user is expected to return to a given location, that is, return time and why a user exhibits a skewed visiting behavior to some locations, that is, dynamic community walk.We study the pause time distribution of the KAIST campus traces in [36] and tune the EPOM model to generate a pause time distribution similar to the empirical distribution observed. Figure 19 shows the pause time distribution of the KAIST trace and EPOM traces. The distribution is found to be power law with a heavy tail. This shows that students spent a long time at some locations, such as lecture rooms but stayed for a short time at most locations such as shopping malls and cafeterias. This distribution is consistent with the distribution of pause time observed in Dartmouth campus real traces in [23].Figure 19
The pause time distribution of the KAIST and EPOM traces. The figure indicates that humans stay only briefly at most of the places they visit and stay long at only a few locations.

The fact that users pause for a long time at some preferred locations also indicates that users predominantly take short walks within the community of such locations. We observed that users are associated with an average of three dynamic clusters per working day, which evolve over time, as shown in Figure 13. This holds for all users except stationary ones.
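To check how closely a tuned synthetic pause time distribution tracks the empirical one, a two-sample Kolmogorov-Smirnov test can be used. The sketch below is a minimal illustration of that comparison step in Python, assuming hypothetical trace files with one pause duration (in seconds) per line; it illustrates only the comparison, not the tuning procedure itself.

```python
import numpy as np
from scipy import stats

def load_pauses(path):
    """Load one pause duration (seconds) per line; hypothetical file format."""
    return np.loadtxt(path)

# Hypothetical file names, used for illustration only.
empirical = load_pauses("kaist_pause_times.txt")
synthetic = load_pauses("epom_pause_times.txt")

# Two-sample KS test: a small statistic (and a p-value above the chosen
# significance level) indicates the synthetic pause times are plausibly
# drawn from the same distribution as the empirical ones.
stat, p_value = stats.ks_2samp(empirical, synthetic)
print(f"KS statistic = {stat:.4f}, p-value = {p_value:.4f}")
```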
### 6.2. Connectivity Features
In this section, we investigate how closely the EPOM model reproduces the distributions of the studied connectivity metrics observed in the empirical data of the realistic traces in [44]. On each plot, we compare the distributions generated by EPOM, the iMote traces, and Random Waypoint.

Figure 20 shows the aggregate distribution of contact duration for EPOM, the iMote traces, and RWP. Each plot shows the complementary cumulative distribution function (CCDF) of contact duration on a log-log scale. The EPOM distribution follows power-law decay over a long range, similar to the distribution of the iMote traces; this is consistent with the findings of most research on human mobility contact distributions [50]. The RWP distribution contains only short contacts and decays exponentially. The power-law feature of human mobility indicates that most nodes have contact opportunities only for a short time, while a few nodes stay connected much longer. A DTN routing algorithm can exploit this feature, in conjunction with the spatiotemporal features, to decide the best way to route a message from the source to the destination(s).

Figure 20
The CCDF of the contact time distribution for the EPOM, iMote, and RWP models. The EPOM model follows a power-law distribution over a long range, just like the iMote traces, whereas RWP follows an exponential distribution with very short contacts.

Figure 21 shows the intercontact time (ICT) distribution for EPOM, the iMote traces, and RWP. Both the EPOM and iMote curves exhibit power-law decay with an exponential cut-off, unlike RWP, which follows an entirely exponential distribution. The ICT distribution of EPOM is also consistent with the realistic ICT features discovered in [51]. The power-law nature of ICTs plays an important role in DTNs, as it fundamentally impacts the behavior of networking protocols [51]. Although a shorter intercontact time means a more frequent connection, nodes with longer intercontact times are more likely to have new data to share.

Figure 21
The CCDF of the intercontact time distribution for the EPOM, iMote, and RWP models. Both the EPOM and iMote curves exhibit power-law decay with an exponential cut-off, unlike RWP, which follows an entirely exponential distribution.

Figure 22 presents the number of contacts for each simulation hour and shows the repetitiveness of hourly activities. We use 43,200 seconds (12 hours) as the working-day length. The contacts per hour for RWP are uniform throughout the simulation. Observe the repetitive behavior of EPOM, which captures students' daily routine activities at specific hours of the day.

Figure 22
The distribution of contacts for each simulation hour. The EPOM model shows repetitive hourly activity, unlike RWP, which shows a uniform distribution of activity across hours.
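The hourly contact counts of Figure 22 can be reproduced from a contact log by simple binning. The following is a minimal sketch assuming a hypothetical CSV log of (node_a, node_b, contact_start, contact_end) records with times in seconds; the field layout is illustrative and is not the ONE simulator's report format.

```python
import csv
from collections import Counter

WORKING_DAY = 43200  # seconds in one simulated working day (12 hours)

def contacts_per_hour(log_path):
    """Count contacts falling in each hour of the working day.

    Assumes a hypothetical CSV log with columns:
    node_a, node_b, contact_start, contact_end (times in seconds).
    """
    counts = Counter()
    with open(log_path, newline="") as f:
        for node_a, node_b, start, end in csv.reader(f):
            hour = (int(float(start)) % WORKING_DAY) // 3600
            counts[hour] += 1
    return [counts.get(h, 0) for h in range(WORKING_DAY // 3600)]

# Example usage with a hypothetical file name:
# print(contacts_per_hour("epom_contacts.csv"))
```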
## 7. Conclusion and Future Work
In this paper, we conducted an in-depth study of human mobility patterns using realistic datasets of Bluetooth encounters and Global Positioning System (GPS) track logs at a fine-grained level to better understand human mobility properties and uncover hidden patterns. We discovered time-varying human mobility patterns associated with a dynamic evolution of movement clusters. We proposed a new mobility model that mimics the realistic mobility patterns of real-world traces in the presence of obstacles of different shapes and sizes. The model describes various student activities that routinely evolve over time, such as attending lectures, going to the cafeteria, playing sports, and shopping. We showed that the model reproduces distributions of the intercluster travel distance, intracluster travel distance, intracluster direction of movement, contact duration, intercontact time, and pause time similar to those of realistic traces.

For future work, we intend to extend the model to urban scenarios through an extensive study of the dynamic cluster evolution of pedestrians and bus commuters. We also intend to design an efficient prediction framework for human mobility that exploits the existing and newly uncovered features to predict a user's next displacement, stay duration, and possible contacts.
---
# Escape Path Obstacle-Based Mobility Model (EPOM) for Campus Delay-Tolerant Network

**Authors:** Sirajo Abdullahi Bakura; Alain Lambert; Thomas Nowak

**Journal:** Journal of Advanced Transportation

(2021)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2021/1018904

---
## Abstract
In Delay-Tolerant Networks (DTNs), humans are the main carriers of mobile devices, signifying that human mobility can be exploited by extracting nodes' interests, social behavior, and spatiotemporal features for the performance evaluation of DTN protocols. This paper presents a new mobility model that describes students' daily activities in a campus environment. Unlike the conventional random walk models, which use a free-space environment, our model includes a collision-avoidance technique that generates an escape path upon encountering obstacles of different shapes and sizes that obstruct pedestrian movement. We evaluate the model's usefulness by comparing the distributions of its synthetic traces with realistic traces in terms of the spatial, temporal, and connectivity features of human mobility. Similarly, we analyze the concept of dynamic movement clusters observed in the location-based trajectories of the studied real traces. The model synthetically generates traces whose distributions of the intercluster travel distance, intracluster travel distance, direction of movement, contact duration, intercontact time, and pause time are similar to the distributions of real traces.
---
## Body
## 1. Introduction
Mobility patterns play an essential role in the performance of wireless networks with intermittent connections such as Delay-Tolerant Networks (DTNs). Features associated with these networks include persistent disconnections, the absence of simultaneous end-to-end communication routes, sparse topology, long delays among nodes due to mobility, and sparse deployment of nodes. However, a weak form of connectivity can be achieved in DTNs by exploiting the temporal dimension and node mobility [1]. Considerable research effort has recently been devoted to enabling communication between network entities with intermittent connectivity [2].

Moreover, the forwarding opportunities of DTNs depend on the mobility patterns that dictate contact opportunities between nodes for reliable information forwarding. Interestingly, humans are the main carriers of mobile devices. There is therefore a need to understand the underlying behavior of pedestrian mobility, the driving forces that influence the motivation to move, and the repulsive forces that describe interaction with environmental constraints. These are essential for designing a realistic mobility model as a tool for wireless network protocol evaluation; hence the need for a model grounded in empirical studies of pedestrian mobility and of interaction with other objects in the environment, paving the way for better event management, emergency rescue operations, and congestion prediction at narrow bottlenecks.

This study investigates the mobility characteristics of pedestrians using real traces and proposes an obstacle-based mobility model for DTNs. The model closely replicates the empirical features observed in the analyzed traces and generates spatial, temporal, and connectivity features similar to those generated by realistic human mobility, thereby enhancing opportunistic forwarding in DTNs and supporting pedestrian collision avoidance in crowds or emergency rescue operations.

Several mobility models have been proposed in [3–10]; they can be categorized as synthetic or trace-based. Synthetic mobility models are less realistic than trace-based ones; on the other hand, trace-based models are much more difficult to develop. In addition to the models that describe pure human mobility, the models presented in [11–13] explore cognitive-science modeling using the driving forces that influence a pedestrian's internal motivation to move in a given direction at a given speed and the repulsive forces that describe the pedestrian's interaction with other pedestrians and with environmental constraints such as obstacles, using empirical data obtained from laboratory-controlled experiments. In this regard, we concentrate on the pedestrians' interaction with static and moving obstacles.

Starting with the conventional mobility models, random walks are the most widely used synthetic models for the analysis of node mobility [3, 4]. Random walk models generate mobility patterns in which mobile nodes display completely random behavior. In this regard, only a few wireless networks (e.g., sensor networks for animal tracking [14, 15]) display such randomness; the majority of wireless networks strictly obey certain mobility rules.
Pedestrian mobility is not completely random but is influenced by features specific to humans, resembling intentional mobility toward points of attraction. By contrast, the random waypoint model [8, 16, 17] is considered the first synthetic model that attempts to capture intentional human movement, which the random walk models miss. Nevertheless, the model was shown to be unrealistic in [9] due to its failure to reach a steady state, resulting in a continual decrease of the average node speed over time. This property can lead to unreliable results. The simple fixes and modifications to the random waypoint model presented in [9] still fail to capture intentional human mobility toward locations driven by the strength of a social relationship or connection; for instance, a student might go to class for a lecture, to the cafeteria to eat, or to visit a friend in a nearby dormitory.

Node movement is not restricted to pathways in the random walk and random waypoint models. The Manhattan mobility model [18], in contrast, restricts node movement to the pathways in the simulation area.

Generalizations of several classical models that develop synthetic mobility models for mobile networks while satisfying certain statistics were presented in [6, 7, 19, 20]. Among the statistical features studied are the flight distribution (the straight-line distance covered between two consecutive waypoints), the pause time distribution (the amount of time a node pauses at a waypoint), and the intercontact time distribution (the amount of time between two contacts of the same pair of nodes) at different scales of time and space.

To develop mobility models using empirical data, which is the approach adopted in our study, the works in [21–23] extract detailed mobility data from real traces and calibrate the uncovered mobility features in their models. The studies in [21, 22] recognize contact opportunities when users are associated with the same Wi-Fi access point; Kim et al. [23] extended the idea by considering users within communication range of each other. Kim et al. [23] proposed a synthetic mobility model based on user mobility characteristics extracted from wireless network syslog traces, but it considers only Wi-Fi access points, which limits the granularity of the mobility trajectories. In [11], a controlled laboratory experiment was conducted to study the behavioral effects of interactions between pedestrians; the study extracts individual behavioral laws from the statistical features observed in the empirical data.

The contributions of this paper are threefold:

1. We characterize the spatial, temporal, and connectivity features of human mobility using real traces.
2. We conduct an in-depth study of the movement displacements and directions within movement clusters, which are characterized by short walks in a confined area.
3. We propose an Escape Path Obstacle-based Mobility Model (EPOM) for campus DTNs and show that the model is generic enough to be fine-tuned with a few parameters to match the spatiotemporal and connectivity features observed in the real traces.

This paper is organized as follows: Section 2 reviews related work. Section 3 explains the characterization of human mobility. Section 4 describes the Escape Path Obstacle-based Mobility Model and its submodels. Section 5 describes the implementation of the proposed model. Section 6 presents the simulation settings and results. Finally, conclusions and future perspectives are given in Section 7.
## 2. Related Work
The increasing interest of the research community in DTNs and the impact of mobility on their performance have led to the development of several mobility models focusing on different mobility features [6, 7, 19, 20, 24]. Nevertheless, the conventional synthetic stochastic models [3, 4, 9, 25, 26], meant for the performance analysis of network protocols in early ad-hoc networks, are insufficient to capture users' intentional behaviors and social attractions. Several works have investigated the adaptability of the conventional models to next-generation mobile networks such as DTNs, Vehicular Ad-hoc Networks (VANETs), and Wireless Sensor Networks (WSNs). These studies found that human mobility is characterized by intentional movement, as opposed to the random assumptions of the conventional models [7, 19, 20].

Although synthetic models that capture intentional human behavior are more realistic than the conventional models, trace-based models [22, 27] appear even more realistic because they are mostly generated for a specific scenario and only for a few nodes. In contrast, the nonconventional synthetic models [7, 19, 20] can generate synthetic mobility traces for a large number of nodes while considering mobility constraints such as obstacles and pathways; the generated traces are used to evaluate network protocols. In this regard, an in-depth understanding of the interaction between pedestrians, and between pedestrians and other obstacles in a realistic domain, aids in simulating emergency scenarios for pedestrian safety [11–13].

Lee et al. [6] employed the concept of fractal waypoints, a Least Action Trip Plan (LATP), and a walker model to generate regular patterns of daily human mobility. Their model is based on daily routine activities such as going to the office or attending a lecture. However, the model does not capture an event's occurrence time or the repetitiveness observed in people's daily activities.

Munjal et al. [7] presented a mobility model that mimics real human mobility patterns by relaxing the assumption of random mobility with a notion of mobility influence; that is, node mobility is influenced by factors such as cluster size. The model studies seven statistical mobility features: flights, intercontact time, pause time, long flights due to popularity, closest mobile node visits, community interaction, and mobile node distribution. However, the simulation space in Munjal et al. [7] is a free space without restricting obstacles, which is not always realistic in a real environment such as a campus characterized by buildings of different shapes and sizes.

In an attempt to develop mobility models that capture people's agendas or activities, several models were developed [19, 20, 24, 28, 29]. Ekman et al. [19] presented a Working Day Model (WDM) that emulates workers' daily activities such as going to the office, going to evening activities, or returning home. The model uses map-based movement built on the concept of source and destination, and a timescale to switch between different submodels. Ekman et al. [19] showed the similarity between the distributions of their model's synthetic traces and those of the iMote traces from the Cambridge experiment.
However, they did not cover the impact of obstacles, such as floors, walls, and other constraints, which affect node mobility.

In [29], the characteristics of human mobility were described by constructing a multidimensional mobility space divided into individuality metrics, pairwise encounter metrics, and group metrics. The model generates node trajectories that exhibit more human mobility characteristics, but it was validated against a conventional model, the random waypoint mobility model.

Students' daily activities on campus were studied by Zhu et al. [20], with a focus on the contact time, intercontact time, and contacts-per-hour distributions. This work did not consider the impact of obstacles in restricting the free movement of mobile nodes or the possible signal obstruction by buildings of different shapes and sizes in a campus environment.

Social mobility models were presented in [30, 31]. Hrabcak et al. [24] presented a Students Social Based Mobility Model (SSBMM) inspired by the daily routine of student life. The model distinguishes between a student's free time and mandatory time, upon which social and school activities are simulated. The authors compared their model with the classical random walk model, even though the random walk model cannot capture the repetitiveness and the heterogeneity of time and space in human mobility.

Wang et al. [32] proposed an obstacle-based mobility model that generates a smooth Bezier-curve trajectory for escaping obstacles. In real scenarios, however, human trajectories for escaping obstacles such as buildings or road diversions are not always smooth curves. In addition, the model does not capture movement toward attraction factors such as points of interest, which represent human social behavior.

The Obstacle Mobility (OM) model developed by Jardosh et al. [33] models environmental obstructions that affect both movement and signal propagation. In this model, the node paths and points are constructed from a Voronoi diagram based on the obstacle positions in a campus-like simulation area. As an extension of [33], Papageorgiou et al. [34] proposed a model that allows nodes to move around an obstacle without being limited to a defined path. The model considers only rectangular obstacles, which limits its ability to capture the realistic features of an environment with obstacles of different shapes and sizes.

A random obstacle-based mobility model for DTNs was presented by Wu et al. [28]. In this model, a node moves from its initial location to the destination via the shortest path if there is no obstacle along the path; otherwise, the node recursively selects the location of the node closest to the obstacle and moves forward, repeating this operation until it reaches its destination. This model also considers only rectangular obstacles. Moreover, an unnecessary trip would be made in the absence of a node close to the obstacle, especially when the destination is just behind the obstacle.

Moussaïd et al.
[11] conducted an experimental study of the behavioral mechanisms underlying self-organization in human crowds in order to study individual pedestrian behavior. In that study, an individual pedestrian's movement behavior is characterized by the triplet of the internal acceleration $f_i^0$, the wall interaction $f_i^{\mathrm{wall}}$, and the individual interaction $f_{ij}$. To simplify the complexity of the assumptions, a study that uses simple rules to determine pedestrian behavior and crowd disasters was presented in [12]. It used simple heuristics to determine the movement direction and the possible choice of desired speed during static and moving obstacle encounters.

Some research works in the literature have studied mobility characteristics in real traces to develop synthetic mobility models that exhibit the observed features. Kim et al. [23] analyzed mobility characteristics, including pause time, speed, and direction of movement, and developed a software model that generates realistic user mobility tracks; however, the granularity of the studied traces depends on the wireless Local Area Network (WLAN) access point locations and hence may not be applicable to higher-mobility DTNs.

Real mobility traces from Dartmouth College [35] and the Disney World theme park in Orlando [36] were analyzed in [37] to obtain movement characteristics: the visiting probability of people, the distribution of movement speed, and the pause time. Their proposed model is configured with the derived distributions in the simulation.

Several works have also proposed techniques for mining mobility patterns or mobility behaviors from trajectory data. Ghosh et al. [38] proposed a mobility pattern mining framework to extract mobility association rules from taxi trips. The framework has three modules: an input module, a spatiotemporal analysis module, and a mobility association generation module. The input module processes the taxi GPS log, the road network, and points of interest and generates transactions using application-specific mobility rule templates. The spatiotemporal analysis module analyzes travel demand data, partitions regions based on travel demand, and generates mobility flows. Lastly, the mobility association generation module delineates how the associations can be used to understand urban dynamics.

Yue et al. [39] proposed a trajectory clustering technique for mobility-behavior analysis. They formulate mobility analysis as a clustering task and develop an unsupervised learning technique that sidesteps the lack of labeled trajectory data required by supervised learning.

Rahman et al. [40] presented a dynamic clustering technique based on processed COVID-19 infection data and mobility data, in which clusters can expand and shrink based on the merit of the data.
Symbol X indicates that the existing work studied the mobility feature while symbol x indicates the opposite.Table 1
Comparison of existing works and the EPOM mobility model in terms of the most widely studied mobility features.

| Features | [20] (2012) | [19] (2008) | [7] (2011) | [9] RWP | [37] (2020) | [34] (2009) | [34] (2017) | EPOM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Obstacle-aware | x | X | X | x | X | ✔ | ✔ | ✔ |
| Obstacle shape | x | X | x | x | X | Rectangular shape | Irregular shape | Irregular shape |
| Travel distance | X | ✔ | ✔ | X | X | X | X | ✔ |
| Direction of movement | x | X | x | x | x | X | X | ✔ |
| Pause time | x | ✔ | ✔ | x | ✔ | X | X | ✔ |
| Contact time and intercontact time | ✔ | ✔ | ✔ | x | x | X | x | ✔ |
| Evaluation method | Real traces | Real traces | Real traces | Real traces | Real traces | Obstacle simulation | Obstacle simulation | Real traces and obstacle simulation |
## 3. Characterization of Human Mobility
Several efforts have been devoted to investigating the properties of human mobility and uncovering hidden patterns [23, 41, 42]. Due to the dynamic nature of human mobility, there is no consensus on its characteristic features. The features that require thorough investigation include, but are not limited to, fundamental features such as travel distance and pause time. There is also a need to understand the features of a movement cluster within the community, such as the intracluster travel distance and direction of movement. We explicitly study some fundamental features for the whole domain: the connectivity features (i.e., contact and intercontact time), the spatial feature (i.e., travel distance), and the temporal feature (i.e., pause time). The direction of movement within clusters has, to the best of our knowledge, been assumed to be random [41] or reported only for the entire domain [23, 42].

In this study, we use daily GPS track logs collected from two university campuses (NCSU and KAIST) as the location-based traces [36]. Garmin GPS 60CSx handheld receivers, which are WAAS (Wide Area Augmentation System) capable, were used for data collection, with a position accuracy better than three meters 95 percent of the time in North America. The GPS receivers read their current positions every 10 seconds and record them in a daily track log. The data are available at [43]. We are interested in the stationary locations at which users stay.

For the contact-based trace, we use the Bluetooth encounters between mobile nodes from the Cambridge city students iMote experiment [44]. The data consist of 10641 contacts between iMote devices carried by students over a duration of about 11.43 days and are available in the repository [43]. We are interested in the duration for which two devices are in contact with each other (contact duration) and the time between two consecutive contacts of the same pair of devices (intercontact time).

We emphasize that the cluster concept in our study refers to a location at which a person spends much of his time exploring the neighboring locations. It should not be confused with the concept of a social community, which refers to people sharing a physical location, ideas, or common goals. A person can generate more than one cluster within his community, depending on his daily trip schedule. Figure 1 shows the trajectory of user 16 in the KAIST trace, creating four dynamic clusters in one day. In our clustering, we consider only clusters that have more than eight locations within a specified threshold.

Figure 1
The dynamic clusters of KAIST trace file 16. The blue points indicate the complete waypoints in a day, while the red points indicate the waypoint clusters. There are four clusters associated with the user.

Before clustering, we have to remove transit locations from our traces. This is reasonable because some of the coordinates in the GPS traces do not belong to stationary locations; they belong to transit locations at which a user stays briefly on the way to a destination. Algorithm 1 summarizes the procedure for removing transit locations: point $p_{i+1}$ is deleted if the distance between $p_i$ and $p_{i+1}$ is greater than a distance threshold, and point $p_i$ is removed from the original trace if the pause time at $p_i$ is less than a time threshold. After Algorithm 1 is executed, the original trace is left with only stationary locations.

Algorithm 1: Extracting dynamic clusters from the location-based traces.
Initially: distanceThreshold = Δd, waitingTimeThreshold = Δt, first point = $p_i$, second point = $p_{i+1}$.

(1) if distance($p_i$, $p_{i+1}$) ≥ Δd then
(2)   remove $p_{i+1}$
(3) if pauseTime($p_i$) ≤ Δt then
(4)   remove $p_i$

Next, we run an agglomerative clustering technique [45] with the single-linkage method, sometimes called the connectedness or minimum method, and create location clusters based on the similarity of the closest pair of locations. The more we know about movement cluster properties such as travel distance and direction of movement, the better we can predict pedestrian movement patterns, enabling accurate design against possible crowd congestion or emergency scenarios.
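To make the two-stage procedure concrete, the sketch below combines the transit-location filter of Algorithm 1 with single-linkage clustering. It is a minimal Python illustration assuming waypoints given as (x, y, pause time) tuples in meters and seconds; the helper names, thresholds, and deletion loop are our interpretation, not the original analysis code.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def remove_transit(points, dist_threshold, time_threshold):
    """Algorithm 1 sketch: drop transit locations from (x, y, pause) waypoints."""
    pts = list(points)
    i = 0
    while i < len(pts) - 1:
        (x1, y1, pause1), (x2, y2, _) = pts[i], pts[i + 1]
        if np.hypot(x2 - x1, y2 - y1) >= dist_threshold:
            del pts[i + 1]   # steps (1)-(2): p_{i+1} is a far transit jump
        elif pause1 <= time_threshold:
            del pts[i]       # steps (3)-(4): p_i was visited only briefly
        else:
            i += 1
    return pts

def cluster_locations(points, merge_dist):
    """Single-linkage agglomerative clustering of the stationary locations."""
    coords = np.array([(x, y) for x, y, _ in points])
    tree = linkage(coords, method="single")  # minimum/connectedness method
    return fcluster(tree, t=merge_dist, criterion="distance")
```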
### 3.1. Intracluster Direction of Movement
The direction of movement within movement clusters has not been studied well despite its impact on mobility patterns. Some related works report an aggregate distribution of movement directions for the whole domain rather than for movement clusters [41, 42]. Our study takes a different approach: it examines the direction of movement within dynamic clusters to understand the properties of the direction angle.

Figure 2 shows a weighted probability density function (PDF) of the movement direction within clusters from the NCSU trace, with a bin size of 1°. We weight each movement direction by its movement duration. We can see that the direction of movement is biased, symmetrically, toward some preferred locations. This implies that movement within a dynamic cluster is not random; it favors the directions of popular locations. The symmetry of the distribution was expected due to the likely return of nodes to their main locations after exploring nearby points of interest. We can also deduce that students visit common locations for their activities, which results in similar aggregated angle distributions with bias symmetry in the ranges 90°–150° and 240°–330°. Nodes move in other directions as well, but with smaller frequencies than toward the points of interest; this implies that geographical restrictions, such as constrained movement on roads, are not the driving factor behind the biased symmetry of the movement angle distribution. In contrast, the aggregate weighted PDF for the whole domain is shown in Figure 3: although it has a symmetric shape, the direction of movement is almost uniformly distributed within the domain.

Figure 2
The bias symmetry distribution of the direction angle for the dynamic clusters (NCSU traces). The x-axis represents the angle (in degrees), and the y-axis the density of movement toward a given direction. The bin size is 1°. Each direction is weighted by the duration of its movement.

Figure 3
The uniform distribution of the direction angle for the whole domain (NCSU traces). The x-axis represents the angle (in degrees), and the y-axis the density of movement toward a given direction. The bin size is 1°. Each direction is weighted by the duration of its movement.
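The duration-weighted direction histograms behind Figures 2 and 3 can be computed in a few lines. The sketch below assumes hypothetical arrays of waypoint coordinates with timestamps (not the original analysis code) and uses 1° bins as in the figures.

```python
import numpy as np

def weighted_direction_pdf(xs, ys, ts, bin_deg=1):
    """Duration-weighted PDF of movement direction angles.

    xs, ys: waypoint coordinates (meters); ts: timestamps (seconds).
    Each leg's direction angle is weighted by the leg's travel duration.
    """
    dx, dy = np.diff(xs), np.diff(ys)
    angles = np.degrees(np.arctan2(dy, dx)) % 360.0  # direction in [0, 360)
    durations = np.diff(ts)                          # weight = leg duration
    bins = np.arange(0, 360 + bin_deg, bin_deg)
    pdf, _ = np.histogram(angles, bins=bins, weights=durations, density=True)
    return bins[:-1], pdf
```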
### 3.2. Intracluster Travel
We study the travel distances between consecutive locations within a cluster at which a node spends a long time exploring neighboring locations. We fit four parametric models to the empirical KAIST intracluster travel distances, as shown in Figure 4. The distribution that best fits the data is the lognormal distribution, with parameters 2.29989493 and 0.8685148 for the log mean and log standard deviation, respectively, as shown in Figure 5 and by the KS test in Table 2. This shows that students take repeated short walks around some popular locations such as classes, libraries, and dormitories.

Figure 4
Four different distributions fitted to the KAIST intracluster distance trace; all of them fit the empirical data, but the power-law and lognormal fits match best.

Figure 5
Lognormal fit to the KAIST intracluster travel distances.

Table 2
KAIST intracluster distance goodness-of-fit (gof) table.

| Dist | gof | Ntails | Crit. val | Remark |
| --- | --- | --- | --- | --- |
| Lognorm | 0.01564494 | 815 | 0.04764 | Accept |
| Power law | 0.03181678 | 1900 | 0.03120 | Reject |
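The lognormal fit and the accompanying KS check can be reproduced with SciPy. A minimal sketch follows, with `dists` standing in for the intracluster travel distances (here a hypothetical synthetic sample rather than the KAIST data); `scipy.stats.lognorm` uses the shape parameter `s` for the log standard deviation and `scale = exp(log mean)`.

```python
import numpy as np
from scipy import stats

# Hypothetical stand-in for the intracluster travel distances (meters).
dists = np.random.lognormal(mean=2.3, sigma=0.87, size=800)

# Fit a lognormal with the location pinned to 0, as is usual for distances.
s, loc, scale = stats.lognorm.fit(dists, floc=0)
log_mean, log_std = np.log(scale), s
print(f"log mean = {log_mean:.4f}, log std = {log_std:.4f}")

# One-sample KS test against the fitted lognormal; a statistic below the
# critical value (equivalently, a large p-value) means the fit is accepted.
stat, p = stats.kstest(dists, "lognorm", args=(s, loc, scale))
print(f"KS statistic = {stat:.5f}, p-value = {p:.4f}")
```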
### 3.3. Pause Time Distribution
Pause time distribution is one of the temporal features of human mobility and plays a vital role in its diffusive nature. It dictates the amount of time a node spends at a location with zero or near-zero velocity. Figure 6 shows four different parametric models fitted to the empirical data of the KAIST trace. After the KS goodness-of-fit test, we found that the power-law distribution is plausible; hence there is not enough evidence to support its rejection, as shown in Table 3. Figure 7 shows that the power law has a threshold value $x_{min}$ of four minutes (240 s) and a cut-off at a pause time of 16 hours. The power-law pause time distribution indicates a scale-free characteristic.

Figure 6
Four different distributions fitted to the KAIST trace pause times.

Table 3

KAIST pause time goodness-of-fit (gof) table.

| Dist | gof | Ntails | Crit. value | Remark |
| --- | --- | --- | --- | --- |
| Power law | 0.02367702 | 850 | 0.04665 | Accept |
| Exponential | 0.1315407 | 386 | 0.06922 | Reject |

Figure 7
A pause time distribution for the KAIST trace. The distribution exhibits power-law decay with exponential cut-off.
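The power-law exponent above the threshold $x_{min}$ can be estimated with the standard maximum-likelihood estimator $\hat{\alpha} = 1 + n / \sum_{i} \ln(x_i / x_{min})$. A minimal sketch, taking $x_{min}$ = 240 s from Figure 7 and using a hypothetical heavy-tailed sample in place of the real pause times:

```python
import numpy as np

def power_law_alpha(samples, x_min):
    """MLE of the continuous power-law exponent for samples >= x_min."""
    tail = np.asarray([x for x in samples if x >= x_min], dtype=float)
    n = len(tail)
    return 1.0 + n / np.sum(np.log(tail / x_min)), n

# Hypothetical pause times in seconds (Pareto draws scaled to x_min = 240 s).
pauses = (np.random.pareto(1.8, size=2000) + 1.0) * 240.0
alpha, n_tail = power_law_alpha(pauses, x_min=240.0)
print(f"alpha = {alpha:.3f} estimated from {n_tail} tail samples")
```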
### 3.4. Intercontact Time
In this section, we characterize the empirical data from the iMote experiments at Cambridge [44]. The data include traces of Bluetooth sightings by groups of users carrying small devices (iMotes) for five days. Our goal is to extract the distribution of the intercontact time from the dataset for further analysis. Figure 8 shows the aggregate CCDF of the intercontact durations in the empirical data. The distribution follows a power law with exponent 1.4, but the power-law decay is outweighed by an exponential decay toward the end of the distribution. Such a distribution is called a truncated power law, similar to the results presented in [6]. The power-law feature of the intercontact time distribution is interesting because it dictates the scale-free properties of an opportunistic network.

Figure 8
Power-law distributions with different values of λ fitted to the Cambridge iMote trace intercontact time distribution.
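An empirical CCDF on log-log axes, as in Figure 8, is straightforward to compute from intercontact samples. A minimal sketch, with `icts` standing in for the intercontact times (here a hypothetical heavy-tailed sample rather than the iMote data):

```python
import matplotlib.pyplot as plt
import numpy as np

def empirical_ccdf(samples):
    """Return (x, P[X >= x]) for an empirical complementary CDF."""
    x = np.sort(np.asarray(samples, dtype=float))
    ccdf = 1.0 - np.arange(len(x)) / len(x)  # from 1 down to 1/n, never zero
    return x, ccdf

# Hypothetical intercontact times in seconds.
icts = (np.random.pareto(1.4, size=5000) + 1.0) * 60.0
x, ccdf = empirical_ccdf(icts)

plt.loglog(x, ccdf)  # on log-log axes, power-law decay appears as a line
plt.xlabel("intercontact time (s)")
plt.ylabel("P[X >= x]")
plt.show()
```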
## 4. EPOM Model
EPOM was developed by integrating several submodels, or communities, into one functional model. Each submodel captures a specific realistic activity of a campus environment as observed in the empirical data of the studied traces [36, 44]. We categorize the activities into home, study, eating, sports, and off-campus activities. In addition to the submodels, a switching model changes the status of nodes between the submodels; this is referred to as intercluster movement. It helps capture the realistic nature of students' campus lives, which are repetitive and heterogeneous in time and space. When a node changes its status (e.g., from home to study), it uses its preferred method of transport to move to the new destination. We adopt two methods of transport in the model: walking and bus.

The EPOM model also captures personnel (e.g., faculty/departmental staff) movement, such as walks to the cafeteria for eating, and off-campus activities such as shopping.

One of the features distinguishing EPOM from the previous works in [7, 8, 19, 20] is the consideration of static and moving obstacles along the movement path from the source to the destination. EPOM ensures collision-free movement along the trajectories. We model this by strategically placing objects of different shapes and sizes on the movement trajectories and developing an algorithm that generates detour paths. This is reasonable because a path is needed to escape, for example, a nonmoving pedestrian standing on the movement path. A pictorial description of the different submodels is given in Figure 9.

Figure 9
EPOM submodels: the large rectangles represent the submodels (i.e., home, study, cafeteria, sport, and off-campus). The lines connecting them represent the transport submodel. The red shapes are obstacles that affect both mobility and signal propagation.
### 4.1. Home Submodel
Home is the starting point of the simulation. Initially, a predefined location is assigned to each node in the home location file. These locations are used for sleeping or for a node's free time. A node's daily routine starts in the morning when it wakes up from the sleeping state. Each node is assigned a wake-up time, which determines when the node wakes from sleeping. The wake-up time obeys a normal distribution with a mean of seven o'clock and a configurable standard deviation.

After waking up, a node checks its lecture schedule and decides whether to go to a lecture or to do some in-home activities such as cooking, watching the morning news, doing laundry, or visiting a friend in a nearby dormitory. These short walks account for the possible evolution of the first dynamic cluster. Some nodes leave their home without doing any internal activities. Depending on the time of day and its lecture schedule, a node can switch from home to other submodels. For example, a node may switch to the sport submodel in the evening to play games, to the eating submodel for dinner, or to the off-campus submodel for shopping or visiting a friend at another location. This flexibility of EPOM captures social influence and heterogeneity in time and space.
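Sampling the wake-up times described above is a one-liner in practice. The sketch below assumes a standard deviation of 30 minutes purely for illustration (the paper leaves it configurable) and clamps the samples to the simulated day:

```python
import numpy as np

MEAN_WAKEUP = 7 * 3600  # 07:00, in seconds since midnight
STD_WAKEUP = 30 * 60    # assumed standard deviation: 30 minutes (configurable)

def sample_wakeup_times(num_nodes, rng=None):
    """Draw one normally distributed wake-up time per node, in seconds."""
    rng = rng or np.random.default_rng()
    times = rng.normal(MEAN_WAKEUP, STD_WAKEUP, size=num_nodes)
    return np.clip(times, 0, 24 * 3600 - 1)  # keep within the simulated day
```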
### 4.2. Study Submodel
We assign specific locations on the map as lecture rooms. If a node is in a lecture room, it walks within the room and pauses for the lecture duration. The pause time distribution is location-dependent in our model: the pause time for a lecture is different from the pause time at the cafeteria, and the pause time at nonspecific locations is drawn from the truncated power-law distribution observed in the empirical data. We disable movement completely during the lecture period for 80 percent of the nodes; only the remaining 20 percent make some movements within the lecture room, capturing the realistic behavior of students changing desks or forming discussion groups. At the end of the lecture, a node may decide to walk to the laboratory or the library. This internal movement is modeled as an intracluster walk within the vicinity of the study area, with the libraries, laboratories, and other study-related locations as waypoints.
### 4.3. Eating Submodel
Some strategic locations on the map are defined as cafeterias. When it is time for lunch or dinner, a node may switch to the eating submodel and move to a cafeteria to eat. The eating times are uniformly distributed between 11:00 a.m. and 2:00 p.m. for lunch and between 6:00 p.m. and 8:00 p.m. for dinner. While in the cafeteria, a node waits, makes some intracluster walks, gets served, eats, and then switches to another submodel. During eating activities in a large cafeteria, we observed a large crowd of students within a confined location, hence the need for collision avoidance to allow a smooth flow of students.
### 4.4. Sport Submodel
We define some points on the map as playgrounds; the time for sport is also defined. A node in the sport submodel spends some time at the playground watching or making some random intracluster movements in the vicinity of the playground.
### 4.5. Off-Campus Submodel
The off-campus submodel models all activities not included in the home, study, eating, and sport submodels, such as shopping, evening walks, or visiting friends. We define some points of interest (PoIs) at the map edges as meeting points. We have two types of PoIs: PoIs with location preferences and Bus Normal PoIs with uniform preferences. Mobile nodes visit such locations in groups, to capture the group mobility characteristics and social influence of human mobility, and individually, to capture independent mobility freedom. The minimum and maximum sizes of a group are defined in the default settings file.
### 4.6. Transport Submodel
This submodel is used to move between the different submodels when a node switches mode. We define two means of transport in our model: walking and bus riding. Most nodes walk, while the bus is mostly used for off-campus activities. The probability of moving by bus is configured in the settings. The heterogeneity in the transport submodel has a great impact on the performance of DTN routing protocols; high-speed nodes can deliver messages to long-distance destinations quickly.

Bus service is accessible at predefined bus stops. A node first walks to the nearest bus stop and waits for the bus; when the bus arrives, the node boards and alights at the bus stop closest to its destination, then switches to walking to complete the journey to the final destination.

The nodes in our model move on a map; this is another aspect of realism. The map contains the homes, classes, cafeterias, playgrounds, shops, PoIs, and bus stops. The map data are essential for restricting the movement of the nodes to specific areas, which helps increase node localization, and are used to distribute nodes uniformly in the simulation area.

The EPOM model generates mobility patterns through intercluster and intracluster movements. At each time instant, a node is either in intercluster or intracluster movement mode, controlled by the two-state Markov model in Figure 10. When a node is in the intracluster movement mode, it explores the points of interest within its community and walks to the preferred PoIs or generates a travel distance drawn from a lognormal distribution bounded by the community size.

Figure 10
Two-state Markov model for switching between the intercluster and intracluster movements.

The direction of movement is chosen from a biased symmetric direction distribution over the range [0, 2π); see Figure 2. The lognormal distribution of the intracluster travel distance means that nodes visit closer locations more frequently than distant ones.
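The two-state switching process of Figure 10 can be sketched as a tiny Markov chain. The self-transition probabilities below are illustrative placeholders, since the paper treats them as tunable parameters rather than fixed constants:

```python
import random

# Illustrative self-transition probabilities; the real values are tunable
# model parameters, not constants given in the paper.
P_STAY = {"intra": 0.8, "inter": 0.6}

def next_mode(mode):
    """One step of the two-state Markov chain over movement modes."""
    if random.random() < P_STAY[mode]:
        return mode                                 # stay in the current mode
    return "inter" if mode == "intra" else "intra"  # otherwise switch

# Example: a node starting in intracluster mode over five decision epochs.
mode = "intra"
for _ in range(5):
    mode = next_mode(mode)
```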
### 4.7. Obstacle Submodel
The obstacle submodel describes how the EPOM model handles collision avoidance between nodes and other obstructing objects along their movement trajectories. For static obstacles with zero speed, such as pedestrians standing on the road or in the middle of a corridor, or any other stationary objects, we define the locations of the obstacles on the map using the OpenJUMP (http://openjump.org/) geographic information system, as in Figure 11.

Figure 11
Figure 11: An example of the visualization of a mobility scenario on the ONE simulator. The red irregular polygons mimic random obstacles. The blue numbered icons stand for nodes. The gray line stands for a normal trajectory without obstacles, and the green line represents a new trajectory created by node E96 using the escape path mobility model.

The transport submodel moves the node from its current location (e.g., home) to its destination (e.g., class). The Dijkstra shortest path algorithm calculates the shortest path from the current location to the destination. We have two scenarios here: in the first, there is no obstacle on the path, while in the second, an obstacle is encountered along the shortest path. In the first scenario, a node follows the shortest path to its destination without obstruction; in the second, a node applies the logic in Algorithm 2 to generate an escape path using the following transitions:
(1) Move along the shortest path trajectory until an obstacle is reached, keeping a minimal distance to the obstacle.
(2) Generate an escape path using Algorithm 2.
(3) Complete the movement to the next obstacle (in case of more than one obstacle) or to the destination.
(4) Repeat steps 2 and 3 until the final destination is reached.

Algorithm 2: Escape path movement for node i.
Initially: escapeVertex = ϕ, neighbors = ϕ, distToDest = ϕ
(1) get the obstacle's vertices
(2) escapeVertex := nearest vertex
(3) repeat
(4)     move to the escapeVertex
(5)     neighbors := neighbor vertices
(6)     escapeVertex := nearest neighbor
(7) until distToDest(escapeVertex) ≤ distToDest(all neighbors)
(8) move to the destination

Algorithm 2 avoids collision with an obstacle by generating an escape path, as shown in Figure 12. Line 1 gets the coordinates of the obstacle's vertices V = (A, C, D, F, E, B); note that the shape of an obstacle determines the number of vertices. In line 2, the node finds the nearest vertex A. It moves to vertex A in line 4. It finds the neighbors of vertex A (i.e., B and C) in line 5 and sets the next escape vertex to the nearest neighbor B in line 6. It then checks the condition in line 7: if the distance from its current location A to the destination is less than the distance from its neighbors C and B to the destination, it moves directly to the destination (line 8); otherwise, it returns to line 3.
Figure 12: Escape path generated with Algorithm 2.

A human walking beside an obstructing body keeps to its edges until passing the section of the obstacle that blocks the way; the algorithm behaves similarly by creating a path beside (not on) the edges of the obstacle. Some existing works have proposed a Bezier curve [32] or branching to the closest neighbor node [28], which is not always realistic: a human path around an obstacle is not always curved, and an isolated obstacle may not have a close neighbor.
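The following Python sketch illustrates the escape path logic of Algorithm 2 on a polygonal obstacle. It is an illustrative reconstruction, not the simulator code; the polygon representation as an ordered vertex list and the `dist` helper are assumptions.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def escape_path(position, destination, obstacle):
    """Walk along the vertices of a polygonal obstacle until the
    current vertex is closer to the destination than both of its
    neighbors, then head straight for the destination.

    obstacle: list of (x, y) vertices in boundary order.
    Returns the list of waypoints forming the escape path."""
    n = len(obstacle)
    # Lines 1-2: start from the vertex nearest the current position.
    i = min(range(n), key=lambda k: dist(position, obstacle[k]))
    path = [obstacle[i]]
    while True:
        # Line 5: neighbors are the adjacent vertices on the boundary.
        left, right = obstacle[(i - 1) % n], obstacle[(i + 1) % n]
        # Line 7: stop when the current vertex beats both neighbors.
        if dist(obstacle[i], destination) <= min(dist(left, destination),
                                                 dist(right, destination)):
            break
        # Line 6: otherwise step to the nearer neighbor and repeat.
        i = (i - 1) % n if dist(left, destination) < dist(right, destination) \
            else (i + 1) % n
        path.append(obstacle[i])
    path.append(destination)  # Line 8: move to the destination.
    return path

# Example: detour around a rectangular obstacle blocking a straight path.
print(escape_path((0, 5), (20, 5), [(5, 0), (15, 0), (15, 10), (5, 10)]))
```

Each step strictly decreases the distance to the destination, so the walk along the boundary terminates, mimicking a pedestrian skirting the obstacle's edge.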
## 5. Model Implementation
The Escape Path Obstacle-based Movement model was implemented on the Opportunistic Network Environment (ONE) simulator [46, 47] as a collection of different submodels. ONE supports different movement models such as the Random Waypoint Movement (RWP), Map-Based Movement (MBM), Shortest Path Map-Based Movement (SPMBM), and Route-Based Movement (RBM) models. MBM is a map-constrained variant of random movement in which nodes move along the map paths defined in Well-Known Text (WKT) files. We used the OpenJUMP Geographic Information System (GIS) program to define the locations of obstacles, homes, classes, cafeterias, playgrounds, shops, and points of interest for off-campus activities.

We created a main movement model that inherits the extended movement model of ONE and controls the movement of nodes going to school, to the cafeteria, to sport, shopping or similar activities outside the campus, and finally returning home to sleep. The main model orders and switches between submodels, passes control to the submodels responsible for the different activities, facilitates the movement to the destination by giving destination information to the transport submodels, and decides on the probability of walking or taking the bus based on the settings configuration.

In a real scenario, obstacles such as floors, walls, buildings, or mountains exist and impact mobility and signal attenuation. To reflect this impact, we modified the method "isWithinRange(DTNHost anotherHost)" of the NetworkLayer class of the ONE simulator to include the signal's attenuation in the propagation model. When a node's signal propagates through an obstacle, it suffers attenuation due to the effects of diffraction, reflection, and scattering. Some attenuation results [33] are presented in Table 4. The attenuated values are randomly taken from a uniform distribution between 40 and 60 dB, which reflects the fact that obstacles have at least double walls. A connection is created when the radio signal is greater than a fixed threshold (the transmitting range).
Table 4: Power attenuation values.
|             | Home     | Office   |
| ----------- | -------- | -------- |
| Single wall | 6–20 dB  | 6–20 dB  |
| Double wall | 40–50 dB | 50–60 dB |
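A minimal sketch of the modified range check follows. This is illustrative only: the real change lives in the Java method "isWithinRange(DTNHost anotherHost)" of ONE's NetworkLayer, whereas the function and parameter names below are our own Python stand-ins.

```python
import random

def is_within_range(signal_db, threshold_db, crosses_obstacle):
    """Decide whether two hosts can connect. If the line of sight
    crosses an obstacle, subtract a wall attenuation drawn uniformly
    from 40-60 dB (the double-wall values in Table 4); a connection
    is created only if the attenuated signal still exceeds the
    fixed threshold."""
    if crosses_obstacle:
        signal_db -= random.uniform(40.0, 60.0)
    return signal_db > threshold_db

# Example: a 55 dB link margin survives open air but usually
# fails once a double-walled building blocks the path.
print(is_within_range(55.0, 10.0, crosses_obstacle=False))
print(is_within_range(55.0, 10.0, crosses_obstacle=True))
```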
## 6. Validation
Our goal is to show that our conceptual model (EPOM) is generic enough to be fine-tuned with a few parameters to match the characteristics of the NCSU GPS traces [36] in terms of the spatial features (intracluster travel distance and intracluster direction of movement) as well as the temporal feature (pause time). We also show that EPOM's connectivity features match those of the iMote real traces [44] in terms of the contact duration and intercontact time distributions.

After wake-up, a node starts to walk using the current mobility model; a node switches from the current location to different locations using the five-step Markov model, depending on the time of day. See Table 5 for the list of simulation parameters.
Table 5: Summary of the simulation parameters.
| Parameter | Value |
| --- | --- |
| Number of nodes | 1000 |
| Simulation length | 500,000 sec |
| Transmit range | 10 m |
| Obstacle path transmit range | [5, 10] m |
| World size | 5000 × 3000 m² |
| Walking speed | [1, 3] m/s |
| Bus speed | [7, 10] m/s |
| Transmit speed | 250 kbps |
| Routing protocol | Epidemic |
| Interface type | Simple broadcast interface |
| Buffer size | 50 MB |
| Message size | [500 kB, 1 MB] |
| Message interval | [25, 35] sec |
| Message TTL | 1,430 sec |

We simulate the random waypoint model on a simulation area of the same size with 1000 uniformly distributed nodes. Each node randomly chooses a waypoint and moves with a speed of 0.5–5 m/s; when a node reaches the destination, it pauses for 1–3600 s. Both the speed and the pause time are uniformly distributed. The simulation was run for a length of T = 5 × 10⁵ s, which is approximately five days. We assume all events are uniformly distributed over a longer period of time and consider the probability p(x) of an event of length x. We record only events that begin and end within the observed interval. We create the Complementary Cumulative Distribution Function (CCDF, P[X > x]) for the distributions of contact duration, intercontact time, intercluster travel distance, intracluster travel distance, intracluster movement direction, and pause time.

Settings: our simulation environment is a map of parts of the Université Paris-Saclay campus, edited using the OpenJUMP geographic information system program, with 1000 nodes moving on an area of roughly 5000 × 3000 m². We created different WKT files for the map roads, homes, lecture rooms, cafeterias, sport and off-campus activity locations, PoIs, and obstacles. Each node is assigned a unique home located on the map as its starting point in the simulation and a wake-up time drawn from a normal distribution.
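The empirical CCDF used throughout the validation plots can be computed directly from the recorded event lengths; a minimal sketch (our illustration, not the simulator's reporting code) is:

```python
import numpy as np

def empirical_ccdf(samples):
    """Return (x, P[X > x]) for a list of event lengths, e.g.,
    contact durations or intercontact times in seconds."""
    x = np.sort(np.asarray(samples, dtype=float))
    # Fraction of samples strictly greater than each sorted value.
    p = 1.0 - np.arange(1, len(x) + 1) / len(x)
    return x, p

durations = [12, 30, 30, 45, 120, 600, 3600]  # toy data
x, p = empirical_ccdf(durations)
# Plotted on log-log axes, an approximately straight CCDF indicates
# power-law decay; an exponential tail bends sharply downward.
```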
### 6.1. Spatiotemporal Features
We start with the intracluster features, one of the most important aspects of our study. We divide the main simulation domain into a number of equal-size communities to account for the dynamic clusters, denoted by c ∈ {1, …, N_c}, where N_c is the total number of communities in the domain. During our analysis, we found that each walker is associated with an average of three dynamic clusters per day, as shown in Figure 13, depending on the degree of repetitiveness of the user's schedule. We can exploit this type of temporal mobility feature to predict a possible user location; similarly, it can be used by an opportunistic routing protocol to schedule packet forwarding.
Figure 13: Number of dynamic clusters per trace file in the KAIST traces.

After tuning, our model generates walking clusters matching the KAIST data. Figure 14 shows the one-day dynamic clusters of Node 4 generated by the EPOM model. The generated clusters match those of the KAIST trace in Figure 1 for trace file sixteen [36].
Figure 14: The synthetic clusters generated by node four in the EPOM model. The red numbers represent mobile nodes, the green points indicate waypoints, and the green lines represent the node trajectories. The waypoints can be seen as dynamic clusters.

Next, we focus on the intracluster travel distance to capture the neighborhood exploration observed in the real traces. In our model, at each time instant, a node is either in intercluster or intracluster movement mode, managed by the two-state Markov model in Figure 15. When a node is in the intracluster movement mode, it explores the points of interest within its community and walks to the preferred PoIs or generates a travel distance chosen at random from a lognormal distribution bounded by the community size. The direction of movement is chosen at random from the biased symmetry distribution of the empirical data shown in Figure 2. The lognormal distribution of the intracluster travel distance means nodes visit closer locations more frequently than distant locations.
Figure 15: Two-state Markov model for intercluster and intracluster movement.

Figure 16 shows the distribution of the direction angle generated from the synthetic traces of the EPOM model. The distribution is similar to that of the NCSU trace in Figure 2. The main take-home message from the two distributions is that movement within dynamic clusters is not random but biased toward some PoIs and popular locations within the community.
Figure 16: The biased symmetry distribution of the direction angle for the EPOM clusters. The x-axis represents the direction angle (in degrees) and the y-axis the density of movement toward a given direction. The bin size is 1°. Each direction is weighted by the duration of its movement.

Figure 17 shows the intracluster travel distance generated from our model compared to that of the empirical data. The two distributions are similar over a long range but differ slightly at the tail.
Figure 17: Intracluster travel distance for the EPOM and KAIST traces. Both curves follow a lognormal distribution, meaning people visit some preferred nearby locations more than distant locations.

The slight difference at the tail is a consequence of the community size in the simulation domain. Therefore, EPOM replicates the intracluster travel distance observed in the KAIST empirical data.

Next, we turn to the general domain by analyzing the intercluster travel distance distribution for the whole domain generated by the EPOM model and comparing it with the distribution observed in the empirical data. This is the approach adopted by most of the existing works [48, 49]. Figure 18 shows the intercluster travel distance distribution for the KAIST, EPOM, and RWP traces. The distributions of the EPOM and KAIST traces fit a truncated power-law distribution, showing that users tend to undertake many short walks in a cluster and occasionally take long-distance walks. We also note that such short-distance walks, which evolve over time, are the consequence of intracluster movements. In contrast, the curve for the conventional RWP model fits a uniform distribution, which does not differentiate between short and long walks; this does not resemble the realistic nature of human mobility patterns.
Figure 18: The intercluster travel distance for the EPOM, KAIST, and RWP models. The curves for the EPOM model and the KAIST traces exhibit power-law decay over a long range, supporting the realistic tendency of humans to take short walks more often than long journeys. The RWP curve is uniformly distributed and does not differentiate between short walks and long journeys.

Analyzing the temporal characteristics of a user's locations over a certain period gives us insight into the possibility of predicting a user's location: how long a user stays at a given location (pause time), when the user is expected to return to a given location (return time), and why a user exhibits skewed visiting behavior toward some locations (dynamic community walks).

We study the pause time distribution of the KAIST campus traces [36] and tune the EPOM model to generate a pause time distribution similar to the empirical one. Figure 19 shows the pause time distributions of the KAIST and EPOM traces. The distribution is found to be a power law with a heavy tail. This shows that students spend a long time at some locations, such as lecture rooms, but stay for a short time at most locations, such as shopping malls and cafeterias. This distribution is consistent with the pause time distribution observed in the Dartmouth campus real traces [23].
Figure 19: The pause time distribution of the KAIST and EPOM traces. The figure indicates that humans stay briefly at most places they visit and stay long at only a few locations.

The fact that users pause for a long time at some preferred locations also indicates that users predominantly take short walks within the communities of such locations. We observed that users are associated with an average of three dynamic clusters per working day, which evolve over time, as shown in Figure 13. This holds for all users except stationary ones.
### 6.2. Connectivity Features
In this section, we investigate how closely the EPOM model reproduces the distributions of the studied connectivity metrics observed in the empirical data of the realistic traces in [44]. We compare the distributions generated by EPOM, the iMote traces, and Random Waypoint on each plot.

Figure 20 shows the aggregate distribution of contact duration for EPOM, the iMote traces, and RWP. Each plot shows the complementary cumulative distribution function of contact duration on a log-log scale. We see that the EPOM distribution follows power-law decay over a long range, similar to the distribution of the iMote traces. This is consistent with the findings of most research on human mobility contact distributions [50]. The RWP distribution consists only of short contacts with exponential decay. The power-law feature of human mobility indicates that many nodes have contact opportunities for a short time while only a few nodes stay connected for a long time. A DTN routing algorithm can be designed to exploit this feature, in conjunction with the spatiotemporal features, to decide the best way to route a message from the source to the destination(s).
Figure 20: The contact time distribution for the EPOM, iMote, and RWP models. The EPOM model follows a power-law distribution over a long range, just like the iMote traces, whereas RWP follows an exponential distribution with very short contacts.

Figure 21 shows the intercontact time (ICT) distribution for EPOM, the iMote traces, and RWP. Both the EPOM and iMote trace curves exhibit power-law decay with an exponential cut-off, unlike RWP, which entirely follows an exponential distribution. The ICT distribution of EPOM is also consistent with the realistic ICT features discovered in [51]. The power-law nature of ICTs plays an important role in DTNs as it fundamentally impacts the behavior of networking protocols [51]. Although a shorter intercontact time means more frequent connections, nodes with longer intercontact times can be assumed to have new data to share.
Figure 21: The intercontact time distribution for the EPOM, iMote, and RWP models. Both the EPOM and iMote trace curves exhibit power-law decay with an exponential cut-off, unlike RWP, which entirely follows an exponential distribution.

Figure 22 presents the contacts for each simulation hour and shows the repetitiveness of hourly activities; we use 43,200 seconds as the working day length. The contacts per hour for RWP are uniform throughout the simulation. Observe the repetitive behavior of EPOM, which captures students' daily routine activities at specific hours of the day.
Figure 22: The distribution of contacts for each simulation hour. The EPOM model shows repetitive hourly activities, unlike RWP, which shows a uniform distribution of activity in each hour.
## 7. Conclusion and Future Work
In this paper, we conducted an in-depth study of human mobility patterns using realistic datasets of Bluetooth encounters and Global Positioning System (GPS) track-log traces at a fine-grained level to better understand human mobility properties and uncover hidden patterns. We discovered time-varying human mobility patterns associated with a dynamic evolution of movement clusters. We proposed a new mobility model that mimics the realistic mobility patterns of real-world traces in the presence of obstacles of different shapes and sizes. The model describes various student activities that routinely evolve over time, such as going to lectures, going to the cafeteria, sport, and shopping. We have shown that the model reproduces the distributions of the intercluster travel distance, intracluster travel distance, intracluster direction of movement, contact duration, intercontact time, and pause time observed in realistic traces.

For future work, we intend to extend the model to urban scenarios through an extensive study of the dynamic cluster evolution of pedestrians and bus commuters. We also intend to design an efficient prediction framework for human mobility that exploits the existing and newly uncovered features to predict a user's next displacement, stay duration, and possible contacts.
---
*Source: 1018904-2021-12-30.xml* | 2021 |
# Multiplex Degenerate Primer Design for Targeted Whole Genome Amplification of Many Viral Genomes
**Authors:** Shea N. Gardner; Crystal J. Jaing; Maher M. Elsheikh; José Peña; David A. Hysom; Monica K. Borucki
**Journal:** Advances in Bioinformatics
(2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/101894
---
## Abstract
Background. Targeted enrichment improves coverage of highly mutable viruses at low concentration in complex samples. Degenerate primers that anneal to conserved regions can facilitate amplification of divergent, low concentration variants, even when the strain present is unknown. Results. A tool for designing multiplex sets of degenerate sequencing primers to tile overlapping amplicons across multiple whole genomes is described. The new script, run_tiled_primers, is part of the PriMux software. Primers were designed for each segment of South American hemorrhagic fever viruses, tick-borne encephalitis, Henipaviruses, Arenaviruses, Filoviruses, Crimean-Congo hemorrhagic fever virus, Rift Valley fever virus, and Japanese encephalitis virus. Each group is highly diverse with as little as 5% genome consensus. Primer sets were computationally checked for nontarget cross reactions against the NCBI nucleotide sequence database. Primers for murine hepatitis virus were demonstrated in the lab to specifically amplify selected genes from a laboratory cultured strain that had undergone extensive passage in vitro and in vivo. Conclusions. This software should help researchers design multiplex sets of primers for targeted whole genome enrichment prior to sequencing to obtain better coverage of low titer, divergent viruses. Applications include viral discovery from a complex background and improved sensitivity and coverage of rapidly evolving strains or variants in a gene family.
---
## Body
## 1. Background
Sequencing whole genomes of potentially heterogeneous or divergent viruses can be challenging from a small or complex sample with low viral concentrations. Deep sequencing to detect rare viral variants or metagenomic sequencing to genotype viruses from a complex background requires targeted viral amplification. Techniques such as consensus PCR, Ion Ampliseq (Life Technologies) [1], TruSeq Amplicon (Illumina), and Haloplex (Agilent) [2] apply highly multiplexed PCR for target enrichment. Targeted enrichment should preferentially amplify the target virus over host or environmental DNA/RNA, in contrast to random amplification commonly used prior to whole genome sequencing. Primers designed to tile amplicons across a set of related viral genomes prior to sequencing can enrich whole viral genomes or large regions. However, high levels of intraspecific sequence variation combined with low virus concentrations mean that standard PCR primer design from a reference may fail due to mutations in the sample virus that prevent primer binding. To address this problem, we added a capability to the PriMux software distribution (http://sourceforge.net/projects/primux/) called run_tiled_primers that applies the PriMux software [3] to automate PCR primer design to achieve a near-minimal set of conserved, degenerate, multiplex-compatible primers designed to tile overlapping regions across multiple related whole genomes or regions.

JCVI has an automated degenerate PCR primer design system called JCVI Primer Designer, which is similar to run_tiled_primers in that it designs degenerate primers to tile across viral genomes [4]. The major difference is that it begins with a consensus sequence containing degenerate bases and selects primers with fewer than 3 or 4 degenerate bases, so that in the end a majority of strains are amplified, but it does not require primers to amplify all strains. In their examples, most of the primer pairs could amplify >75% of isolates. Each primer pair for a given region is intended to be run as a specific pair, not as a multiplex with multiple pairs. Consensus sequences with too little conservation, that is, <90% consensus, are divided manually in a preprocessing step into subgroups which can be run separately through the pipeline. The method here differs in that it takes the full multiple sequence alignment as input rather than a consensus, and it seeks to automatically design a minimal, degenerate set of multiplex-compatible primers to amplify all the strains for a given region in a single reaction. The major operational difference of run_tiled_primers compared to the JCVI pipeline is that run_tiled_primers does not require manual subdivision of the target sequences into high-consensus groups to be run separately by the user, and run_tiled_primers attempts to cover 100% of the target sequences in a single pass using a greedy minimal set algorithm.

Some regions of high conservation may have only one primer pair predicted to amplify all strain variants, while other regions may require many primers to cover all known variants. If multiple strains are present at once or if multiple forward and/or reverse primers in the multiplex amplify the strain present, the reaction will generate multiple overlapping amplicons spanning the same region, which could be problematic if exactly one amplicon sequence is needed, for example, for Sanger sequencing.
In this case, the JCVI Primer Designer would be preferable, since it designs primer pairs each to be run in singleplex reactions rather than as a multiplex, with the risk that outlier strains may not be amplified. However, when multiple overlapping reads with different endpoints or from different strains are acceptable, as in high-throughput sequencing, run_tiled_primers should be suitable and could serve as a good alternative to random amplification when more specific enrichment is needed and amplification of outliers is desired.

For the viral groups used here, the target sets included up to hundreds of sequences, and in many cases consensus was extremely low, as little as 5% of the bases in the multiple sequence alignment (Table 1). The JCVI Primer Designer pipeline, with its manual approach of subdividing the sequences into groups with 90% consensus and running each group separately, could be a labor-intensive endeavor and would certainly result in a large number of singleplex reactions to cover each genome.
Table 1: Summary of average lengths, number of sequences, and percentage of conserved bases in a multiple sequence alignment (with MUSCLE [5]), and the number of tiled primers required for the short and long amplicon settings.
| Organism | Number of sequences | Avg. length | Consensus (%) | Primers for ~3,000 bp amplicons | Primers for ~10,000 bp amplicons |
| --- | --- | --- | --- | --- | --- |
| CCHF_S | 56 | 1668 | 39 | 6 | 6 |
| CCHF_M | 49 | 5314 | 24 | 46 | 16 |
| CCHF_L | 31 | 12113 | 46 | 69 | 27 |
| RVF_S | 89 | 1684 | 53 | 2 | 2 |
| RVF_M | 69 | 3885 | 78 | 4 | 6 |
| RVF_L | 62 | 6404 | 83 | 6 | 4 |
| Ebola | 22 | 18659 | 5 | 116 | 35 |
| Marburg | 31 | 19115 | 70 | 34 | 8 |
| Hendra | 10 | 18234 | 97 | 12 | 4 |
| Nipah | 9 | 18247 | 91 | 18 | 6 |
| Junin_L | 12 | 7114 | 96 | 6 | 2 |
| Machupo_L | 5 | 7141 | 88 | 10 | 2 |
| Junin_S | 26 | 3410 | 80 | 4 | 4 |
| Machupo_S | 13 | 3432 | 76 | 4 | 4 |
| JEV | 144 | 10968 | 56 | 26 | 6 |
| NW_Arena_S | 100 | 3396 | 18 | 64 | 42 |
| NW_Arena_L | 42 | 7107 | 18 | 83 | 19 |
| OW_Arena_S | 54 | 3547 | 8 | 116 | 32 |
| OW_Arena_L | 45 | 7199 | 21 | 110 | 35 |
| TBEV | 67 | 10840 | 36 | 56 | 10 |
Abbreviations: CCHF = Crimean-Congo hemorrhagic fever, RVF = Rift Valley fever, JEV = Japanese encephalitis virus, NW_Arena = New World Arenavirus, OW_Arena = Old World Arenavirus, TBEV = tick-borne encephalitis virus, _L = L segment, _S = S segment.

Possible applications include target enrichment for viral discovery of new members of a viral family from a complex host background, improving high-throughput sequencing sensitivity and coverage of a rapidly evolving virus, or enriched coverage of variants in a gene family. We demonstrate the scalability of this software by designing whole genome amplification primers for a number of highly pathogenic viral groups which display very high levels of sequence variation, and for which we anticipate that targeted enrichment would be needed to obtain adequate sensitivity and genome coverage when sequencing from a clinical or environmental sample.
## 2. Implementation
### 2.1. Process
The run_tiled_primers process can be summarized as follows: split a multiple sequence alignment into overlapping regions, and for each region design a degenerate multiplex set of primers that in combination amplify that region in all strains with as few primers as possible. Run_tiled_primers takes as input a multiple sequence alignment (MSA) and splits the alignment into regions of size "s" bases that overlap by "x" bases (Figure 1).
Figure 1: Diagram showing how the multiple sequence alignment is split into overlapping sections; conserved, degenerate sets of primers are designed near the ends of the overlapping pieces so that overlapping amplicons should be produced which tile across the viral genome. FP = forward primer; RP = reverse primer.

When splitting the alignment into regions of size s, if the last "remainder" piece of an alignment is less than half of s, then s is increased to an s′ that divides the alignment evenly without any remainder, and the split regions are recalculated with s′. If a user desires to tile across only selected regions instead of tiling across the entire sequence, then an optional regions file may be specified which contains the regions (e.g., genes) and their start and end positions in the alignment.

For each region, the PriMux software [3] is used to search for conserved, degenerate, and multiplex-compatible primer sets to amplify that region in all target sequences with as few primers as possible. The PriMux "max" algorithm is used. Primers should be multiplex compatible, since the primers for a given region are predicted not to form primer dimers and all have Tm's in a range specified by the user. As run_tiled_primers is a wrapper script around the PriMux workhorse, all the primer design characteristics are specified in a PriMux options file. The minimum and maximum amplicon lengths are determined by the (s, x) parameters to run_tiled_primers (Table 2), so these parameters may be omitted in the input options file; if they are present, their values will be replaced with values appropriate for the specified values of (s, x). Run_tiled_primers requires that primers anneal within 0.5x of either end of the region. If the value of x is 36 bp or less, it is too short for two nonoverlapping primers, which are typically at least 18 bp long; in this case, the code does not require that adjacent regions overlap, and amplicons are allowed from anywhere in each region. Small overlaps (e.g., 40–80 bp) do not leave much room to find good priming regions that pass the filters on Tm, entropy, free energy, and homopolymers as specified in the options file, and consequently it may not be possible to find primers for all targets. When this happens, increasing the overlap and relaxing the primer specifications may be necessary.
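A simplified sketch of the region-splitting step follows. This is our illustration, not the PriMux code, and the exact remainder-adjustment rule in run_tiled_primers may differ in detail; here we simply recompute a larger s′ with one fewer region whenever the trailing piece would fall below s/2.

```python
def split_alignment(aln_len, s, x):
    """Split an alignment of aln_len columns into regions of size s
    that overlap by x columns. If the trailing remainder region would
    be shorter than s/2, enlarge s to an s' so that the regions divide
    the alignment evenly, mimicking the adjustment described above."""
    step = s - x
    regions = [(start, min(start + s, aln_len))
               for start in range(0, aln_len - x, step)]
    last_len = regions[-1][1] - regions[-1][0]
    if len(regions) > 1 and last_len < s / 2:
        # Recompute with one fewer region and a larger split size s'.
        n = len(regions) - 1
        s_prime = -(-(aln_len + (n - 1) * x) // n)  # ceiling division
        return split_alignment(aln_len, s_prime, x)
    return regions

# Example: a 7,000-column alignment with s = 3,000 and x = 500 yields
# three overlapping regions tiling the full alignment.
print(split_alignment(7000, 3000, 500))
```

Primers are then sought only within 0.5x of each region's ends, so the amplicons from adjacent regions overlap and can be assembled into a full genome.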
Table 2: Parameters used for primer design in the in silico examples and the MHV example presented here.
| Parameter | In silico primer settings | MHV primer settings |
| --- | --- | --- |
| Primer length range | 18–25 | 18–27 |
| Tm range allowed¹ | 60–65°C | 58–65°C |
| Number of degenerate bases allowed per primer | 5 | 3 |
| Minimum distance of degenerate base to 3′ end of primer | 3 nt | 3 nt |
| Minimum trimer entropy allowed (to avoid repetitive sequence)² | 3.5 | 3.3 |
| Maximum length of homopolymer allowed | 4 nt | 5 nt |
| GC% range allowed | 20–80 | 20–80 |
| Minimum primer dimer ΔG | −6 kcal/mol | −15 kcal/mol |
| Minimum hairpin ΔG | −5 kcal/mol | −12 kcal/mol |
| Primer selection iterations | 1 | 3 |

¹Tm is calculated using Unafold [6].
²Low-complexity regions (repetitive sequence) are excluded from consideration as primers by setting a minimum entropy threshold for a primer candidate. The entropy $S_i$ of a sequence was computed by counting the numbers of occurrences $n_{AAA}, n_{AAC}, \ldots, n_{TTT}$ of the 64 possible trimers in the probe sequence and dividing by the total number of trimers, yielding the corresponding frequencies $f_{AAA}, \ldots, f_{TTT}$. The entropy is then given by

$$S_i = -\sum_{t \,:\, f_t \neq 0} f_t \log_2 f_t,$$

where the sum runs over the trimers $t$ with $f_t \neq 0$.

Requiring that primers fall within 0.5x bases of the ends of each region facilitates the creation of amplicons which should overlap across a genome, allowing full genome assembly from the amplified products. There may not be amplicons covering the extreme 5′ and 3′ ends of a target sequence, since the first and last primers may be located some distance (at most x/2) from the ends. Rapid Amplification of cDNA Ends (RACE) PCR would be necessary to amplify the genome ends not covered by an overlapping region, priming with the reverse complement of the run_tiled_primers primers closest to the end so as to prime toward the edge of the genome.

Because the split size is based on the alignment, and since dashes in the alignment are not counted in amplicon length, actual amplicons may be substantially shorter than the split sizes. This is likely to happen for poorly aligning regions or regions in which there are insertions or deletions in a subset of the sequences. To compensate, one should select an s that is larger than the actual amplicon lengths desired, particularly if the length of the MSA is much larger than the average genome length.

Run_tiled_primers labels each overlapping region as #part, where # indicates the order of the regions; for example, 0part, 1part, and 2part are the three regions shown in Figure 1. For each region, sets of conserved, degenerate primers are designed to ensure amplification of all the targets, if possible, given the primer specifications.

The primers can be run in separate singleplex reactions for each split region, or, alternatively, the primers for all regions can be combined in a large multiplex after the large set is checked for primer dimers that could occur between primers from different regions. Combining primers for all regions in multiplex should facilitate whole genome amplification in a single reaction. It may yield longer amplicons from the reaction of forward and reverse primers from different parts (an FP from 0part reacting with an RP from 1part gives a product roughly twice the split size), depending on the polymerase processivity and the duration of the extension step, and should facilitate assembly across amplified regions. This helps alleviate cases where a primer cannot be found for one part in an outlier genome due to Tm, homopolymers, primer dimer ΔG, and so forth, since primers from different parts may amplify across the region. However, since primers of overlapping regions can also produce amplicons shorter (less than x bp) than the desired amplicon length of between s − x and s bp (e.g., the RP of 0part with the FP from 1part), a step to remove short amplicons before sequencing may be desired. In our experimental test with MHV, the primers from parts 0, 2, and 4 were combined in one reaction and the primers from parts 1 and 3 in another, so that short products would not be produced.

We used the script simulate_PCR.pl (https://sourceforge.net/projects/simulatepcr/ [7]) to predict all PCR amplicons from the multiplex degenerate primers compared to the target sequences and to the NCBI nt database. This script is run automatically by the run_tiled_primers code after it predicts primers. It is set to predict amplicons up to twice the maximum amplicon length specified by the user.
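The trimer-entropy filter in footnote 2 of Table 2 is straightforward to reproduce; a minimal sketch (our illustration, not the PriMux source) is:

```python
import math
from collections import Counter

def trimer_entropy(seq):
    """Shannon entropy (bits) over the overlapping trimers of a
    candidate primer, per footnote 2 of Table 2: count the 64
    possible trimers, convert counts to frequencies, and sum
    -f * log2(f) over the nonzero frequencies."""
    trimers = [seq[i:i + 3] for i in range(len(seq) - 2)]
    total = len(trimers)
    counts = Counter(trimers)
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

# A repetitive candidate falls below the 3.5-bit in silico threshold
# and is rejected, while a mixed-composition candidate passes.
print(trimer_entropy("ATATATATATATATATAT"))   # low entropy, rejected
print(trimer_entropy("ACGTTGCAGCTAGGATCCA"))  # higher entropy, kept
```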
### 2.2. Computational Examples
Computationally predicted tiled primer sets were generated for the viruses and primer specifications provided in Table 1. MSAs were created with MUSCLE [5]. Two settings of split size s and overlap size x were used: long amplicons with s = 10,000 and x = 500, or short amplicons with s = 3,000 and x = 500. The choice of which set to use could depend upon the product lengths the polymerase can amplify and the duration of the extension step of PCR. These fairly long amplicons are provided as theoretical examples; users may run run_tiled_primers with shorter amplicons (e.g., s = 400 bp) to divide the MSA into many more parts. One amplicon per target sequence per region was desired (PriMux option file with primer_selection_iterations = 1). Table 1 shows the average genome or segment length, the number of genomes available for each target, the % consensus among those sequences, and the total number of primers to amplify all overlapping regions of all genomes. All products from the nt database under 7,800 bp (shorter amplicon) or 26 kb (longer amplicon) were predicted with simulate_PCR to identify potential amplification of nontarget organisms (Tables 3 and 4).
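Conceptually, the cross-reaction check enumerates primer binding sites (allowing IUPAC degenerate bases) on each database sequence and reports forward/reverse pairs that fall within the length cap. The toy sketch below illustrates this idea only; it is not the simulate_PCR.pl implementation, and the function names and example sequences are our own.

```python
import re

# Map IUPAC degenerate bases to regex character classes.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "[AG]", "Y": "[CT]",
         "S": "[CG]", "W": "[AT]", "K": "[GT]", "M": "[AC]", "B": "[CGT]",
         "D": "[AGT]", "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]"}

COMP = str.maketrans("ACGTRYSWKMBDHVN", "TGCAYRSWMKVHDBN")

def revcomp(primer):
    return primer.translate(COMP)[::-1]

def sites(primer, seq):
    """All (possibly overlapping) start positions where the
    degenerate primer matches the sequence."""
    pattern = "".join(IUPAC[b] for b in primer)
    return [m.start() for m in re.finditer(f"(?={pattern})", seq)]

def amplicons(fwd_primers, rev_primers, seq, max_len):
    """Yield (start, end) of every predicted product: a forward hit
    followed, within max_len, by the reverse complement of a reverse
    primer on the same strand."""
    for fp in fwd_primers:
        for rp in rev_primers:
            for i in sites(fp, seq):
                for j in sites(revcomp(rp), seq):
                    end = j + len(rp)
                    if 0 < end - i <= max_len:
                        yield (i, end)

seq = "AATTGGCCATGCGTACGTTAGCATCGATCGATTTACGGCATGC"
print(list(amplicons(["ATGCRTACG"], ["GCCGTAAAT"], seq, 40)))
```

Running every multiplex against the whole nt database in this fashion is what produces the nontarget counts summarized in Tables 3 and 4.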
Table 3: Number of nontarget amplicons predicted in a multiplex reaction of tiled primers for 3 kb amplicons. In a multiplex of the 3 kb-amplicon tiled primers for a given organism, only a small number of the primer combinations producing products are predicted to amplify regions in nontarget organisms. Counts show the number of unique primer combinations in a multiplex that yield products for any sequence in the NCBI nt nucleotide database: the numerator is for any nontarget organism in nt and the denominator is for any target or nontarget organism in nt, that is, nonspecific/total of the possible primer combinations in the multiplex predicted to yield product when compared against nt. Vastly more amplicons are produced from target organisms, indicating any contaminating nontarget species should be a small minority of the amplified product.
| Organism | Nontarget amplicons/total amplicons | Nontarget amplicon source organism |
| --- | --- | --- |
| CCHF_S | 0/160 | — |
| CCHF_M | 0/1934 | — |
| CCHF_L | 0/3753 | — |
| RVF_S | 0/137 | — |
| RVF_M | 0/356 | — |
| RVF_L | 0/753 | — |
| Ebola | 1/2657 | Zea mays clone BAC ZMMBBb0342E21 |
| Marburg | 0/1511 | — |
| Hendra | 0/206 | — |
| Nipah | 0/286 | — |
| Junin_L | 0/69 | — |
| Machupo_L | 0/153 | — |
| Junin_S | 0/84 | — |
| Machupo_S | 0/32 | — |
| JEV | 7/9515 | Rocio, West Nile |
| NW_Arena_S | 56/1543 | Ippy, Lassa, Luna, Lymphocytic choriomeningitis, Mobala, Mopeia |
| NW_Arena_L | 0/819 | — |
| OW_Arena_S | 73/2509 | Allpahuayo, Amapari, Bear Canyon, Chapare, Cupixi, Dandenong, Flexal, Guanarito, Junin, Latino, Lujo, Machupo, Methylococcus capsulatus str. Bath, Parana, Pirital, Sabia, Tamiami, Whitewater Arroyo |
| OW_Arena_L | 1/1826 | Dandenong |
| TBEV | 0/4925 | — |
Table 4: Number of nontarget amplicons predicted in a multiplex reaction of tiled primers for 10 kb amplicons. As in Table 3, but for the multiplexes of the 10 kb-amplicon tiled primers.
| Organism | Nontarget amplicons/total amplicons | Nontarget amplicon source organism |
| --- | --- | --- |
| CCHF_S | 0/160 | — |
| CCHF_M | 0/261 | — |
| CCHF_L | 0/253 | — |
| RVF_S | 0/137 | — |
| RVF_M | 0/487 | — |
| RVF_L | 0/195 | — |
| Ebola | 0/534 | — |
| Marburg | 0/123 | — |
| Hendra | 0/50 | — |
| Nipah | 0/74 | — |
| Junin_L | 0/12 | — |
| Machupo_L | 0/7 | — |
| Junin_S | 0/95 | — |
| Machupo_S | 0/32 | — |
| JEV | 0/1554 | — |
| NW_Arena_S | 1/337 | Human chromosome 14 BAC C-2555K7 of library CalTech-D |
| NW_Arena_L | 0/86 | — |
| OW_Arena_S | 0/316 | — |
| OW_Arena_L | 0/131 | — |
| TBEV | 0/189 | — |
### 2.3. Murine Hepatitis Virus Example
Run_tiled_primers was used to design primers for selected regions of the coronavirus murine hepatitis virus (strain MHV-1) genome following passage in the lab, for a separate project in which deep sequencing of selected regions following lab passage was performed. In other work attempting to amplify passaged RNA viruses, finding robust primers based on the original genome was difficult due to mutations which modified primer binding sites [8]. It was hoped that run_tiled_primers would help avoid selecting primers in mutational hotspots by taking into account strain variation across multiple available genomes for the species, since run_tiled_primers seeks maximally conserved primers in the available sequences.

Input to run_tiled_primers was an alignment of 22 MHV genomes (genome identities provided as supplementary information) created using MUSCLE [5]. The regions tiled were Nsp1, Nsp3, Nsp14, and several genes at the 3′ end of the genome (regions file provided in supplementary information), using the primer parameters in Table 2. Primer sets were predicted to produce overlapping amplicons for these regions from all MHV genomes, and a subset of primers predicted to amplify the MHV-1 or MHV strain JHM genome was selected. Some primers that were predicted to amplify the JHM strain but not the MHV-1 strain were included in the multiplex, to check for possible evolutionary change of the original sequence toward the annotated reference JHM sequence or for cross reactions with primer-genome mismatches.

Samples from MHV-1-infected mice were provided by Dr. Richard Bowen at Colorado State University. The MHV-1 strain used to infect the mice was obtained from the American Type Culture Collection (Manassas, VA), and viral stock was propagated in murine fibroblast 17Cl-1 cells and then used to infect C3H mice via the intranasal route. Mice were sacrificed four days after inoculation, and bronchoalveolar lavage (BAL) fluid was collected. RNA was extracted from the BAL samples using Invitrogen TRIZOL reagent, as per the manufacturer's instructions. RNA was converted to cDNA using Superscript III (Invitrogen) and random hexamers according to the manufacturer's protocol.

Multiplexed primer sets were designed to cover the Nsp3 and 3′ genes with three primer pairs per genomic region amplified when possible (the total number of primers tested in two multiplex reactions was 53, Table S1). The primers were tested in the lab first as primer pairs in individual reactions and then as multiplexed reactions. No effort was made to optimize the PCR cycling conditions. RT-PCR conditions were as follows: reverse transcription was performed using random hexamers and the Superscript III reverse transcriptase kit (Invitrogen). The MHV-1 cDNA templates were amplified using the Q5 Hot Start High-Fidelity DNA Polymerase kit (New England BioLabs, Ipswich, MA), following the manufacturer's instructions. PCR conditions consisted of 98°C for 30 s, followed by 35 cycles of 98°C for 10 s, 60°C for 20 s, and 72°C for 1 min. The final cycle was 72°C for 2 min.

Two multiplex reactions were set up, each containing a group of nonoverlapping primer sets (Figure 2). For example, multiplex "A" included primer sets A, C, E, G, and I, and multiplex "B" had primer sets B, D, F, and H. By staggering the primer sets into different multiplex reactions, amplification of the overlap regions by the reverse primer from one set with the forward primer of the overlapping, adjacent primer set was eliminated.
Without this strategy, these overlapping primer sets would dominate the PCR reaction due to the small size of these amplicons.Figure 2
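For illustration only, the pool-staggering rule can be expressed as a short Python sketch. The region labels A–I are taken from the MHV example above; the function itself is a hypothetical illustration, not part of the run_tiled_primers distribution.

```python
# Hypothetical sketch: split ordered, tiled primer sets into two staggered
# multiplex pools so that no pool contains two sets whose target regions
# overlap (adjacent regions overlap by design of the tiling).

def stagger_pools(regions):
    """Alternate adjacent (overlapping) region sets between two pools."""
    return regions[0::2], regions[1::2]

pool_a, pool_b = stagger_pools(list("ABCDEFGHI"))
print(pool_a)  # ['A', 'C', 'E', 'G', 'I'] -> multiplex "A"
print(pool_b)  # ['B', 'D', 'F', 'H']      -> multiplex "B"
```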
Figure 2
Diagram of the murine hepatitis virus (MHV) genome regions for which primer sets were tested. The approximate position of each region amplified by the primer sets is shown (the MHV genome is not drawn to scale). Each multiplex reaction consisted of primer sets that do not overlap in the regions amplified. Each region is amplified using 3 forward primers and 3 reverse primers (Table S1; see Supplementary Material available online at http://dx.doi.org/10.1155/2014/101894). For example, the A primer set consists of 3 forward primers (A1F, A2F, and A3F) and 3 reverse primers (A1R, A2R, and A3R). To verify that each region is amplified in the multiplex reaction, a second set of seminested PCRs was performed using the amplicons from the multiplex reaction as a template. For example, to ensure that region A was amplified, the PCR product from the A mix multiplex was diluted 1 : 10,000 and used as template in a PCR reaction with the AR1 primer paired with BF2 (Table S2). Primers are labeled according to genome region (A–I) and primer direction (F = forward, R = reverse).

The amplification of each primer pair in the multiplex was tested using a seminested PCR strategy to verify that the correct, specific amplicons were being produced from each multiplex of primers for a given region (Figure 2, Table S2). The multiplex PCR products served as templates for PCR reactions with primer pairs that paired the reverse primer of one region with the forward primer from the downstream adjacent region, to determine whether the template generated from the multiplex was present. To ensure that the PCR product was generated from the multiplex product template rather than from genomic DNA carried over from the initial sample, the multiplex product template was diluted 1 : 10,000, or excised from a gel and purified, prior to use as a template.
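As a companion to the staggering sketch above, the seminested verification pairing can be written in the same hypothetical terms: the reverse primer of each region is paired with the forward primer of the downstream adjacent region, as in the AR1 + BF2 example. The specific primer numbers used in the paper came from Table S2, which is not reproduced here.

```python
# Hypothetical sketch: derive one seminested verification primer pair per
# region boundary, pairing each region's reverse primer with the forward
# primer of the next (downstream) region.

def seminested_pairs(regions):
    return [(f"{a}R", f"{b}F") for a, b in zip(regions, regions[1:])]

print(seminested_pairs(list("ABCDEFGHI")))
# [('AR', 'BF'), ('BR', 'CF'), ..., ('HR', 'IF')]
```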
## 3. Results and Discussion
All the primers for both (s, x) settings are provided as Supplementary Data, as are the predicted amplicon start and end positions in each target genome from a multiplex of the primers for a given viral target set. Tiled amplification of these viruses required from 2 to 116 primers (Table 1). Primers are predicted to be specific to the target organisms for the most part, although not exclusively (Tables 3 and 4; a minimal sketch of the underlying tally is given at the end of this section). The few cases of off-target amplification come from closely related organisms in the same family, such as Old World (OW) and New World (NW) Arenaviruses, or other Flaviviruses amplified by the Japanese encephalitis virus (JEV) multiplex. The three exceptions were a single amplicon of 2830 bp from a BAC clone of Zea mays (maize) from the Ebola 3 kb multiplex, a single amplicon of 3610 bp from Methylococcus capsulatus str. Bath from the OW Arena S segment 3 kb multiplex, and a single amplicon of 851 bp from a human BAC from a library at CalTech. All three of these predicted nontarget amplicons result from a single primer in each of those reactions acting as both forward primer (FP) and reverse primer (RP). Nonetheless, the primer multiplexes described here should strongly favor preferential enrichment of the desired targets.

Deriving each primer set required a multiple sequence alignment and a single call to run_tiled_primers in the current PriMux software distribution (http://sourceforge.net/projects/primux/). In comparison, primer design with the JCVI pipeline for any of these target sets would require the following steps: (1) inspecting a phylogeny for the full target set to build multiple smaller clade-level sets with no more than 10% sequence variation, (2) realigning the clade-level sets, (3) running the JCVI pipeline on each clade set, (4) assessing which target sequences are not amplified after one design round and rerunning the pipeline on those sequences for each clade, and (5) repeating step (4) until all target sequences are predicted to be amplified.
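The nontarget/total counts in Tables 3 and 4 can be summarized by a short Python sketch. The three-column, tab-separated amplicon report (forward primer, reverse primer, hit organism) and the file name are assumptions made for illustration; the actual simulate_PCR output format is not shown in this excerpt.

```python
# Sketch (assumed I/O format): count the unique primer combinations in a
# multiplex that are predicted to yield product against the nt database,
# and how many of those combinations hit any nontarget organism.
import csv

def tally_amplicons(report_tsv, target_keywords):
    nontarget, total = set(), set()
    with open(report_tsv) as fh:
        for forward, reverse, organism in csv.reader(fh, delimiter="\t"):
            combo = (forward, reverse)
            total.add(combo)
            # A combination is nontarget if this hit matches no target taxon.
            if not any(k.lower() in organism.lower() for k in target_keywords):
                nontarget.add(combo)
    return len(nontarget), len(total)

# e.g., tally_amplicons("ebola_3kb_hits.tsv", ["Ebola"]) would give
# (1, 2657) for the Ebola 3 kb multiplex per Table 3.
```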
## 4. MHV Results
Multiplexed primers were tested in the lab first as primer pairs in individual reactions and then as multiplexed reactions. Twenty-two of the primer pairs worked; four failed to give a product and were paired with other primers in subsequent testing or, if necessary, replaced with an alternative primer. Amplicons were detected in the expected size ranges, confirming amplification of the expected regions from the multiplexed sets (Figure S1). In some cases extra bands were present, but they were generally smaller than the targeted size; this was common when the template cDNA was obtained from a clinical sample rather than from the high-titer, cell-culture-derived viral stock from this study. The PCR products generated with these highly multiplexed assays were then sequenced using Illumina ultradeep sequencing with a high-fidelity polymerase. These primers yielded high coverage of the genomic regions amplified by the multiplex primers, averaging 150,000×.
## 5. Conclusions
Software is described to generate tiled, multiplexed, degenerate amplification primers that span entire genomes or selected regions across many variant sequences. This tool should facilitate the amplification of overlapping products across whole genomes or user-specified regions of target sets with high levels of variation. Applications include target enrichment for viral discovery of new members of a viral family from a complex host background, improving high-throughput sequencing sensitivity and coverage of a rapidly evolving virus, and obtaining enriched coverage of variants in a gene family.
---
*Source: 101894-2014-08-03.xml* | 101894-2014-08-03_101894-2014-08-03.md | 44,631 | Multiplex Degenerate Primer Design for Targeted Whole Genome Amplification of Many Viral Genomes | Shea N. Gardner; Crystal J. Jaing; Maher M. Elsheikh; José Peña; David A. Hysom; Monica K. Borucki | Advances in Bioinformatics (2014) | Biological Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2014/101894 | 101894-2014-08-03.xml | ---
## Abstract
Background. Targeted enrichment improves coverage of highly mutable viruses at low concentration in complex samples. Degenerate primers that anneal to conserved regions can facilitate amplification of divergent, low concentration variants, even when the strain present is unknown. Results. A tool for designing multiplex sets of degenerate sequencing primers to tile overlapping amplicons across multiple whole genomes is described. The new script, run_tiled_primers, is part of the PriMux software. Primers were designed for each segment of South American hemorrhagic fever viruses, tick-borne encephalitis, Henipaviruses, Arenaviruses, Filoviruses, Crimean-Congo hemorrhagic fever virus, Rift Valley fever virus, and Japanese encephalitis virus. Each group is highly diverse with as little as 5% genome consensus. Primer sets were computationally checked for nontarget cross reactions against the NCBI nucleotide sequence database. Primers for murine hepatitis virus were demonstrated in the lab to specifically amplify selected genes from a laboratory cultured strain that had undergone extensive passage in vitro and in vivo. Conclusions. This software should help researchers design multiplex sets of primers for targeted whole genome enrichment prior to sequencing to obtain better coverage of low titer, divergent viruses. Applications include viral discovery from a complex background and improved sensitivity and coverage of rapidly evolving strains or variants in a gene family.
---
## Body
## 1. Background
Sequencing whole genomes of potentially heterogeneous or divergent viruses can be challenging from a small or complex sample with low viral concentrations. Deep sequencing to detect rare viral variants or metagenomic sequencing to genotype viruses from a complex background requires targeted viral amplification. Techniques such as consensus PCR, Ion Ampliseq (Life Technologies) [1], TruSeq Amplicon (Illumina), and Haloplex (Agilent) [2] apply highly multiplexed PCR for target enrichment. Targeted enrichment should preferentially amplify the target virus over host or environmental DNA/RNA, in contrast to random amplification commonly used prior to whole genome sequencing. Primers designed to tile amplicons across a set of related viral genomes prior to sequencing can enrich whole viral genomes or large regions. However, high levels of intraspecific sequence variation combined with low virus concentrations mean that standard PCR primer design from a reference may fail due to mutations in the sample virus that prevent primer binding. To address this problem, we added a capability to the PriMux software distribution (http://sourceforge.net/projects/primux/) called run_tiled_primers that applies the PriMux software [3] to automate PCR primer design to achieve a near-minimal set of conserved, degenerate, multiplex-compatible primers designed to tile overlapping regions across multiple related whole genomes or regions.JCVI has an automated degenerate PCR primer design system called JCVI Primer Designer, which is similar to run_tiled_primers in that it designs degenerate primers to tile across viral genomes [4]. The major difference is that it begins with a consensus sequence containing degenerate bases and selects primers with fewer than 3 or 4 degenerate bases, so that in the end a majority of strains are amplified, but it does not require primers to amplify all strains. In their examples, most of the primer pairs could amplify >75% of isolates. Each primer pair for a given region is intended to be run as a specific pair, not as a multiplex with multiple pairs. Consensus sequences with too little conservation, that is, <90% consensus, are divided manually in a preprocessing step into subgroups which can be run separately through the pipeline. The method here differs in that it takes the full multiple sequence alignment as input rather than a consensus, and it seeks to automatically design a minimal, degenerate set of multiplex compatible primers to amplify all the strains for a given region in a single reaction. The major operational difference of run_tiled_primers compared to the JCVI pipeline is that run_tiled_primers does not require manual subdivision of the target sequences into high consensus groups to be run separately by the user, and run_tiled_primers attempts to cover 100% of the target sequences in a single pass using a greedy minimal set algorithm.Some regions of high conservation may have only one primer pair predicted to amplify all strain variants, while other regions may require many primers to cover all known variants. If multiple strains are present at once or if multiple forward and/or reverse primers in the multiplex amplify the strain present, the reaction will generate multiple overlapping amplicons spanning the same region, which could be problematic if exactly one amplicon sequence is needed, for example, for Sanger sequencing. 
In this case, the JCVI Primer Designer would be preferable since it designs primer pairs each to be run in singleplex reactions rather than as a multiplex, with the risk that outlier strains may not be amplified. However, when multiple overlapping reads with different endpoints or from different strains are acceptable, as in high throughput sequencing, run_tiled_primers should be suitable and could serve as a good alternative to random amplification when more specific enrichment is needed, and amplification of outliers is desired.For the viral groups we used here, the target sets included up to hundreds of sequences, and in many cases consensus was extremely low, as little as 5% of the bases in the multiple sequence alignment (Table1). The JCVI Primer Designer pipeline with a manual approach of subdividing the sequences into groups with 90% consensus and running each group separately could be a labor-intensive endeavor and would certainly result in a large number of singleplex reactions to cover each genome.Table 1
Summary of average lengths, number of sequences, and percentage of conserved bases in a multiple sequence alignment (with MUSCLE [5]), and number of tiled primers required for the short and long amplicon settings.
Organism
Number of sequences
Avg. Length
Consensus (%)
Number of primers for ~3,000 bp amplicons
Number of primers for ~10,000 bp amplicons
CCHF_S
56
1668
39
6
6
CCHF_M
49
5314
24
46
16
CCHF_L
31
12113
46
69
27
RVF_S
89
1684
53
2
2
RVF_M
69
3885
78
4
6
RVF_L
62
6404
83
6
4
Ebola
22
18659
5
116
35
Marburg
31
19115
70
34
8
Hendra
10
18234
97
12
4
Nipah
9
18247
91
18
6
Junin_L
12
7114
96
6
2
Machupo_L
5
7141
88
10
2
Junin_S
26
3410
80
4
4
Machupo_S
13
3432
76
4
4
JEV
144
10968
56
26
6
NW_Arena_S
100
3396
18
64
42
NW_Arena_L
42
7107
18
83
19
OW_Arena_S
54
3547
8
116
32
OW_Arena_L
45
7199
21
110
35
TBEV
67
10840
36
56
10
Abbreviations: CCHF = Crimean-Congo hemorrhagic fever, RVF = Rift Valley fever, JEV = Japanese encephalitis virus, NW_Arena = New World Arenavirus, OW_Arena = Old World Arenavirus, TBEV = tick-borne encephalitis virus, _L = L segment, _S = S segment.Possible applications include target enrichment for viral discovery of new members in a viral family from a complex host background, improving high throughput sequencing sensitivity and coverage of a rapidly evolving virus, or enriched coverage of variants in a gene family. We demonstrate the scalability of this software for designing whole genome amplification primers for a number of highly pathogenic viral groups which display very high levels of sequence variation, and for which we anticipate that targeted enrichment would be needed to obtain adequate sensitivity and genome coverage when sequencing from a clinical or environmental sample.
## 2. Implementation
### 2.1. Process
The run_tiled_primers process can be summarized as follows: split a multiple sequence alignment into overlapping regions, and for each region design a degenerate multiplex set of primers that in combination amplify that region in all strains with as few primers as possible. Run_tiled_primers takes as input a multiple sequence alignment (MSA). Run_tiled_primers splits the alignment into regions of size “s” bases that overlap by “x” bases (Figure 1).Figure 1
Diagram showing how the multiple sequence alignment is split into overlapping sections, and conserved; degenerate sets of primers are designed near the ends of the overlapping pieces so that overlapping amplicons should be produced which tile across the viral genome. FP = forward primer; RP = reverse primer.When splitting the alignment into regions of sizes, if the last “remainder” piece of an alignment is less than half of s, then s is increased by the amount that evenly divides the alignment without any remainder to s
′, and the split regions are recalculated with s
′. If a user desires to tile across only selected regions instead of tiling across the entire sequence, then an optional regions file may be specified which contains the regions (e.g., genes) and their start and end positions in the alignment.For each region, the PriMux software [3] is used to search for conserved, degenerate, and multiplex compatible primer sets to amplify that region in all target sequences with as few primers as possible. The PriMux “max” algorithm is used. Primers should be multiplex compatible since the primers for a given region are predicted not to form primer dimers and all to have T
m’s in a range specified by the user. As run_tiled_primers is a wrapper script around the PriMux workhorse, all the primer design characteristics are specified in a PriMux options file. The minimum and maximum amplicon lengths are determined by the (
s
,
x
) parameters to run_tiled_primers (Table 2), so these parameters may be omitted in the input options file or if they are present, their values will be replaced with values appropriate for the specified values of (
s
,
x
). Run_tiled_primers requires that primers must anneal within 0.5
x of either end of the region. If the value of x is 36 bp or less, it is too short for two nonoverlapping primers, typically at least 18 bp long. In this case, the code does not require that adjacent regions overlap and amplicons are allowed from anywhere in each region. Small overlaps (e.g., 40–80) do not leave much room to find good priming regions that pass the filters on T
m, entropy, free energy, and homopolymers as specified in the options file, and consequently it may not be possible to find primers for all targets. When this happens, increasing the overlap and relaxing the primer specifications may be necessary.Table 2
Parameters used for primer design inin silico examples and MHV example presented here.
In silico primer settings
MHV primer settings
Primer length range
18–25
18–27
T
m range allowed1
60–65°C
58–65°C
Number degenerate bases allowed per primer
5
3
Minimum distance of degenerate base to 3′ end of primer
3 nt
3 nt
Minimum trimer entropy allowed (to avoid repetitive sequence)2
3.5
3.3
Maximum length of homopolymer allowed
4 nt
5 nt
GC% range allowed
20–80
20–80
Minimum primer dimerΔ
G
−6 kcal/mol
−15 kcal/mol
Minimum hairpinΔ
G
−5 kcal/mol
−12 kcal/mol
Primer selection iterations
1
3
1
T
m is calculated using Unafold [6].
2Low complexity regions (repetitive sequence) are excluded from consideration as primers by setting a minimum entropy threshold for a primer candidate. The entropy S
i of a sequence was computed by counting the numbers of occurrences of n
A
A
A
,
n
A
A
C
,
…
,
n
T
T
T of the 64 possible trimers in the probe sequence, and dividing by the total number of trimers, yielding the corresponding frequencies f
A
A
A
,
…
,
f
T
T
T. The entropy is then given by the sum of -
f
t
log
2
f
t where the sum is over the trimers t with f
t
≠
0.Requiring that primers fall within0.5
x bases of the ends of each region facilitates the creation of amplicons which should overlap across a genome, allowing full genome assembly from the amplified products. There may not be amplicons covering the extreme 5′ and 3′ ends of a target sequence, since the first and last primers may be located some distance (maximum of x/2) from the ends. Rapid Amplification of cDNA Ends (RACE) PCR would be necessary to amplify the genome ends not covered by an overlapping region, priming with the reverse complement of the run_tiled_primers primers closest to the end so as to prime toward the edge of the genome.Because this split size is based on the alignment and since dashes in the alignment are not counted in amplicon length, actual amplicons may be substantially shorter than the split sizes. This is likely to happen for poorly aligning regions or regions in which there are insertions or deletions in a subset of the sequences. To compensate for this, one should select s that is larger than the actual amplicon lengths desired, particularly if the length of the MSA is much larger than the average genome length.Run_tiled_primers labels each overlapping region as #part, where # indicates the order of the regions, for example, 0part, 1part, and 2part are the three regions shown in Figure1. For each region, sets of conserved, degenerate primers are designed to ensure amplification of all the targets, if possible, given the primer specifications.The primers can be run in separate singleplex reactions for each split region, or, alternatively, primers for all regions can be combined in a large multiplex after the large set is checked for primer dimers that could occur between primers from different regions. Combining primers for all regions in multiplex should facilitate whole genome amplification in a single reaction. It may yield longer amplicons from the reaction of forward and reverse primers from different parts (FP from 0part reacting with RP from 1part gives product ~2 times the split size), depending on the polymerase processivity and the duration of the extension step, and should facilitate assembly across amplified regions. This helps alleviate cases where a primer cannot be found for one part in an outlier genome due toT
m, homopolymers, primer dimer Δ
G, and so forth, since primers from different parts may amplify across the region. However, since primers of overlapping regions can also produce amplicons shorter (less than x bp) than the desired amplicon of length between s
-
x and s bp (e.g., RP of 0part with the FP from 1part), a step to remove short amplicons before sequencing may be desired. In our experimental test with MHV, the primers from parts 0, 2, and 4 were combined in one reaction and the primers from parts 1 and 3 were combined in another, so that short products would not be produced.We used the script simulate_PCR.pl (https://sourceforge.net/projects/simulatepcr/ [7]) to predict all PCR amplicons from the multiplex degenerate primers compared to the target sequences and to the NCBI nt database. This script is run automatically from the run_tiled_primers code after it predicts primers. It is set to predict amplicons up to twice the maximum amplicon length specified by the user.
### 2.2. Computational Examples
Computationally predicted tiled primer sets were generated for the viruses and primer specifications provided in Table1. MSAs were created with MUSCLE [5]. Two settings of split size s and overlap size x were used: long amplicons with s
=
10,000, x
=
500; or short amplicons of s
=
3000, x
=
500. The choice of which set to use could depend upon the product lengths the polymerase can amplify and the duration of the extension step of PCR. These fairly long amplicons are provided as theoretical examples. Users may run run_tiled_primers with shorter amplicons (e.g., s
=
400 bp) to divide the MSA into many more parts. One amplicon per target sequence per region was desired (PriMux option file with - primer_selection_iterations = 1). Table 1 shows the average genome or segment length, the number of genomes available for each target, the % consensus among those sequences, and the total number of primers to amplify all overlapping regions of all genomes. All products from the nt database under 7800 bp (shorter amplicon) or 26 kb (longer amplicon) were predicted with simulate_PCR to identify potential amplification of nontarget organisms (Tables 3 and 4).Table 3
Number of nontarget amplicons predicted in a multiplex reaction of tiled primers for 3 kb amplicons. In a multiplex of the 3 kb-amplicon tiled primers for a given organism, of the possible reactions producing products, only a small number of primer combinations are predicted to amplify regions in nontarget organisms. Counts show the number of unique primer combinations in a multiplex that yield products for any sequence in the NCBI nt nucleotide database. The numerator is for any nontarget organism in nt and the denominator is for any target or nontarget organism in nt, that is, nonspecific/total of the possible primer combinations in the multiplex predicted to yield product when compared against nt. Vastly more amplicons are produced from target organisms, indicating any contaminating nontarget species should be a small minority of amplified product.
Organism
Nontarget amplicons/total amplicons
Nontarget amplicon source organism
CCHF_S
0/160
—
CCHF_M
0/1934
—
CCHF_L
0/3753
—
RVF_S
0/137
—
RVF_M
0/356
—
RVF_L
0/753
—
Ebola
1/2657
Zea mays clone BAC ZMMBBb0342E21
Marburg
0/1511
—
Hendra
0/206
—
Nipah
0/286
—
Junin_L
0/69
—
Machupo_L
0/153
—
Junin_S
0/84
—
Machupo_S
0/32
—
JEV
7/9515
RocioWest Nile
NW_Arena_S
56/1543
IppyLassa Luna Lymphocytic choriomeningitis Mobala Mopeia
NW_Arena_L
0/819
—
OW_Arena_S
73/2509
AllpahuayoAmapari Bear canyon Chapare Cupixi Dandenong Flexal Guanarito Junin LatinoLujoMachupo
Methylococcus capsulatus str. BathParanaPiritalSabiaTamiamiWhitewater Arroyo
OW_Arena_L
1/1826
Dandenong
TBEV
0/4925
—Table 4
Number of nontarget amplicons predicted in a multiplex reaction of tiled primers for 10 kb amplicons. As in Table3, but for the multiplexes of the 10 kb-amplicon tiled primers.
Organism
Nontarget amplicons/total amplicons
Nontarget amplicon source organism
CCHF_S
0/160
—
CCHF_M
0/261
—
CCHF_L
0/253
—
RVF_S
0/137
—
RVF_M
0/487
—
RVF_L
0/195
—
Ebola
0/534
—
Marburg
0/123
—
Hendra
0/50
—
Nipah
0/74
—
Junin_L
0/12
—
Machupo_L
0/7
—
Junin_S
0/95
—
Machupo_S
0/32
—
JEV
0/1554
—
NW_Arena_S
1/337
Human chromosome 14 BAC C-2555K7 of library CalTech-D
NW_Arena_L
0/86
—
OW_Arena_S
0/316
—
OW_Arena_L
0/131
—
TBEV
0/189
—
### 2.3. Murine Hepatitis Virus Example
Run_tiled_primers was used to design primers for selected regions of the coronavirus murine hepatitis virus (strain MHV-1) genome following passage in the lab, for a separate project in which deep sequencing of selected regions following lab passage was performed. In other work attempting to amplify passaged RNA viruses, finding robust primers based on the original genome was difficult due to mutations which modified primer binding sites [8]. It was hoped that run_tiled_primers would help avoid selecting primers in mutational hotspots by taking into account strain variation across multiple available genomes for the species, since run_tiled_primers seeks maximally conserved primers in the available sequences.Input to run_tiled_primers was an alignment of 22 MHV genomes (genome identities provided as supplementary information) created using MUSCLE [5]. Regions tiled were the Nsp1, Nsp3, Nsp14, and several genes at the 3′ end of the genome (regions file provided in supplementary information), using the primer parameters in Table 2. Primer sets were predicted to produce overlapping amplicons for these regions from all MHV genomes, and a subset of primers predicted to amplify the MHV-1 or MHV strain JHM genome was selected. Some primers that were predicted to amplify the JHM strain but not the MHV-1 strain were included in the multiplex, to check for possible evolutionary change of the original sequence toward the annotated reference JHM sequence or cross reactions with primer-genome mismatches.Samples from MHV-1 infected mice were provided by Dr. Richard Bowen at Colorado State University. The MHV-1 strain used to infect the mice was obtained from American Type Culture Collection (Manassas, VA) and viral stock was propagated in murine fibroblast 17Cl-1 cells then used to infect C3H mice via intranasal route. Mice were sacrificed four days after inoculation and bronchoalveolar lavage (BAL) fluid was collected. RNA was extracted from the BAL samples using Invitrogen TRIZOL reagent, as per the manufacturer’s instructions. RNA was converted to cDNA using Superscript III (Invitrogen) and random hexamers according to the manufacturer’s protocol.Multiplexed primer sets were designed to cover the Nsp3 and 3′ genes with 3 primer pairs per genomic region amplified when possible (total number of primers tested in two multiplex reactions was 53, Table S1). The primers were tested in the lab first by testing the primer pairs in individual reactions then as multiplexed reactions. No effort was made to optimize the PCR cycling conditions. RT-PCR conditions were as follows: reverse transcription was performed using random hexamers and the Superscript III RT reverse transcriptase kit (Invitrogen). The MHV-1 cDNA templates were amplified using the Q5 Hot Start High-Fidelity DNA Polymerase kit (New England BioLabs, Ipswich, MA), following manufacturer’s instructions. PCR conditions consisted of 98°C for 30 s, followed by 35 cycles of 98°C for 10 s, 60°C for 20 s, and 72°C for 1 min. The final cycle was 72°C for 2 min.Two multiplex reactions were set up with each containing a group of nonoverlapping primer sets (Figure2). For example, multiplex “A” included primer sets A, C, E, G, and I and multiplex “B” had primer sets B, D, F, and H. By staggering the primer sets into different multiplex reactions, the amplification of overlapping primer regions created by the reverse primer from one set with the forward primer of the overlapping, adjacent primer set was eliminated. 
Without this strategy, these overlapping primer sets would dominate the PCR reaction due to the small size of these amplicons.Figure 2
Diagram of the murine hepatitis virus (MHV) genome regions for which primer sets were tested. The approximate position of each region amplified by primer sets is shown (MHV genome is not drawn to scale). Each multiplex reaction consisted of primer sets that do not overlap in regions amplified. Each region is amplified using 3 forward primers and 3 reverse primers (Table S1; see Supplementary Material available online athttp://dx.doi.org/10.1155/2014/101894). For example, the A primer set consists of 3 forward primers (A1F, A2F, and A3F) and 3 reverse primers (A1R, A2R, and A3R). To verify that each region is amplified in the multiplex reaction, a second set of seminested PCRs were performed using the amplicons from the multiplex reaction as a template. For example, to ensure region A was amplified, the PCR product from the A mix multiplex was diluted 1 : 10,000 and used as template in a PCR reaction with AR1 primer paired with BF2 (Table S2). Primers are labeled according to genome region (A-I) and primer direction (F = forward, R = reverse).The amplification of each primer pair in the multiplex was tested using a seminested PCR strategy to verify that the correct, specific amplicons were being produced from each multiplex of primers for a given region (Figure2, Table S2). The multiplex PCR products served as templates for PCR reactions with primer pairs that included the reverse primer of one region paired with the forward primer from the downstream adjacent region to determine if the template generated from the multiplex was present. To ensure that the PCR product was generated from the multiplex product template rather than genomic DNA carried over from the initial sample, the multiplex product template was diluted 1 : 10,000 or excised from a gel and purified prior to use as a template.
## 2.1. Process
The run_tiled_primers process can be summarized as follows: split a multiple sequence alignment into overlapping regions, and for each region design a degenerate multiplex set of primers that in combination amplify that region in all strains with as few primers as possible. Run_tiled_primers takes as input a multiple sequence alignment (MSA). Run_tiled_primers splits the alignment into regions of size “s” bases that overlap by “x” bases (Figure 1).Figure 1
Diagram showing how the multiple sequence alignment is split into overlapping sections, and conserved; degenerate sets of primers are designed near the ends of the overlapping pieces so that overlapping amplicons should be produced which tile across the viral genome. FP = forward primer; RP = reverse primer.When splitting the alignment into regions of sizes, if the last “remainder” piece of an alignment is less than half of s, then s is increased by the amount that evenly divides the alignment without any remainder to s
′, and the split regions are recalculated with s
′. If a user desires to tile across only selected regions instead of tiling across the entire sequence, then an optional regions file may be specified which contains the regions (e.g., genes) and their start and end positions in the alignment.For each region, the PriMux software [3] is used to search for conserved, degenerate, and multiplex compatible primer sets to amplify that region in all target sequences with as few primers as possible. The PriMux “max” algorithm is used. Primers should be multiplex compatible since the primers for a given region are predicted not to form primer dimers and all to have T
m’s in a range specified by the user. As run_tiled_primers is a wrapper script around the PriMux workhorse, all the primer design characteristics are specified in a PriMux options file. The minimum and maximum amplicon lengths are determined by the (
s
,
x
) parameters to run_tiled_primers (Table 2), so these parameters may be omitted in the input options file or if they are present, their values will be replaced with values appropriate for the specified values of (
s
,
x
). Run_tiled_primers requires that primers must anneal within 0.5
x of either end of the region. If the value of x is 36 bp or less, it is too short for two nonoverlapping primers, typically at least 18 bp long. In this case, the code does not require that adjacent regions overlap and amplicons are allowed from anywhere in each region. Small overlaps (e.g., 40–80) do not leave much room to find good priming regions that pass the filters on T
m, entropy, free energy, and homopolymers as specified in the options file, and consequently it may not be possible to find primers for all targets. When this happens, increasing the overlap and relaxing the primer specifications may be necessary.Table 2
Parameters used for primer design inin silico examples and MHV example presented here.
In silico primer settings
MHV primer settings
Primer length range
18–25
18–27
T
m range allowed1
60–65°C
58–65°C
Number degenerate bases allowed per primer
5
3
Minimum distance of degenerate base to 3′ end of primer
3 nt
3 nt
Minimum trimer entropy allowed (to avoid repetitive sequence)2
3.5
3.3
Maximum length of homopolymer allowed
4 nt
5 nt
GC% range allowed
20–80
20–80
Minimum primer dimerΔ
G
−6 kcal/mol
−15 kcal/mol
Minimum hairpinΔ
G
−5 kcal/mol
−12 kcal/mol
Primer selection iterations
1
3
1
T
m is calculated using Unafold [6].
2Low complexity regions (repetitive sequence) are excluded from consideration as primers by setting a minimum entropy threshold for a primer candidate. The entropy S
i of a sequence was computed by counting the numbers of occurrences of n
A
A
A
,
n
A
A
C
,
…
,
n
T
T
T of the 64 possible trimers in the probe sequence, and dividing by the total number of trimers, yielding the corresponding frequencies f
A
A
A
,
…
,
f
T
T
T. The entropy is then given by the sum of -
f
t
log
2
f
t where the sum is over the trimers t with f
t
≠
0.Requiring that primers fall within0.5
x bases of the ends of each region facilitates the creation of amplicons which should overlap across a genome, allowing full genome assembly from the amplified products. There may not be amplicons covering the extreme 5′ and 3′ ends of a target sequence, since the first and last primers may be located some distance (maximum of x/2) from the ends. Rapid Amplification of cDNA Ends (RACE) PCR would be necessary to amplify the genome ends not covered by an overlapping region, priming with the reverse complement of the run_tiled_primers primers closest to the end so as to prime toward the edge of the genome.Because this split size is based on the alignment and since dashes in the alignment are not counted in amplicon length, actual amplicons may be substantially shorter than the split sizes. This is likely to happen for poorly aligning regions or regions in which there are insertions or deletions in a subset of the sequences. To compensate for this, one should select s that is larger than the actual amplicon lengths desired, particularly if the length of the MSA is much larger than the average genome length.Run_tiled_primers labels each overlapping region as #part, where # indicates the order of the regions, for example, 0part, 1part, and 2part are the three regions shown in Figure1. For each region, sets of conserved, degenerate primers are designed to ensure amplification of all the targets, if possible, given the primer specifications.The primers can be run in separate singleplex reactions for each split region, or, alternatively, primers for all regions can be combined in a large multiplex after the large set is checked for primer dimers that could occur between primers from different regions. Combining primers for all regions in multiplex should facilitate whole genome amplification in a single reaction. It may yield longer amplicons from the reaction of forward and reverse primers from different parts (FP from 0part reacting with RP from 1part gives product ~2 times the split size), depending on the polymerase processivity and the duration of the extension step, and should facilitate assembly across amplified regions. This helps alleviate cases where a primer cannot be found for one part in an outlier genome due toT
m, homopolymers, primer dimer Δ
G, and so forth, since primers from different parts may amplify across the region. However, since primers of overlapping regions can also produce amplicons shorter (less than x bp) than the desired amplicon of length between s
-
x and s bp (e.g., RP of 0part with the FP from 1part), a step to remove short amplicons before sequencing may be desired. In our experimental test with MHV, the primers from parts 0, 2, and 4 were combined in one reaction and the primers from parts 1 and 3 were combined in another, so that short products would not be produced.We used the script simulate_PCR.pl (https://sourceforge.net/projects/simulatepcr/ [7]) to predict all PCR amplicons from the multiplex degenerate primers compared to the target sequences and to the NCBI nt database. This script is run automatically from the run_tiled_primers code after it predicts primers. It is set to predict amplicons up to twice the maximum amplicon length specified by the user.
## 2.2. Computational Examples
Computationally predicted tiled primer sets were generated for the viruses and primer specifications provided in Table1. MSAs were created with MUSCLE [5]. Two settings of split size s and overlap size x were used: long amplicons with s
=
10,000, x
=
500; or short amplicons of s
=
3000, x
=
500. The choice of which set to use could depend upon the product lengths the polymerase can amplify and the duration of the extension step of PCR. These fairly long amplicons are provided as theoretical examples. Users may run run_tiled_primers with shorter amplicons (e.g., s
=
400 bp) to divide the MSA into many more parts. One amplicon per target sequence per region was desired (PriMux option file with - primer_selection_iterations = 1). Table 1 shows the average genome or segment length, the number of genomes available for each target, the % consensus among those sequences, and the total number of primers to amplify all overlapping regions of all genomes. All products from the nt database under 7800 bp (shorter amplicon) or 26 kb (longer amplicon) were predicted with simulate_PCR to identify potential amplification of nontarget organisms (Tables 3 and 4).Table 3
Number of nontarget amplicons predicted in a multiplex reaction of tiled primers for 3 kb amplicons. In a multiplex of the 3 kb-amplicon tiled primers for a given organism, of the possible reactions producing products, only a small number of primer combinations are predicted to amplify regions in nontarget organisms. Counts show the number of unique primer combinations in a multiplex that yield products for any sequence in the NCBI nt nucleotide database. The numerator is for any nontarget organism in nt and the denominator is for any target or nontarget organism in nt, that is, nonspecific/total of the possible primer combinations in the multiplex predicted to yield product when compared against nt. Vastly more amplicons are produced from target organisms, indicating any contaminating nontarget species should be a small minority of amplified product.
Organism
Nontarget amplicons/total amplicons
Nontarget amplicon source organism
CCHF_S
0/160
—
CCHF_M
0/1934
—
CCHF_L
0/3753
—
RVF_S
0/137
—
RVF_M
0/356
—
RVF_L
0/753
—
Ebola
1/2657
Zea mays clone BAC ZMMBBb0342E21
Marburg
0/1511
—
Hendra
0/206
—
Nipah
0/286
—
Junin_L
0/69
—
Machupo_L
0/153
—
Junin_S
0/84
—
Machupo_S
0/32
—
JEV
7/9515
RocioWest Nile
NW_Arena_S
56/1543
IppyLassa Luna Lymphocytic choriomeningitis Mobala Mopeia
NW_Arena_L
0/819
—
OW_Arena_S
73/2509
AllpahuayoAmapari Bear canyon Chapare Cupixi Dandenong Flexal Guanarito Junin LatinoLujoMachupo
Methylococcus capsulatus str. BathParanaPiritalSabiaTamiamiWhitewater Arroyo
OW_Arena_L
1/1826
Dandenong
TBEV
0/4925
—Table 4
Number of nontarget amplicons predicted in a multiplex reaction of tiled primers for 10 kb amplicons. As in Table3, but for the multiplexes of the 10 kb-amplicon tiled primers.
Organism
Nontarget amplicons/total amplicons
Nontarget amplicon source organism
CCHF_S
0/160
—
CCHF_M
0/261
—
CCHF_L
0/253
—
RVF_S
0/137
—
RVF_M
0/487
—
RVF_L
0/195
—
Ebola
0/534
—
Marburg
0/123
—
Hendra
0/50
—
Nipah
0/74
—
Junin_L
0/12
—
Machupo_L
0/7
—
Junin_S
0/95
—
Machupo_S
0/32
—
JEV
0/1554
—
NW_Arena_S
1/337
Human chromosome 14 BAC C-2555K7 of library CalTech-D
NW_Arena_L
0/86
—
OW_Arena_S
0/316
—
OW_Arena_L
0/131
—
TBEV
0/189
—
## 2.3. Murine Hepatitis Virus Example
Run_tiled_primers was used to design primers for selected regions of the coronavirus murine hepatitis virus (strain MHV-1) genome following passage in the lab, for a separate project in which deep sequencing of selected regions following lab passage was performed. In other work attempting to amplify passaged RNA viruses, finding robust primers based on the original genome was difficult due to mutations which modified primer binding sites [8]. It was hoped that run_tiled_primers would help avoid selecting primers in mutational hotspots by taking into account strain variation across multiple available genomes for the species, since run_tiled_primers seeks maximally conserved primers in the available sequences.Input to run_tiled_primers was an alignment of 22 MHV genomes (genome identities provided as supplementary information) created using MUSCLE [5]. Regions tiled were the Nsp1, Nsp3, Nsp14, and several genes at the 3′ end of the genome (regions file provided in supplementary information), using the primer parameters in Table 2. Primer sets were predicted to produce overlapping amplicons for these regions from all MHV genomes, and a subset of primers predicted to amplify the MHV-1 or MHV strain JHM genome was selected. Some primers that were predicted to amplify the JHM strain but not the MHV-1 strain were included in the multiplex, to check for possible evolutionary change of the original sequence toward the annotated reference JHM sequence or cross reactions with primer-genome mismatches.Samples from MHV-1 infected mice were provided by Dr. Richard Bowen at Colorado State University. The MHV-1 strain used to infect the mice was obtained from American Type Culture Collection (Manassas, VA) and viral stock was propagated in murine fibroblast 17Cl-1 cells then used to infect C3H mice via intranasal route. Mice were sacrificed four days after inoculation and bronchoalveolar lavage (BAL) fluid was collected. RNA was extracted from the BAL samples using Invitrogen TRIZOL reagent, as per the manufacturer’s instructions. RNA was converted to cDNA using Superscript III (Invitrogen) and random hexamers according to the manufacturer’s protocol.Multiplexed primer sets were designed to cover the Nsp3 and 3′ genes with 3 primer pairs per genomic region amplified when possible (total number of primers tested in two multiplex reactions was 53, Table S1). The primers were tested in the lab first by testing the primer pairs in individual reactions then as multiplexed reactions. No effort was made to optimize the PCR cycling conditions. RT-PCR conditions were as follows: reverse transcription was performed using random hexamers and the Superscript III RT reverse transcriptase kit (Invitrogen). The MHV-1 cDNA templates were amplified using the Q5 Hot Start High-Fidelity DNA Polymerase kit (New England BioLabs, Ipswich, MA), following manufacturer’s instructions. PCR conditions consisted of 98°C for 30 s, followed by 35 cycles of 98°C for 10 s, 60°C for 20 s, and 72°C for 1 min. The final cycle was 72°C for 2 min.Two multiplex reactions were set up with each containing a group of nonoverlapping primer sets (Figure2). For example, multiplex “A” included primer sets A, C, E, G, and I and multiplex “B” had primer sets B, D, F, and H. By staggering the primer sets into different multiplex reactions, the amplification of overlapping primer regions created by the reverse primer from one set with the forward primer of the overlapping, adjacent primer set was eliminated. 
Without this strategy, these overlapping primer sets would dominate the PCR reaction due to the small size of their amplicons.

Figure 2
Diagram of the murine hepatitis virus (MHV) genome regions for which primer sets were tested. The approximate position of each region amplified by primer sets is shown (the MHV genome is not drawn to scale). Each multiplex reaction consisted of primer sets that do not overlap in the regions amplified. Each region is amplified using 3 forward primers and 3 reverse primers (Table S1; see Supplementary Material available online at http://dx.doi.org/10.1155/2014/101894). For example, the A primer set consists of 3 forward primers (A1F, A2F, and A3F) and 3 reverse primers (A1R, A2R, and A3R). To verify that each region was amplified in the multiplex reaction, a second set of seminested PCRs was performed using the amplicons from the multiplex reaction as a template. For example, to ensure region A was amplified, the PCR product from the A mix multiplex was diluted 1 : 10,000 and used as template in a PCR reaction with the AR1 primer paired with BF2 (Table S2). Primers are labeled according to genome region (A–I) and primer direction (F = forward, R = reverse).

The amplification of each primer pair in the multiplex was tested using a seminested PCR strategy to verify that the correct, specific amplicons were being produced from each multiplex of primers for a given region (Figure 2, Table S2). The multiplex PCR products served as templates for PCR reactions with primer pairs that included the reverse primer of one region paired with the forward primer from the downstream adjacent region, to determine whether the template generated from the multiplex was present. To ensure that the PCR product was generated from the multiplex product template rather than genomic DNA carried over from the initial sample, the multiplex product template was diluted 1 : 10,000 or excised from a gel and purified prior to use as a template.
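To make the pooling logic concrete, the following is a minimal sketch of the staggered pool assignment described above; it is an illustration only (not PriMux code) and uses the primer-set names from Figure 2.

```python
# Minimal sketch (not PriMux source code): staggering adjacent tiled
# primer sets into two multiplex pools so that no pool contains two sets
# whose amplified regions overlap. Set names follow Figure 2.
primer_sets = ["A", "B", "C", "D", "E", "F", "G", "H", "I"]  # genome order

# Alternate assignment: even-indexed sets to pool "A", odd-indexed to "B".
pools = {"A": primer_sets[0::2], "B": primer_sets[1::2]}
print(pools)  # {'A': ['A', 'C', 'E', 'G', 'I'], 'B': ['B', 'D', 'F', 'H']}

# Within a pool no two sets are adjacent on the genome, so the short
# product formed by the reverse primer of one set and the forward primer
# of the next set (which would otherwise dominate the reaction) cannot form.
for members in pools.values():
    positions = [primer_sets.index(m) for m in members]
    assert all(b - a >= 2 for a, b in zip(positions, positions[1:]))
```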
## 3. Results and Discussion
All the primers for both (s, x) settings are provided as Supplementary data, as are the predicted amplicon start and end positions in each target genome from a multiplex of the primers for a given viral target set. Tiled amplification of these viruses required from 2 to 116 primers (Table 1). Primers are predicted to be specific to the target organisms for the most part, although not exclusively (Tables 3 and 4). The few cases of off-target amplification come from closely related organisms in the same family, such as Old World (OW) and New World (NW) Arenaviruses or other Flaviviruses amplified by the Japanese encephalitis virus (JEV) multiplex. The three exceptions were a single amplicon of 2830 bp from a BAC clone of Zea mays (maize) from the Ebola 3 kb multiplex, a single amplicon of 3610 bp from Methylococcus capsulatus str. Bath from the OW Arena S segment 3 kb multiplex, and a single amplicon of 851 bp from a human BAC from a library at CalTech. All three of these predicted nontarget amplicons result from a single primer in each of those reactions performing as both forward primer (FP) and reverse primer (RP). Nonetheless, the primer multiplexes described here should strongly favor the preferential enrichment of desired targets.

Deriving each primer set required a multiple sequence alignment and a call to run_tiled_primers in the current PriMux software distribution (http://sourceforge.net/projects/primux/). In comparison, primer design with the JCVI pipeline for any of these target sets would require the following steps: (1) inspection of a phylogeny for the full target set to build multiple smaller clade-level sets with no more than 10% sequence variation, (2) realignment of the clade-level sets, (3) running of the JCVI pipeline on each clade set, (4) assessing which target sequences are not amplified after one design round and rerunning the pipeline on those sequences for each clade, and (5) repeating step 4 until all target sequences are predicted to be amplified.
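To make the contrast concrete, here is a runnable toy of the iterate-until-covered logic in steps (4) and (5), under heavy simplification: "primer design" is reduced to picking the k-mer present in the most remaining sequences, and "amplification" to an exact k-mer match. Neither the functions nor the genome strings correspond to the actual PriMux or JCVI pipeline code.

```python
# Runnable toy of the iterate-until-covered logic described above.
# Everything is deliberately simplified and illustrative only; the
# genome strings are invented.
from collections import Counter

def most_shared_kmer(seqs, k=8):
    # Count, for each k-mer, how many sequences contain it at least once,
    # and return the most widely shared one (a crude stand-in for a
    # "maximally conserved primer").
    counts = Counter(kmer for s in seqs
                     for kmer in {s[i:i + k] for i in range(len(s) - k + 1)})
    return counts.most_common(1)[0][0]

def amplified(seq, primers):
    return any(p in seq for p in primers)

def iterative_design(seqs, k=8):
    # Redesign on the not-yet-covered sequences until every sequence is
    # predicted to be "amplified" by at least one primer.
    primers, remaining = [], list(seqs)
    while remaining:
        primers.append(most_shared_kmer(remaining, k))
        remaining = [s for s in remaining if not amplified(s, primers)]
    return primers

genomes = ["ATTGCACGGTACGAA", "ATTGCACGGTTCGAA", "CGTTGCACGGATCGA"]
print(iterative_design(genomes))  # e.g., ['TTGCACGG'] covers all three
```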
## 4. MHV Results
Multiplexed primers were tested in the lab first as primer pairs in individual reactions and then as multiplexed reactions. Twenty-two of the primer pairs worked; four failed to give a product and were paired with other primers in subsequent testing or, if necessary, replaced with an alternative primer. Amplicons were detected in the expected size ranges, confirming amplification of the expected regions from the multiplexed sets (Figure S1). In some cases extra bands were present, but they were generally smaller than the targeted size; this was common when the template cDNA was obtained from a clinical sample rather than from the high-titer, cell-culture-derived viral stock used in this study. The PCR products generated with these highly multiplexed assays and a high-fidelity polymerase were then sequenced using Illumina ultradeep sequencing. These primers yielded high coverage, averaging 150,000x, of the genomic regions amplified by the multiplexed primers.
## 5. Conclusions
Software is described to generate tiled, multiplexed, and degenerate amplification primers spanning entire genomes or genomic regions of many variant sequences. This tool should facilitate the amplification of overlapping products across whole genomes or user-specified regions of target sets with high levels of variation. Applications include target enrichment for the discovery of new members of a viral family from a complex host background, improved high-throughput sequencing sensitivity and coverage of a rapidly evolving virus, and enriched coverage of variants in a gene family.
---
*Source: 101894-2014-08-03.xml* | 2014 |
# Immunology and Immunodiagnosis of Cystic Echinococcosis: An Update
**Authors:** Wenbao Zhang; Hao Wen; Jun Li; Renyong Lin; Donald P. McManus
**Journal:** Clinical and Developmental Immunology
(2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/101895
---
## Abstract
Cystic echinococcosis (CE) is a cosmopolitan zoonosis caused by the larval cystic stage of the dog tapeworm Echinococcus granulosus. This complex multicellular pathogen produces various antigens which modulate the host immune response and promote parasite survival and development. The recent application of modern molecular and immunological approaches has revealed novel insights on the nature of the immune responses generated during the course of a hydatid infection, although many aspects of the Echinococcus-host interplay remain unexplored. This paper summarizes recent developments in our understanding of the immunology and diagnosis of echinococcosis, indicates areas where information is lacking, and suggests possible new strategies to improve serodiagnosis for practical application.
---
## Body
## 1. Introduction
Two neglected parasitic diseases, of both medical and public health importance, are cystic echinococcosis (CE) and alveolar echinococcosis (AE), caused by Echinococcus granulosus (Eg) and E. multilocularis, respectively. CE is a near-cosmopolitan zoonosis and is responsible for most of the burden of echinococcosis globally [1], although AE is endemic in Europe [2, 3] and is problematic in China [4–6].

The immunology and serodiagnosis of echinococcosis have been reviewed previously [7–10]. In this review, we summarize the general consensus on the immunology and immunodiagnosis of CE and reinforce previous findings with observations from some recent studies.

The Echinococcus organisms have a complex life cycle involving two hosts, a definitive carnivore host and an intermediate herbivore host. Intermediate hosts become infected by ingesting the parasite’s eggs, which are released in the faeces of definitive hosts. The eggs hatch in the gastrointestinal tract and become activated larvae which penetrate the intestinal wall and enter the bloodstream, eventually locating in internal organs where they develop into hydatid cysts.

Hydatid cysts of E. granulosus develop in the internal organs of humans and intermediate hosts (herbivores such as sheep, horses, cattle, pigs, goats, and camels) as unilocular fluid-filled bladders. These consist of two parasite-derived layers, an inner nucleated germinal layer and an outer acellular laminated layer, surrounded by a host-produced fibrous capsule formed as a consequence of the host immune response [10]. Brood capsules and protoscoleces bud off from the germinal membrane. Carnivores such as dogs, wolves, and foxes act as definitive hosts. Sexual maturity of adult E. granulosus occurs in the host’s small intestine within 4 to 5 weeks of ingesting offal containing viable protoscoleces. Gravid proglottids or released eggs are shed in the feces. An intermediate host is infected by taking an egg or eggs orally.

The intermediate host mounts a significant immune response against E. granulosus infection [10]. However, the parasite has developed highly effective strategies for escaping the host defences and avoiding clearance. These mechanisms can be classified as antigenic mimicry, antigenic depletion, antigenic variation, immunologic indifference, immunologic diversion, and immunologic subversion [10]. Understanding how these immune responses are produced has been of fundamental importance in developing immunodiagnostic kits and highly effective recombinant vaccines against E. granulosus infection.

There are three significant features of E. granulosus infection. (1) The parasite uses a large number of different mammalian species as intermediate hosts, and additional species can quickly become adapted as new intermediate hosts with the production of highly fertile cysts. Examples are Australian marsupials, which became highly susceptible to CE after E. granulosus was introduced into Australia at the time of European settlement [11] and which now play a major role in the transmission of CE on this continent [12, 13]. (2) The resulting chronic cyst-forming disease in the intermediate host is characterized by long-term growth of the metacestode (hydatid) cysts in internal organs, for as long as 53 years [14]. (3) The unilocular fluid-filled cysts can be located in most organs, with about 70% found in the liver and 20% in the lungs, the remainder involving other organs such as the kidney, spleen, brain, heart, and bone.
These distinct features, combined with the multicellular nature of E. granulosus, make CE a good general model for studying the immunology of chronic infections.

Cysts of E. granulosus can grow to more than 20 cm in diameter in humans, but the clinical manifestations are generally mild, and infections remain asymptomatic for a considerable period. Consequently, serodiagnostic tools are important for screening populations at high risk of infection.
## 2. Host Immune Responses to Hydatid Infection
### 2.1. Antibody Responses
The earliest immunoglobulin (Ig) G response to CE hydatid cyst fluid and oncospheral antigens appears after 2 and 11 weeks, respectively, in mice and sheep challenged with eggs or oncospheres of E. granulosus [15, 16]. These antioncospheral antibodies play a major role in parasite killing and are central to the protective immune response against E. granulosus [17]. Although antibody levels against the oncosphere are low [15] in the early stages of infection, the parasite-killing mechanisms may involve antibody-dependent cell-mediated cytotoxicity reactions [18, 19].

In the chronic phases of CE, elevated antibody levels occur frequently, particularly IgG, IgM, and IgE [20–24], with the IgG1 and IgG4 subclasses of IgG being predominant [21, 25–29]. This antibody production is essential for the development of serodiagnostic tests.

About 30–40% of patients are antibody-negative for CE. In many of these patients, however, varying levels of circulating antigens (CAg) and circulating immune complexes (CIC) are measurable [30]. This phenomenon suggests that B cell activity and proliferation may be regulated and inhibited by E. granulosus antigens. It is not known whether these antigens target B cells directly or act via T cell regulatory mechanisms.
### 2.2. Cellular Responses and Th2 Regulation
During the early stages of an echinococcal infection, there is a marked activation of cell-mediated immunity, including cellular inflammatory responses and pathological changes [10, 31]. Cellular infiltration of eosinophils, neutrophils, macrophages, and fibrocytes occurs in human [32, 33] and sheep [34] infections. However, this generally does not result in a severe inflammatory response, and aged cysts tend to become surrounded by a fibrous layer that separates the laminated cystic layer from host tissue.

There are very few reports on T cell cytokine profiles in an early primary (oral challenge with eggs) E. granulosus infection. Infection with E. multilocularis eggs induced low levels of interferon- (IFN-) gamma, IL-2, and IL-4 at the beginning and high levels at the end of the infection [35, 36], and a similar immune profile in the early stage of CE infection is likely.

Given the recent advances in understanding the immunoregulatory capabilities of helminthic infections, it has been suggested that Th2 responses play a crucial role in chronic helminthiasis [37]. However, a remarkable feature of chronic CE infection is the coexistence of IFN-gamma, IL-4, and IL-10 at high levels in human echinococcosis [38]. It is unclear why hydatid infection can induce high levels of both Th1 and Th2 cytokines [39], since they usually downregulate each other [40]. The antigens released, and the amounts in which they are released, may play key roles. For instance, E. granulosus antigen B skewed Th1/Th2 cytokine ratios towards a preferentially immunopathology-associated Th2 polarization, predominantly in patients with progressive disease [41].

The role of IL-10 in chronic infection largely remains unclear, although one report showed that IL-10 may impair the Th1 protective response and allow the parasite to survive in hydatid patients [42]. The interaction of the Echinococcus organisms with their mammalian hosts may provide a highly suitable model for addressing some of the fundamental questions remaining, such as the molecular basis underpinning the different effects of IL-10 on different cell types, the mechanisms regulating IL-10 production, the inhibitory role of IL-10 on monocyte/macrophage and CD4 T cell function, its involvement in stimulating the development of B cells and CD8 T cells, and its role in the differentiation and function of T regulatory cells.
#### 2.2.1. Correlation of Cytokines with Antibody Production
Studies in mouse models in which cytokines were overexpressed from cytokine expression vectors showed that IL-12 and IFN-gamma induce a parasite-specific IgG2a response in mice infected with protoscoleces of E. granulosus, whereas in IL-4-gene-transfected mice IgG1 was elevated, indicating that the IgG1 and IgG2 antibody isotypes are regulated by Th2 and Th1 cytokines, respectively [43].

In patients with relapsing disease or with viable, growing cysts, IgG1 and IgG4 are elevated and maintained at a high level [21, 44], whereas lower levels of IFN-gamma are produced by peripheral blood mononuclear cells (PBMC) in vitro compared with patients with a primary infection [45, 46]. For some relapsed cases, IFN-gamma was undetectable in patient sera [47], whereas the concentrations of specific IgG1 and IgG4 declined in cases characterized by cyst infiltration or calcification [44].

This indicates that the IgG4 antibody response is also associated with cyst development, growth, and disease progression, whereas IgG1, IgG2, and IgG3 responses occur predominantly when cysts become infiltrated or are destroyed by the host [21].
#### 2.2.2. T Cell Profile, Cyst Progression, and Efficacy of Treatment
The polarized Th2 cell response is a significant feature of the chronic stage of Echinococcus infection and is modulated by the developmental status of the hydatid cyst. In vitro T cell stimulation showed that cell lines from a patient with an inactive cyst had a Th1 profile, while T-cell lines derived from patients with active and transitional hydatid cysts had mixed Th1/Th2 and Th0 clones [48]. When CE patients were treated with albendazole/mebendazole, a Th1 cytokine profile, rather than a Th2 profile, typically dominated, indicating that Th1 responses have a role in the process of cyst degeneration [46].

Mice injected with a vector expressing IL-4 displayed a six times higher cyst load than control mice [43], indicating that IL-4 plays an important role in hydatid cyst development in the mammalian host.

Cytokine analysis of 177 CE patients showed that Th1 cytokines were related to disease resistance, whereas Th2 cytokines were associated with disease susceptibility and chronicity [38]. Both in vitro and in vivo studies have shown that high levels of the Th1 cytokine IFN-gamma were found in patients who responded to chemotherapy, whereas high levels of Th2 cytokines (IL-4 and IL-10) occurred in patients who did not [46, 49–51], indicating that IL-10/IL-4 impairs the Th1 resistance response, allowing E. granulosus to survive [42, 52].

Self-cure of CE is common in sheep [53], and it most likely also happens in human populations in hyperendemic areas, since patients with calcified cysts are reported [54, 55]. It would be of value to examine the T cell profiles of these self-cured patients, as this may impact on future treatment approaches and vaccine development.
#### 2.2.3. Dendritic Cells
More recently, studies have focused on dendritic cells (DC) and their regulation of other immune responses in CE. E. granulosus antigens influence the maturation and differentiation of DC stimulated with lipopolysaccharide (LPS) [56]. This includes downmodulation of CD1a expression and upregulation of CD86 expression, a lower percentage of CD83(+) cells present, and downregulation of interleukin-12p70 (IL-12p70) and TNF alpha [57]. In addition, hydatid cyst fluid (HCF) modulates the transition of human monocytes to DC, impairs secretion of IL-12, IL-6, or PGE2 in response to LPS stimulation, and modulates the phenotype of cells generated during culture, resulting in increased CD14 expression [56].

HCF antigen B (AgB) has been shown to induce IL-1 receptor-associated kinase phosphorylation and activate nuclear factor-kappa B, suggesting that Toll-like receptors could participate in E. granulosus-stimulated DC maturation [57]. E. multilocularis infection in mice induced DC expressing high levels of TGF and very low levels of IL-10 and IL-12, and the expression of the surface markers CD80, CD86, and CD40 was downregulated [58, 59]. However, the higher level of IL-4 than of IFN-gamma/IL-2 mRNA expression in AE CD4+ pe-T cells indicated that DC play a role in the generation of a regulatory immune response [59].

Different E. multilocularis antigens have been shown to stimulate different DC expression profiles. The Em14-3-3 antigen induced CD80, CD86, and MHC class II surface expression, but Em2(G11) failed to do so. Similarly, LPS and Em14-3-3 yielded elevated IL-12, TNF alpha, and IL-10 expression levels, while Em2(G11) did not. The proliferation of bone marrow DC isolated from AE-diseased mice was abrogated [60], indicating that E. multilocularis infection triggered unresponsiveness in T cells.
#### 2.2.4. Summary of Immunological Responses in Echinococcosis and Directions for Further Study
Human helminth infections exhibit many immune downregulatory characteristics, with affected populations showing lower levels of immunopathological disease in cohort studies of allergy and autoimmunity. Model system studies have linked helminth infections with marked expansion of populations of immunoregulatory cells, such as alternatively activated macrophages, T regulatory cells (Tregs), and regulatory B cells [37].

In the established Echinococcus cystic stage, the typical response, in both humans and animals, is of the Th2 type and involves the cytokines IL-4, IL-5, IL-10, and IL-13, the antibody isotypes IgG1, IgG4, and IgE, and expanded populations of eosinophils, mast cells, and alternatively activated macrophages [10, 31]. The precise role of Th2 responses in parasitic infections is still not very clear. It is likely that E. granulosus controls the dialogue between cells of the immune system through the release of antigens which induce Th2 responses and suppress others involving regulatory T and B cells. Th2 responses are significantly associated with chronic infection and may regulate the establishment of the parasite. More details are needed of the regulation by Th2 cytokines of antibody production, echinococcal cyst growth, and the efficacy of treatment. The role of the antibody responses in the host-parasite interaction and chronic infection remains unknown in CE.

It has been shown that in vivo depletion of DC inhibits the induction of a Th2 immune response in chronic helminth infection and that DC alone can drive Th2 cell differentiation [37]. It is not known which DC signals induce the Th2 differentiation programme in naïve T cells [61], but CE represents a good model to address this issue.

In addition, a number of other critical questions remain that are important for studying the role of Treg cells in the chronic infection resulting from echinococcosis, such as whether Treg cells are present at greater frequencies in echinococcal infections, as in other infections [62, 63], whether Echinococcus can expand Treg cell populations, and whether the parasites secrete factors which can directly induce the conversion of naïve T cells into functional Treg cells. There are no studies in echinococcosis on regulatory B cells, populations of B cells that downregulate immune responses and that are most often associated with production of the immunosuppressive cytokine IL-10.

Moreover, many allergic and autoimmune inflammatory conditions can be ameliorated by a range of different helminth infections [64–66], so the question arises: can echinococcal infection reduce allergic conditions?
## 3. Serological Diagnosis
The typically asymptomatic nature of the infection in its early stages, and for a long period after establishment, makes early diagnosis of echinococcosis in humans difficult. Physical imaging to diagnose CE infection is usually used in the late stages of infection. Early diagnosis of CE by serology may, therefore, provide opportunities for early treatment and more effective chemotherapy. Another practical application of serology in human echinococcosis is the follow-up of treatment.

Although hydatid disease is a largely asymptomatic infection, the host does produce detectable humoral and cellular responses against it. Measurement of these responses is a prerequisite for developing effective serodiagnostic tools.
### 3.1. Antibody Detection
Infection with larval cysts of Echinococcus in humans and intermediate animal hosts results in a specific antibody response, mainly of the IgG class, accompanied by detectable IgM, IgA, and IgE antibodies in some patients [9, 31, 76, 77].

In terms of methodology, almost all serological tests developed for the immunodiagnosis of human CE have incorporated the detection of antibodies. There are considerable differences between the various tests in both specificity and sensitivity. As the sensitivity of a test increases, so generally does the demand for improved antigens, so that sufficient specificity can be achieved to take advantage of the greater sensitivity. An optimum test should combine high specificity with high sensitivity. Insensitive and nonspecific assays, including the Casoni intradermal test, the complement fixation test (CFT), the indirect haemagglutination (IHA) test, and the latex agglutination (LA) test, have been replaced by the enzyme-linked immunosorbent assay (ELISA), the indirect immunofluorescence antibody test (IFAT), immunoelectrophoresis (IEP), and immunoblotting (IB) in routine laboratory application [78].

A comparison of the diagnostic sensitivity and specificity of IEP, ELISA, and IB in detecting IgG antibodies in patient sera to native and recombinant AgB and a hydatid fluid fraction (HFF) showed that HFF-IB gave the highest sensitivity (80%), followed by ELISA (72%) and IEP (31%). The diagnostic sensitivity decreased significantly as cysts matured (from type I-II to type VII, classified by ultrasound). Recombinant and native AgB-IB yielded similar levels of sensitivity (74%), but a large proportion of clinically or surgically confirmed CE patients (20%) were negative. In these patient sera, IB to assess the usefulness of another recombinant E. granulosus molecule (elongation factor-1 beta/delta) in detecting IgE antibodies yielded a positivity of 33%. Serological tests developed for determining anti-Echinococcus IgE in serum usually express results qualitatively or semiquantitatively, in titres or in units specific for the test kit [20, 79, 80].

The serodiagnostic performance of a range of different antigens and the various methods available for immunodiagnosis have been reviewed in depth [10, 31]. Some recent studies are referred to in Table 1, with the sensitivity and specificity of individual tests listed. Some antigens, such as native AgB and its recombinant proteins, yielded reasonable diagnostic performance using panels of sera from clinically confirmed cases of echinococcosis and other helminth infections. However, when the antigens were used for screening human populations in hyperendemic communities, they showed high seropositivity rates, although these rates had a low correlation with ultrasound monitoring of individual subjects [81].

Table 1
Characteristics of assays using different antigens from E. granulosus developed after 2003 for the immunodiagnosis of cystic echinococcosis.
| CE (n) | Healthy controls (n) | Other diseases (n) | Antigen | Assay method | Sensitivity (%) | Specificity (%) | Ig isotype | Refs. |
|---|---|---|---|---|---|---|---|---|
| 44 | — | 43 | 8 kDa | WB | 47.7 | 51.2 | IgG | [67] |
| 44 | — | 43 | 16 kDa | WB | 45.5 | 67.4 | IgG | [67] |
| 44 | — | 43 | 24 kDa | WB | 68.2 | 62.8 | IgG | [67] |
| 36 | 36 | — | AgB | ELISA | 91.7 | 97.2 | IgG | [68] |
| 102 | 95 | 68 | rAgB1 | ELISA | 88.2 | 80.9 | IgG | [69] |
| 102 | 95 | 68 | rAgB2 | ELISA | 91.2 | 93 | IgG | [69] |
| 875 | 57 | 39 | AgB | Dot-WB | 68.4 | 93.4 | IgG | [70] |
| 857 | 57 | 39 | AgB | ELISA | 57.4 | 93.4 | IgG | [70] |
| 324 | 70 | 500 | AgB | WB | 86.4 | 92 | IgG | [71] |
| 155 | 110 | 58 | ? | ELISA | 73.6 | 99.1 | IgE | [72] |
| 155 | 110 | 58 | ? | ELISA | 90.3 | 90.9 | IgG | [72] |
| 155 | 110 | 58 | HCF | WB | 90.1 | 94.5 | IgG | [72] |
| 324 | 70 | 500 | EpC1 | WB | 88.7 | 95.6 | IgG | [71] |
| 95 | 37 | — | HSP20 | — | 64 | — | IgG1, IgG4 | [73] |
| 97 | 37 | 58 | Eg19 | WB | 10 | 100 | IgG | [74] |
| 102 | 95 | 68 | E14t | ELISA | 35.3 | 91.7 | IgG | [69] |
| 102 | 95 | 68 | C317 | ELISA | 58.8 | 80.9 | IgG | [69] |
| 60 | — | — | P5 | WB | 97 | 11 | — | [75] |

ELISA: enzyme-linked immunosorbent assay; WB: western blotting; dELISA: dot enzyme-linked immunosorbent assay.

Recently developed dipstick assays [82] are considered to be valuable methods for CE serodiagnosis. One dipstick assay exhibited 100% sensitivity and 91.4% specificity when tested on sera from 26 CE patients and sera from 35 subjects with other parasitic infections, using camel hydatid cyst fluid as antigen [83]. Since the dipstick assay is extremely easy to perform, with a visually interpretable result within 15 min, in addition to being both sensitive and specific, the test could be an acceptable alternative for use in clinical laboratories lacking specialized equipment or the technological expertise needed for western blotting or ELISA. Similarly, a new 3-minute rapid dot immunogold filtration assay (DIGFA) for serodiagnosis of human CE and AE has been developed using four native antigen preparations: crude and partially purified hydatid cyst fluid extracts from E. granulosus (EgCF and AgB), E. granulosus protoscolex extract (EgP), and E. multilocularis metacestode antigen (Em2) [70]. Like the dipstick assay, the test incorporates a simple eye-read colour change and achieved an overall sensitivity of 80.7% for human CE and 92.9% for human AE in a hospital diagnostic setting [70]. These rapid tests can be used both for clinical diagnostic support and, combined with ultrasound, for mass screening in areas endemic for CE and AE.

Standardization of techniques and antigenic preparations and the characterization of new antigens are urgently required to improve the performance of hydatid immunodiagnosis. Antigens used in current tests are either cyst fluid or crude homogenates of the parasite collected from domestic animals. However, the supply of antigenic sources can often be limited, even for laboratory use. Since the preparation of purified echinococcal antigens relies on the availability of parasitic material, and the quality control of this material is difficult to standardize for large-scale production, this can impact substantially on the sensitivity and specificity of the available immunodiagnostic tools.
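As a reminder of how the sensitivity and specificity columns in Table 1 are derived from raw test counts, here is a minimal sketch; the counts used are hypothetical illustrations, not data from any study cited in this review.

```python
# Minimal sketch: computing the sensitivity and specificity columns of
# Table 1 from raw test counts. The counts below are hypothetical
# illustrations, not data from any study cited in this review.

def sensitivity(true_pos, false_neg):
    # Fraction of confirmed CE patients the assay detects.
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    # Fraction of non-CE subjects (healthy or other diseases) testing negative.
    return true_neg / (true_neg + false_pos)

# Hypothetical example: 102 CE sera of which 90 test positive,
# 163 control sera (healthy + other diseases) of which 150 test negative.
print(f"sensitivity = {sensitivity(90, 12):.1%}")   # 88.2%
print(f"specificity = {specificity(150, 13):.1%}")  # 92.0%
```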
### 3.2. Antigen Detection
Antibody detection is likely to indicate exposure to an Echinococcus infection, but it may not necessarily indicate the presence of an established and viable infection, or of disease. Serum antibodies may persist for a prolonged period, up to 10 years after hydatid cyst removal [84]. In addition, the degree of antibody response may be related to the location and condition of a mature hydatid cyst. For instance, hydatid cysts in the human lung, spleen, or kidney tend to be associated with lower serum antibody levels [9]. Furthermore, in Echinococcus-endemic villages, up to 26% or more of the general population may have antibodies to HCF antigens while only about 2% of the villagers have hydatid cysts [81, 85, 86], indicating that antibody levels may not necessarily reflect the true prevalence of CE.

Antigen detection may provide a suitable alternative. Serum antigen detection may also be less affected by hydatid cyst location, and it provides a tool for serological monitoring of antiparasitic therapy [87]. Circulating antigen (CAg) in CE patient sera can be detected using ELISA directly or indirectly; against titrated cyst fluid standards, CAg concentrations have been shown to vary from 100 to 700 ng/mL [88].

Antigen detection assays depend principally on the binding of specific polyclonal or monoclonal antibodies to parasite antigen present in serum or urine. A number of different assays have been developed to detect echinococcal antigens. The standard double antibody sandwich ELISA is a common method for measuring the presence and/or concentration of circulating parasite antigens. In the test, antibody raised to the targeted protein is coated onto a microtiter plate to capture antigen (Figure 1). The same antibody, enzyme labelled, is commonly used in the tertiary layer of the assay. This type of antigen capture therefore relies on the presence of multiple binding sites on the target antigen(s). Efforts to detect CAg in CE patients have been reviewed extensively by Craig et al. [85].

Figure 1
Schematic of ELISA and immuno-PCR for detecting circulating antigen in serum. (a) Sandwich ELISA. (1) The plate is coated with a capture antibody; (2) the serum sample is added, and any antigen present in the serum binds to the capture antibody; (3) the detecting antibody conjugate is added and binds to the antigen; (4) substrate is added and is converted by the enzyme to a detectable form. (b) Direct ELISA. (1) The plate is coated with diluted serum containing antigen; (2) the detecting antibody is added and binds to antigen; (3) an enzyme-linked secondary antibody is added and binds to the detecting antibody; (4) substrate is added and is converted by the enzyme to a detectable form. (c) Capture immuno-PCR. (1) The plate is coated with capture antibody; (2) the serum sample is added; (3) biotinylated detecting antibody is added and binds to antigen; (4) streptavidin and biotinylated reporter DNA are added, and the biotinylated antibody and biotinylated reporter DNA are linked by streptavidin; (5) primers and PCR components are added, and PCR or real-time PCR is undertaken to quantify antigen. (d) Noncapture immuno-PCR. The serum sample is coated onto the plate, and the remainder of the steps are as for the capture immuno-PCR (c).

CAg in serum is normally in the form of a circulating immune complex (CIC), with some in free form. Therefore, the serum needs to be treated with acid buffer or polyethylene glycol (PEG) to release and concentrate the circulating antigens. Acid treatment (0.2 M glycine/HCl) of CE patient serum is a straightforward way to dissociate CIC [85]. In a comparison of acid-treatment and PEG precipitation methods, all the sera of 30 confirmed positive cases of CE had detectable levels of antigen in the acid-treated sera [30], whereas 23 (77%) and 26 (87%) of the 30 confirmed sera had free antigen as well as CIC of an 8 kDa antigen in the untreated and the PEG-precipitated sera, respectively. None of the sera from other patients with parasitic infections or viral hepatitis had detectable levels of the 8 kDa antigen in untreated, acid-treated, or PEG-precipitated serum samples. These investigations, therefore, suggested that demonstrating circulating antigen with monospecific antibodies to the affinity-purified 8 kDa antigen in acid-treated sera is more efficient than detecting free circulating antigen or CIC in untreated or PEG-precipitated sera [89].

IgM CICs tend to be positively associated with active hydatid disease [85, 90]. Combining the measurement of circulating antibody, CICs, and CAg increased detection from 77% to 90% compared with the measurement of serum antibody alone [91]. Antigens in soluble CICs from CE patients have been characterized by separating them on SDS-PAGE [85] or by ion-exchange fast protein liquid chromatography (FPLC) [92]. Both studies indicated a candidate antigen detectable in serum with an approximate relative molecular mass of 60–67 kDa, which is also present in cyst fluid.

Comparison of CAg and IgG antibody detection using ELISA, together with western blotting, showed a relatively low sensitivity (43%) for detection of specific serum antigen in CE, compared with 75% for IgG antibodies [93]. However, the specificity of this CAg ELISA was 90% when tested against sera from AE patients and 100% against human cysticercosis sera. The limited cross-reactivity may make it practical for diagnosing CE in areas where AE and cysticercosis are coendemic.
The advantage of CAg detection is its ability to detect CE in 54–57% of patients who are serum antibody negative [91, 93]. CAg detection does appear, therefore, to be potentially useful as a secondary test for some suspected CE cases where antibody titers are low [85, 94]. A combination of CAg and antibody detection has been shown to increase the sensitivity from 85% (antibody only) to 89% (antibody + CAg) in an ELISA of 115 surgically confirmed hydatid patients, 41 individuals exhibiting other parasitic and unrelated diseases, and 69 healthy subjects [95].

Although it has not yet been applied to echinococcal diagnosis, a technique for antigen detection called the immunopolymerase chain reaction (immuno-PCR) was developed by Sano et al. [96]. It combines the molecular recognition of antibodies with the high DNA amplification capability of PCR. The procedure is similar to conventional ELISA but is far more sensitive and, in principle, could be applied to the detection of single antigen molecules. Instead of an enzyme, a DNA molecule is linked to the detection antibody and serves as a template for PCR (Figure 1). The DNA molecule is amplified, and the PCR product is measured by gel electrophoresis. An improvement of this method is to amplify the DNA fragment by real-time PCR, thereby eliminating post-PCR analysis. Furthermore, real-time PCR is extremely accurate and sensitive, which should make it possible to quantitate very low amounts of DNA-coupled detection antibody with high accuracy.
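To illustrate the arithmetic behind combining assays, here is a minimal sketch of an "either test positive" decision rule, which can only raise sensitivity relative to antibody detection alone; the per-patient results below are hypothetical and merely mirror the 85% to 89% gain reported above.

```python
# Minimal sketch: combining antibody and CAg detection with an
# "either-positive" rule. Patient results are hypothetical illustrations,
# not data from the study cited above.

# 100 hypothetical confirmed CE patients as (antibody_positive, cag_positive):
# 85 are antibody-positive; of the 15 antibody-negative, 4 are CAg-positive.
results = ([(True, True)] * 60 + [(True, False)] * 25
           + [(False, True)] * 4 + [(False, False)] * 11)

ab_only  = sum(ab for ab, cag in results) / len(results)
combined = sum(ab or cag for ab, cag in results) / len(results)
print(f"antibody alone: {ab_only:.0%}")   # 85%
print(f"antibody + CAg: {combined:.0%}")  # 89%
```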
### 3.3. Serodiagnosis: The Future
Almost all available immunodiagnostic techniques, including methods for detecting specific antibodies and circulating parasite antigens in serum or other body fluids, have been applied to diagnosing echinococcosis. However, the tools developed to date are generally applicable only for laboratory research purposes. None of the available diagnostic tools, kits, or methods is generally accepted by clinical physicians. Nevertheless, such serological tools are potentially important for epidemiological studies, confirmation of infection status, treatment follow-up, and the monitoring of control programs, and efforts should continue so that new assays for improved, practical diagnosis of echinococcosis are developed.
Schematic of ELISA and immuno-PCR for detecting circulating antigen in serum. (a) Sandwich ELISA. (1) Plate is coated with a capture antibody; (2) serum sample is added, and any antigen present in the serum binds to the capture antibody; (3) detecting antibody conjugate is added and binds to the antigen; (4) substrate is added, and is converted by the enzyme to a detectable form. (b) Direct ELISA. Plate is coated with diluted serum containing antigen; (2) detecting antibody is added, and binds to antigen; (3) enzyme-linked secondary antibody is added, and binds to detecting antibody; (4) substrate is added and is converted by the enzyme to a detectable form. (c) Capture immuno-PCR. (1) Plate is coated with capture antibody; (2) serum sample is added; (3) biotinylated detecting antibody is added and binds to antigen; (4) Streptavidin and biotinylated reporter DNA are added, and the biotinylated antibody and biotinylated reporter DNA are linked by streptavidin; (5) Primers and PCR components are added and PCR or real-time PCR undertaken to quantify antigen. (d) Non-capture immuno-PCR. Serum sample is coated on the plate and the remainder of the steps are as for the capture-immuno-PCR (C).CAg in serum is normally in the form of a circulating immune complex (CIC) with some in free form. Therefore, the serum needs to be treated with acid buffer or polyethylene glycol (PEG) to release and concentrate the circulating antigens. Acidic treatment (0.2 M glycine/HCl) of CE patient serum is quite straightforward to dissociate CIC [85]. In a comparison of acid-treatment and PEG precipitation methods, all the sera of 30 confirmed positive cases of CE had detectable levels of antigen in the acid-treated sera [30]. However, 23 (77%) and 26 (87%) sera of 30 confirmed cases had free antigen as well as CIC of an 8 kDa antigen in the untreated and in the polyethylene glycol (PEG) precipitated sera, respectively. None of the sera from other patients with parasitic infections or viral hepatitis had any detectable levels of 8 kDa antigen in the untreated, acid-treated, or PEG-precipitated serum samples. These investigations, therefore, suggested that the demonstration of circulating antigen employing monospecific antibodies to affinity purified 8 kDa antigen in acid-treated sera is more efficient than the detection of free circulating antigen or CIC in untreated or in PEG-precipitated sera [89].IgM CICs tend to be positively associated with active hydatid disease [85, 90]. Combining measurement of circulating antibody, CICs, and CAg resulted in an increase from 77% to 90% compared to measurement of serum antibody alone [91]. Antigens in soluble CICs from CE patients have been characterized by separating them on SDS-PAGE [85] or by ion-exchange fast protein liquid chromatography (FPLC) [92]. Both studies indicated a candidate antigen detectable in serum with an approximate relative molecular mass of 60–67 KDa, and which is also present in cyst fluid.Comparison of CAg and IgG antibody using ELISA, together with western blotting, showed a relatively low sensitivity (43%) for detection of specific serum antigen in CE, compared to 75% for IgG antibodies [93]. However, the specificity of this CAg ELISA was 90% when tested against sera from AE patients and 100% against human cysticercosis sera. The limited cross-reactivity may be a way for practical diagnosis of CE in areas where AE and cysticercosis are coendemic. 
The advantage of CAg detection is its high sensitivity for detecting CE in 54–57% of patients who are serum antibody negative [91, 93]. CAg detection does appear, therefore, to be potentially useful as a secondary test for some suspected CE cases where antibody titers are low [85, 94].A combination of CAg and antibody detection has been shown to increase the sensitivity from 85% (antibody only) to 89% (antibody+CAg) in ELISA of 115 surgically confirmed hydatid patients, 41 individuals exhibiting other parasitic and unrelated diseases, and 69 healthy subjects [95].Although there has been no application to date for echinococcal diagnosis, a technique for antigen detection, called immunopolymerase chain reaction (immuno-PCR), was developed by Sano et al. [96]. It combines the molecular recognition of antibodies with the high DNA amplification capability of PCR. The procedure is similar to conventional ELISA but is far more sensitive. And, in principle, could be applied to the detection of single antigen molecules. Instead of an enzyme, a DNA molecule is linked to the detection antibody and serves as a template for PCR (Figure 1). The DNA molecule is amplified and the PCR product is measured by gel electrophoresis. An improvement of this method is to amplify the DNA fragment by real-time PCR, thereby eliminating post-PCR analysis. Furthermore, real-time PCR is extremely accurate and sensitive, which should make it possible to quantitate very low amounts of DNA-coupled detection antibody with high accuracy.
## 3.3. Serodiagnosis: The Future
Almost all available immunodiagnostic techniques, including methods for detecting specific antibodies and circulating parasite antigens in serum or other body fluids, have been applied for diagnosing echinococcosis. However, all the tools developed to date are generally applicable for laboratory research purposes only. None of the available diagnostic tools, kits, or methods are generally accepted by clinical physicians. Nevertheless, such serological tools are potentially important for epidemiological studies, confirmation of infection status, and treatment and the monitoring of control programs, and efforts should continue so that new assays for improved, practical diagnosis of echinococcosis are developed.
---
*Source: 101895-2011-12-25.xml* | 101895-2011-12-25_101895-2011-12-25.md | 63,431 | Immunology and Immunodiagnosis of Cystic Echinococcosis: An Update | Wenbao Zhang; Hao Wen; Jun Li; Renyong Lin; Donald P. McManus | Clinical and Developmental Immunology
(2012) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2012/101895 | 101895-2011-12-25.xml | ---
## Abstract
Cystic echinococcosis (CE) is a cosmopolitan zoonosis caused by the larval cystic stage of the dog tapeworm Echinococcus granulosus. This complex multicellular pathogen produces various antigens which modulate the host immune response and promote parasite survival and development. The recent application of modern molecular and immunological approaches has revealed novel insights on the nature of the immune responses generated during the course of a hydatid infection, although many aspects of the Echinococcus-host interplay remain unexplored. This paper summarizes recent developments in our understanding of the immunology and diagnosis of echinococcosis, indicates areas where information is lacking, and suggests possible new strategies to improve serodiagnosis for practical application.
---
## Body
## 1. Introduction
Two neglected parasitic diseases, of both medical and public health importance, are cystic echinococcosis (CE) and alveolar echinococcosis (AE), caused by Echinococcus granulosus (Eg) and E. multilocularis, respectively. CE is a near-cosmopolitan zoonosis and responsible for most of the burden of echinococcosis globally [1], although AE is endemic in Europe [2, 3] and is problematic in China [4–6].

The immunology and serodiagnosis of echinococcosis have been reviewed previously [7–10]. In this review, we summarize the general consensus of the immunology and immunodiagnosis of CE, and reinforce previous findings with observations from some recent studies.

The Echinococcus organisms have a complex life cycle involving two hosts, a definitive carnivore host and an intermediate herbivore host. Intermediate hosts become infected by ingesting the parasite’s eggs, which are released in the faeces of definitive hosts. The eggs hatch in the gastrointestinal tract and become activated larvae which penetrate the intestinal wall and enter the bloodstream, eventually locating in internal organs where they develop into hydatid cysts.

Hydatid cysts of E. granulosus develop in internal organs of humans and intermediate hosts (herbivores such as sheep, horses, cattle, pigs, goats, and camels) as unilocular fluid-filled bladders. These consist of two parasite-derived layers, an inner nucleated germinal layer and an outer acellular laminated layer, surrounded by a host-produced fibrous capsule formed as a consequence of the host immune response [10]. Brood capsules and protoscoleces bud off from the germinal membrane. Carnivores such as dogs, wolves, and foxes act as definitive hosts. Sexual maturity of adult E. granulosus occurs in the host’s small intestine within 4 to 5 weeks of ingesting offal containing viable protoscoleces. Gravid proglottids or released eggs are shed in the feces. An intermediate host becomes infected by ingesting one or more eggs.

The intermediate host mounts a significant immune response against E. granulosus infection [10]. However, the parasite has developed highly effective strategies for escaping the host defences and avoiding clearance. These mechanisms can be classified as antigenic mimicry, antigenic depletion, antigenic variation, immunologic indifference, immunologic diversion, and immunologic subversion [10]. Understanding how these immune responses are produced has been of fundamental importance in developing immunodiagnostic kits and highly effective recombinant vaccines against E. granulosus infection.

There are three significant features of E. granulosus infection: (1) the parasite uses a large number of different mammalian species as intermediate hosts, and additional species can quickly become adapted as new intermediate hosts producing highly fertile cysts. Examples are Australian marsupials, which became highly susceptible to CE after E. granulosus was introduced into Australia at the time of European settlement [11] and which now play a major role in the transmission of CE on this continent [12, 13]. (2) The resulting chronic cyst-forming disease in the intermediate host is characterized by long-term growth of the metacestode (hydatid) cysts in internal organs, for as long as 53 years [14]. (3) The unilocular fluid-filled cysts can be located in most organs, with about 70% found in the liver and 20% in the lungs; the remainder involve other organs such as the kidney, spleen, brain, heart, and bone.
These distinct features, combined with the multicellular nature of E. granulosus, make CE a good general model for studying the immunology of chronic infections.

Cysts of E. granulosus can grow to more than 20 cm in diameter in humans, but the clinical manifestations are generally mild and infections remain asymptomatic for a considerable period. Consequently, serodiagnostic tools are important for screening populations at high risk of infection.
## 2. Host Immune Responses to Hydatid Infection
### 2.1. Antibody Responses
The earliest immunoglobulin (Ig) G response to CE hydatid cyst fluid and oncospheral antigens appears after 2 and 11 weeks, respectively, in mice and sheep challenged with eggs or oncospheres of E. granulosus [15, 16]. These antioncospheral antibodies play a major role in parasite killing and are central to the protective immune response against E. granulosus [17]. Although antibody levels against the oncosphere are low [15] in the early stages of infection, the parasite killing mechanisms may involve antibody-dependent cell-mediated cytotoxicity reactions [18, 19].

In the chronic phases of CE, there is frequent occurrence of elevated antibody levels, particularly IgG, IgM, and IgE [20–24], with the IgG1 and IgG4 subclasses being predominant [21, 25–29]. This antibody production is essential for the development of serodiagnostic tests.

About 30–40% of patients are antibody-negative for CE. In many of these patients, however, varying levels of circulating antigens (CAg) and circulating immune complexes (CIC) are measurable [30]. This phenomenon suggests that B cell activity and proliferation may be regulated and inhibited by E. granulosus antigens. It is not known whether these antigens target B cells directly or act via T cell regulatory mechanisms.
### 2.2. Cellular Responses and Th2 Regulation
During the early stages of an echinococcal infection, there is a marked activation of cell-mediated immunity, including cellular inflammatory responses and pathological changes [10, 31]. Cellular infiltration of eosinophils, neutrophils, macrophages, and fibrocytes occurs in human [32, 33] and sheep [34] infections. However, this generally does not result in a severe inflammatory response, and aged cysts tend to become surrounded by a fibrous layer that separates the laminated cystic layer from host tissue.

There are very few reports on T cell cytokine profiles in an early primary (oral challenge with eggs) E. granulosus infection. Infection with E. multilocularis eggs induced low levels of interferon- (IFN-) gamma, IL-2, and IL-4 at the beginning and high levels at the end of the infection [35, 36], and a similar immune profile in the early stage of CE infection is likely.

Given the recent advances in understanding the immunoregulatory capabilities of helminthic infections, it has been suggested that Th2 responses play a crucial role in chronic helminthiasis [37]. However, a remarkable feature of chronic CE infection is the coexistence of IFN-gamma, IL-4, and IL-10 at high levels in human echinococcosis [38]. It is unclear why hydatid infection can induce high levels of both Th1 and Th2 cytokines [39], since they usually downregulate each other [40]. The antigens involved and the amount of antigen released may play key roles. For instance, E. granulosus antigen B skewed Th1/Th2 cytokine ratios towards a preferentially immunopathology-associated Th2 polarization, predominantly in patients with progressive disease [41].

The role of IL-10 in chronic infection largely remains unclear, although one report showed that IL-10 may impair the Th1 protective response and allow the parasite to survive in hydatid patients [42]. The interaction of the Echinococcus organisms with their mammalian hosts may provide a highly suitable model to address some of the fundamental questions remaining, such as the molecular basis underpinning the different effects of IL-10 on different cell types, the mechanisms of regulation of IL-10 production, the inhibitory role of IL-10 on monocyte/macrophage and CD4 T cell function, its involvement in stimulating the development of B cells and CD8 T cells, and its role in the differentiation and function of T regulatory cells.
#### 2.2.1. Correlation of Cytokines with Antibody Production
Studies in mouse models in which cytokines were overexpressed from cytokine expression vectors showed that IL-12 and IFN-gamma induce a parasite-specific IgG2a response in mice infected with protoscoleces of E. granulosus, whereas in IL-4-gene-transfected mice IgG1 was elevated, indicating that the IgG2a and IgG1 antibody isotypes are regulated by Th1 and Th2 cytokines, respectively [43].

In patients with relapsing disease or with viable, growing cysts, IgG1 and IgG4 are elevated and maintained at a high level [21, 44], whereas lower levels of IFN-gamma are produced by peripheral blood mononuclear cells (PBMC) in vitro compared with patients with a primary infection [45, 46]. For some relapsed cases, IFN-gamma levels were undetectable in the sera of patients [47], whereas the concentrations of specific IgG1 and IgG4 declined in cases characterized by cyst infiltration or calcification [44].

This indicates that the IgG4 antibody response is also associated with cyst development, growth, and disease progression, whereas IgG1, IgG2, and IgG3 responses occur predominantly when cysts become infiltrated or are destroyed by the host [21].
#### 2.2.2. T Cell Profile, Cyst Progression, and Efficacy of Treatment
The polarized Th2 cell response is a significant feature of the chronic stage of Echinococcus infection and is modulated by the developmental status of the hydatid cyst. In vitro T cell stimulation showed that cell lines from a patient with an inactive cyst had a Th1 profile, while T-cell lines derived from patients with active and transitional hydatid cysts had mixed Th1/Th2 and Th0 clones [48]. When CE patients were drug-treated with albendazole/mebendazole, a Th1 cytokine profile, rather than a Th2 profile, typically dominated, indicating that Th1 responses have a role in the process of cyst degeneration [46].

Mice injected with a vector expressing IL-4 displayed a six times higher cyst load than control mice [43], indicating that IL-4 plays an important role in hydatid cyst development in the mammalian host.

Cytokine analysis of 177 CE patients showed that Th1 cytokines were related to disease resistance; in contrast, Th2 cytokines were associated with disease susceptibility and chronicity [38]. Both in vitro and in vivo studies have shown that high levels of the Th1 cytokine IFN-gamma were found in patients who responded to chemotherapy, whereas high levels of Th2 cytokines (IL-4 and IL-10) occurred in patients who did not [46, 49–51], indicating that IL-10/IL-4 impair the Th1 protective response, allowing E. granulosus to survive [42, 52].

Self-cure of CE is common in sheep [53], and it most likely also happens in human populations in hyperendemic areas, as patients with calcified cysts are reported [54, 55]. It would be of value to examine the T cell profiles of these self-cured patients, as this may impact on future treatment approaches and vaccine development.
#### 2.2.3. Dendritic Cells
More studies have focused on dendritic cells (DC) and their regulation of other immune responses in CE. E. granulosus antigens influence the maturation and differentiation of DC stimulated with lipopolysaccharide (LPS) [56]. This includes downmodulation of CD1a expression and upregulation of CD86 expression, a lower percentage of CD83(+) cells present, and downregulation of interleukin-12p70 (IL-12p70) and TNF alpha [57]. In addition, hydatid cyst fluid (HCF) modulates the transition of human monocytes to DC, impairs secretion of IL-12, IL-6, or PGE2 in response to LPS stimulation, and modulates the phenotype of cells generated during culture, resulting in increased CD14 expression [56].

HCF antigen B (AgB) has been shown to induce IL-1 receptor-associated kinase phosphorylation and activate nuclear factor-kappa B, suggesting that Toll-like receptors could participate in E. granulosus-stimulated DC maturation [57]. E. multilocularis infection in mice induced DC expressing high levels of TGF and very low levels of IL-10 and IL-12, and the expression of the surface markers CD80, CD86, and CD40 was downregulated [58, 59]. However, the higher level of IL-4 than of IFN-gamma/IL-2 mRNA expression in AE CD4+ pe-T cells indicated that DC play a role in the generation of a regulatory immune response [59].

Different E. multilocularis antigens have been shown to stimulate different DC expression profiles. Em14-3-3 antigen induced CD80, CD86, and MHC class II surface expression, but Em2(G11) failed to do so. Similarly, LPS and Em14-3-3 yielded elevated IL-12, TNF alpha, and IL-10 expression levels, while Em2(G11) did not. The proliferation of bone marrow DC isolated from AE-diseased mice was abrogated [60], indicating that E. multilocularis infection triggered unresponsiveness in T cells.
#### 2.2.4. Summary of Immunological Responses in Echinococcosis and Directions for Further Study
Human helminth infections exhibit many immune downregulatory characteristics, with affected populations showing lower levels of immunopathological disease in cohort studies of allergy and autoimmunity. Model system studies have linked helminth infections with marked expansion of populations of immunoregulatory cells, such as alternatively activated macrophages, T regulatory cells (Tregs), and regulatory B cells [37].

In the established Echinococcus cystic stage, the typical response, in both humans and animals, is of the Th2 type and involves the cytokines IL-4, IL-5, IL-10, and IL-13, the antibody isotypes IgG1, IgG4, and IgE, and expanded populations of eosinophils, mast cells, and alternatively activated macrophages [10, 31]. The precise role of Th2 responses in parasitic infections is still not very clear. It is likely that E. granulosus controls the dialogue between cells of the immune system through the release of antigens which induce Th2 responses and suppression of other responses involving regulatory T and B cells. The Th2 response is significantly associated with chronic infection and may regulate the establishment of the parasite. More details are needed of the regulation by Th2 cytokines of antibody production, echinococcal cyst growth, and the efficacy of treatment. The role of the antibody responses in the host-parasite interaction and chronic infection remains unknown in CE.

It has been shown that in vivo depletion of DC inhibits the induction of a Th2 immune response in chronic helminth infection and that DC alone can drive Th2 cell differentiation [37]. It is not known which DC signals induce the Th2 differentiation programme in naïve T cells [61], but CE represents a good model to address this issue.

As well, a number of other critical questions remain that are important for studying the role of Treg cells in the chronic infection resulting from echinococcosis, such as whether Treg cells are present at greater frequencies in echinococcal infections, as in other infections [62, 63], whether Echinococcus can expand Treg cell populations, and whether the parasites secrete factors which can directly induce the conversion of naïve T cells into functional Treg cells. There are no studies in echinococcosis on regulatory B cells, which are populations of B cells that downregulate immune responses. These cells are most often associated with production of the immunosuppressive cytokine IL-10.

Moreover, many allergic and autoimmune inflammatory conditions can be ameliorated by a range of different helminth infections [64–66], so the question arises: can echinococcal infection reduce the allergic condition?
## 3. Serological Diagnosis
The typically asymptomatic course of infection, both in the early stages and for a long period after establishment, makes early diagnosis of echinococcosis in humans difficult. Physical imaging, used to diagnose CE, is usually applied in the late stages of infection. Early diagnosis of CE by serology may, therefore, provide opportunities for early treatment and more effective chemotherapy. Another practical application of serology in human echinococcosis is the follow-up of treatment.

Although hydatid disease is largely asymptomatic, the host does produce detectable humoral and cellular responses against the infection. Measurement of these responses is a prerequisite for developing effective serodiagnostic tools.
### 3.1. Antibody Detection
Infection with larval cysts of Echinococcus in humans and intermediate animal hosts results in a specific antibody response, mainly of the IgG class, accompanied by detectable IgM, IgA, and IgE antibodies in some patients [9, 31, 76, 77].

In terms of methodology, almost all serological tests developed for immunodiagnosis of human CE cases have incorporated the detection of antibodies. There are considerable differences between the various tests in both specificity and sensitivity. As the sensitivity of a test increases, so generally does the demand for improved antigens, in order that sufficient specificity can be achieved to take advantage of the greater sensitivity. An optimum test should be specific with high sensitivity. Insensitive and nonspecific assays, including the Casoni intradermal test, the complement fixation test (CFT), the indirect haemagglutination (IHA) test, and the latex agglutination (LA) test, have been replaced by the enzyme-linked immunosorbent assay (ELISA), the indirect immunofluorescence antibody test (IFAT), immunoelectrophoresis (IEP), and immunoblotting (IB) in routine laboratory application [78].

A comparison of the diagnostic sensitivity and specificity of IEP, ELISA, and IB in detecting IgG antibodies in patient sera to native and recombinant AgB and a hydatid fluid fraction (HFF) showed that HFF-IB gave the highest sensitivity (80%), followed by ELISA (72%) and IEP (31%). The diagnostic sensitivity significantly decreased as cysts matured (from type I-II to type VII, classified by ultrasound). Recombinant and native AgB-IB yielded similar levels of sensitivity (74%), but a large proportion of clinically or surgically confirmed CE patients (20%) were negative. In these patient sera, IB used to assess the usefulness of another recombinant E. granulosus molecule (elongation factor-1 beta/delta) for detecting IgE antibodies yielded a positivity of 33%. Serological tests developed for determining anti-Echinococcus IgE in serum usually express results qualitatively or semiquantitatively, in titres or units specific for the test kit [20, 79, 80].

The serodiagnostic performance of a range of different antigens and the various methods available for immunodiagnosis have been reviewed in depth [10, 31]. Some recent studies are referred to in Table 1, with the sensitivity and specificity of the individual tests listed. Some antigens, such as native AgB and its recombinant proteins, yielded reasonable diagnostic performance using panels of sera from clinically confirmed cases of echinococcosis and other helminth infections. However, when the antigens were used for screening human populations in hyperendemic communities, they showed high seropositivity rates, although these rates had a low correlation with ultrasound (US) monitoring of individual subjects [81].
Table 1. Characteristics of assays using different antigens from E. granulosus developed after 2003 for immunodiagnosis of cystic echinococcosis.
| CE (n) | Healthy controls (n) | Other diseases (n) | Antigen | Assay method | Sensitivity (%) | Specificity (%) | Ig isotype | Refs. |
|---|---|---|---|---|---|---|---|---|
| 44 | — | 43 | 8 kDa | WB | 47.7 | 51.2 | IgG | [67] |
| 44 | — | 43 | 16 kDa | WB | 45.5 | 67.4 | IgG | [67] |
| 44 | — | 43 | 24 kDa | WB | 68.2 | 62.8 | IgG | [67] |
| 36 | 36 | — | AgB | ELISA | 91.7 | 97.2 | IgG | [68] |
| 102 | 95 | 68 | rAgB1 | ELISA | 88.2 | 80.9 | IgG | [69] |
| 102 | 95 | 68 | rAgB2 | ELISA | 91.2 | 93 | IgG | [69] |
| 875 | 57 | 39 | AgB | Dot-WB | 68.4 | 93.4 | IgG | [70] |
| 857 | 57 | 39 | AgB | ELISA | 57.4 | 93.4 | IgG | [70] |
| 324 | 70 | 500 | AB | WB | 86.4 | 92 | IgG | [71] |
| 155 | 110 | 58 | ? | ELISA | 73.6 | 99.1 | IgE | [72] |
| 155 | 110 | 58 | ? | ELISA | 90.3 | 90.9 | IgG | [72] |
| 155 | 110 | 58 | HCF | WB | 90.1 | 94.5 | IgG | [72] |
| 324 | 70 | 500 | EpC1 | WB | 88.7 | 95.6 | IgG | [71] |
| 95 | 37 | — | HSP20 | — | 64 | — | IgG1,4 | [73] |
| 97 | 37 | 58 | Eg19 | WB | 10 | 100 | IgG | [74] |
| 102 | 95 | 68 | E14t | ELISA | 35.3 | 91.7 | IgG | [69] |
| 102 | 95 | 68 | C317 | ELISA | 58.8 | 80.9 | IgG | [69] |
| 60 | — | — | P5 | WB | 97 | 11 | — | [75] |

ELISA: enzyme-linked immunosorbent assay; WB: western blotting; dELISA: dot enzyme-linked immunosorbent assay.

Recently developed dipstick assays [82] are considered to be valuable methods for CE serodiagnosis. One dipstick assay exhibited 100% sensitivity and 91.4% specificity when tested on sera from 26 CE patients and from 35 subjects with other parasitic infections, using camel hydatid cyst fluid as antigen [83]. Since the dipstick assay is extremely easy to perform, with a visually interpretable result within 15 min, in addition to being both sensitive and specific, the test could be an acceptable alternative for use in clinical laboratories lacking specialized equipment or the technological expertise needed for western blotting or ELISA. Similarly, a new 3-minute rapid dot immunogold filtration assay (DIGFA) for serodiagnosis of human CE and AE has been developed using four native antigen preparations: crude and partially purified hydatid cyst fluid extracts from E. granulosus (EgCF and AgB), E. granulosus protoscolex extract (EgP), and E. multilocularis metacestode antigen (Em2) [70]. Like the dipstick assay, the test incorporates a simple eye-read colour change and achieved an overall sensitivity of 80.7% for human CE and 92.9% for human AE in a hospital diagnostic setting [70]. These rapid tests can be used both for clinical diagnostic support and in combination with ultrasound for mass screening in areas endemic for CE and AE.

Standardization of techniques and antigenic preparations and the characterization of new antigens are urgently required to improve the performance of hydatid immunodiagnosis. Antigens used in current tests are either cyst fluid or crude homogenates of the parasite collected from domestic animals. However, the supply of antigenic sources can often be limited, even for laboratory use. Since the preparation of purified echinococcal antigens relies on the availability of parasitic material, and the quality control of this material is difficult to standardize for large-scale production, this can impact substantially on the sensitivity and specificity of the available immunodiagnostic tools.
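The sensitivity and specificity figures of the kind listed in Table 1 are simple proportions computed from assay outcomes against a reference diagnosis. As a minimal illustration, the Python sketch below reproduces the first row of Table 1, assuming the reported percentages were derived this way; the individual true/false counts are back-calculated for illustration and are not reported in [67].

```python
# Minimal sketch: diagnostic sensitivity and specificity of a
# serological assay from raw counts. The counts below are
# back-calculated from the first row of Table 1 (8 kDa antigen, WB)
# purely for illustration; they do not appear in the cited study.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of confirmed cases the assay calls positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of non-case sera the assay calls negative."""
    return true_neg / (true_neg + false_pos)

# 44 confirmed CE sera, of which 21 test positive;
# 43 other-disease sera, of which 22 test negative.
sens = sensitivity(true_pos=21, false_neg=44 - 21)
spec = specificity(true_neg=22, false_pos=43 - 22)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
# -> sensitivity = 47.7%, specificity = 51.2%
```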
### 3.2. Antigen Detection
Antibody detection is likely to indicate exposure to an Echinococcus infection, but it may not necessarily indicate the presence of an established and viable infection, or of disease. Serum antibodies may persist for a prolonged period, up to 10 years after hydatid cyst removal [84]. In addition, the degree of antibody response may be related to the location and condition of a mature hydatid cyst. For instance, hydatid cysts in human lung, spleen, or kidney tend to be associated with lower serum antibody levels [9]. Furthermore, in Echinococcus-endemic villages, up to 26% or more of the general population may have antibodies to HCF antigens, while only about 2% of the villagers have hydatid cysts [81, 85, 86], indicating that antibody levels may not necessarily reflect the true prevalence of CE.

Antigen detection may provide a suitable alternative. Serum antigen detection may also be less affected by hydatid cyst location and provides a tool for serological monitoring of antiparasitic therapy [87]. Circulating antigen (CAg) in CE patient sera can be detected using ELISA directly or indirectly, and, against titrated cyst fluid standards, CAg concentrations have been shown to vary from 100 to 700 ng/mL [88].

Antigen detection assays depend principally on the binding of specific polyclonal or monoclonal antibodies to parasite antigen present in serum or urine. A number of different assays have been developed to detect echinococcal antigens. The standard double antibody sandwich ELISA is a common method for measuring the presence and/or concentration of circulating parasite antigens. In the test, antibody raised to the targeted protein is coated onto a microtiter plate to capture antigen (Figure 1). The same antibody, enzyme labelled, is commonly used in the tertiary layer of the assay. This type of antigen capture therefore relies on the presence of multiple binding sites on the target antigen(s). Efforts to detect CAg in CE patients have been reviewed extensively by Craig et al. [85].
Figure 1. Schematic of ELISA and immuno-PCR for detecting circulating antigen in serum. (a) Sandwich ELISA. (1) Plate is coated with a capture antibody; (2) serum sample is added, and any antigen present in the serum binds to the capture antibody; (3) detecting antibody conjugate is added and binds to the antigen; (4) substrate is added and is converted by the enzyme to a detectable form. (b) Direct ELISA. (1) Plate is coated with diluted serum containing antigen; (2) detecting antibody is added and binds to antigen; (3) enzyme-linked secondary antibody is added and binds to the detecting antibody; (4) substrate is added and is converted by the enzyme to a detectable form. (c) Capture immuno-PCR. (1) Plate is coated with capture antibody; (2) serum sample is added; (3) biotinylated detecting antibody is added and binds to antigen; (4) streptavidin and biotinylated reporter DNA are added, and the biotinylated antibody and biotinylated reporter DNA are linked by streptavidin; (5) primers and PCR components are added, and PCR or real-time PCR is undertaken to quantify antigen. (d) Non-capture immuno-PCR. Serum sample is coated on the plate, and the remainder of the steps are as for the capture immuno-PCR (c).

CAg in serum is normally in the form of a circulating immune complex (CIC), with some in free form. Therefore, the serum needs to be treated with acid buffer or polyethylene glycol (PEG) to release and concentrate the circulating antigens. Acid treatment (0.2 M glycine/HCl) of CE patient serum is a straightforward way to dissociate CIC [85]. In a comparison of acid-treatment and PEG precipitation methods, all the sera of 30 confirmed positive cases of CE had detectable levels of antigen in the acid-treated sera [30]. However, only 23 (77%) and 26 (87%) of the 30 confirmed-case sera had free antigen as well as CIC of an 8 kDa antigen in the untreated and in the PEG-precipitated sera, respectively. None of the sera from other patients with parasitic infections or viral hepatitis had any detectable levels of 8 kDa antigen in the untreated, acid-treated, or PEG-precipitated serum samples. These investigations, therefore, suggested that the demonstration of circulating antigen employing monospecific antibodies to affinity-purified 8 kDa antigen in acid-treated sera is more efficient than the detection of free circulating antigen or CIC in untreated or PEG-precipitated sera [89].

IgM CICs tend to be positively associated with active hydatid disease [85, 90]. Combining the measurement of circulating antibody, CICs, and CAg increased diagnostic sensitivity from 77% to 90% compared with measurement of serum antibody alone [91]. Antigens in soluble CICs from CE patients have been characterized by separating them on SDS-PAGE [85] or by ion-exchange fast protein liquid chromatography (FPLC) [92]. Both studies indicated a candidate antigen detectable in serum, with an approximate relative molecular mass of 60–67 kDa, which is also present in cyst fluid.

Comparison of CAg and IgG antibody detection using ELISA, together with western blotting, showed a relatively low sensitivity (43%) for detection of specific serum antigen in CE, compared with 75% for IgG antibodies [93]. However, the specificity of this CAg ELISA was 90% when tested against sera from AE patients and 100% against human cysticercosis sera. The limited cross-reactivity may make CAg detection practical for diagnosis of CE in areas where AE and cysticercosis are coendemic.
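As noted above, CAg concentrations are read against titrated cyst fluid standards. The Python sketch below illustrates one simple way such a readout can be done: interpolating a patient optical density (OD) on a standard curve. All standard concentrations and OD values here are hypothetical, and log-log interpolation is used as a simplification of the four-parameter logistic fits typically applied to ELISA data.

```python
# Minimal sketch: estimating circulating antigen (CAg) concentration
# from a sandwich-ELISA optical density (OD) via a standard curve of
# titrated cyst fluid standards. All numbers are hypothetical; real
# assays usually fit a 4-parameter logistic curve rather than
# interpolating in log-log space.
import numpy as np

std_conc = np.array([50.0, 100.0, 200.0, 400.0, 800.0])  # ng/mL
std_od = np.array([0.12, 0.21, 0.38, 0.70, 1.25])        # OD450

def cag_from_od(od: float) -> float:
    """Interpolate a concentration for the given OD, linearly in
    log-log space; valid only within the range of the standards."""
    return float(np.exp(np.interp(np.log(od),
                                  np.log(std_od), np.log(std_conc))))

patient_od = 0.55
print(f"estimated CAg ~ {cag_from_od(patient_od):.0f} ng/mL")
```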
The advantage of CAg detection is that it can detect CE in the 54–57% of patients who are serum antibody negative [91, 93]. CAg detection does appear, therefore, to be potentially useful as a secondary test for some suspected CE cases where antibody titers are low [85, 94].

A combination of CAg and antibody detection has been shown to increase the sensitivity from 85% (antibody only) to 89% (antibody + CAg) in an ELISA study of 115 surgically confirmed hydatid patients, 41 individuals with other parasitic and unrelated diseases, and 69 healthy subjects [95].

Although it has not yet been applied to echinococcal diagnosis, a technique for antigen detection called immuno-polymerase chain reaction (immuno-PCR) was developed by Sano et al. [96]. It combines the molecular recognition of antibodies with the high DNA amplification capability of PCR. The procedure is similar to conventional ELISA but is far more sensitive and, in principle, could be applied to the detection of single antigen molecules. Instead of an enzyme, a DNA molecule is linked to the detection antibody and serves as a template for PCR (Figure 1). The DNA molecule is amplified, and the PCR product is measured by gel electrophoresis. An improvement of this method is to amplify the DNA fragment by real-time PCR, thereby eliminating post-PCR analysis. Furthermore, real-time PCR is extremely accurate and sensitive, which should make it possible to quantitate very low amounts of DNA-coupled detection antibody with high accuracy.
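As a worked illustration of how sensitivities such as the 85% and 89% figures above follow from patient counts, the sketch below recomputes them from hypothetical positive/negative tallies; the individual counts are assumptions chosen only to reproduce the quoted percentages and are not reported in [95].

```python
def sensitivity_specificity(tp, fn, tn, fp):
    # Sensitivity = true positives / all diseased;
    # specificity = true negatives / all non-diseased.
    return tp / (tp + fn), tn / (tn + fp)

# 115 surgically confirmed CE patients: assume 98 positive by antibody
# ELISA alone (~85%) and 102 positive by antibody + CAg (~89%).
# The control counts (tn, fp) are placeholders for illustration.
sens_ab, _ = sensitivity_specificity(tp=98, fn=115 - 98, tn=110, fp=0)
sens_combo, _ = sensitivity_specificity(tp=102, fn=115 - 102, tn=110, fp=0)
print(f"antibody only: {sens_ab:.0%}; antibody + CAg: {sens_combo:.0%}")
```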
### 3.3. Serodiagnosis: The Future
Almost all available immunodiagnostic techniques, including methods for detecting specific antibodies and circulating parasite antigens in serum or other body fluids, have been applied to diagnosing echinococcosis. However, the tools developed to date are generally applicable for laboratory research purposes only, and none of the available diagnostic tools, kits, or methods is generally accepted by clinical physicians. Nevertheless, such serological tools are potentially important for epidemiological studies, confirmation of infection status, treatment follow-up, and the monitoring of control programs, and efforts should continue so that new assays for improved, practical diagnosis of echinococcosis are developed.
---
*Source: 101895-2011-12-25.xml* | 2012 |
# Heart Rate Variability during Auricular Acupressure at Heart Point in Healthy Volunteers: A Pilot Study
**Authors:** Dieu-Thuong Thi Trinh; Que-Chi Thi Nguyen; Minh-Man Pham Bui; Van-Dan Nguyen; Khac-Minh Thai
**Journal:** Evidence-Based Complementary and Alternative Medicine
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1019029
---
## Abstract
Heart rate variability (HRV) is the variation in time between each heartbeat. Increasing HRV may contribute to improving autonomic nervous system dysfunctions. Acupuncture stimulation of the vagus plexus in the ear is considered a method that can improve HRV. In this pilot study, we examined 114 healthy volunteers at the Faculty of Traditional Medicine, University of Medicine and Pharmacy at Ho Chi Minh City, from January to May 2020. During a 20-minute interval, participants were stimulated twice at the heart acupoint in the left ear with a Vaccaria ear seed. The heart rate and HRV values were monitored before, during, and after acupressure every 5 minutes. When we compared the experimental group with the control group, HRV significantly increased in the stages of stimulated auricular acupressure compared with the stages before and after the auricular acupressure (p=0.01, p=0.04, p=0.04, and p=0.02), while the difference was not statistically significant compared with the nonstimulated phases (p=0.15, p=0.28). The changes in the other values, including SDNN (standard deviation of the average NN), RMSSD (root mean square of successive RR interval differences), LF (low-frequency power), and HF (high-frequency power), were not statistically significant between groups at any stage (p>0.05). Based on the results, we observed an increase in HRV when conducting auricular acupressure with stimulation at the heart acupoint on the left ear. This suggests a direction for further studies on clinical application in patients with autonomic nervous disorders.
---
## Body
## 1. Introduction
The time interval between two consecutive heartbeats is called the heart interval, and the difference in each interval produces heart rate variability (HRV) [1, 2]. HRV is measured in milliseconds (ms). HRV is influenced by various factors such as age, sex, physique, health status, the frequent use of alcohol, tobacco, or certain drugs, and physiological conditions such as circadian rhythms and contextual factors at the time of measurement. HRV is therefore considered a proxy for the health status of the whole system [3]. Many studies have shown that chronically low HRV values are associated with sudden cardiac death, depression, and diabetic neuropathy. Therefore, improving HRV may contribute to the improvement of the related diseases [4–6].

The HRV measurement standards were developed by the European Society of Cardiology (ESC) and the North American Society of Pacing and Electrophysiology (NASPE) in 1996 and have become a popular measurement standard to this day. Among these standards, the time-domain and frequency-domain methods of measuring HRV components are commonly applied in many studies on HRV [7–11]. Through HRV components, it is possible to assess the sympathetic and parasympathetic activities of the heart.

Besides the ECG, which is used as the gold standard to measure HRV, the development of science and technology has produced many new methods to measure HRV, such as photoplethysmography (PPG) through smartphone, smartwatch, ear strap, chest strap, or wrist strap devices. These methods allow for more convenient and cost-effective HRV monitoring [12]. Studies have suggested that the PPG method is equivalent to ECG, with a high correlation coefficient [13, 14]. The Kyto HRM-2511B, a compact, wearable in-ear device, allows for simpler HRV measurements in comparison to ECG [13]. While ECG is considered the gold standard, using devices with PPG technology has become more popular.

In traditional medicine, acupuncture in the distribution areas of the vagus nerve (a part of the autonomic nervous system) in the ear is a method that affects HRV by increasing parasympathetic activity, contributing to a beneficial increase in HRV [9]. Accordingly, the middle of the ear cavity is considered to be the place where most of the vagus nerve is located, which corresponds to the heart acupoint [4]. This area is expected to be highly effective and of low risk, especially in the left ear. However, research on the effect on HRV of stimulating the heart acupoint in the ear alone is still limited [2]. Moreover, existing studies concern needling acupoints in the ear, and there has been no study that uses an ear seed, a tiny device that stimulates acupoints by pressing, without needles, and that patients can apply by themselves.

Therefore, this study set out to evaluate how the use of an ear seed at the heart point affects the autonomic nervous system through the HRV value. Along with time-domain and frequency-domain measurements, we also examined whether there were any changes during the auricular acupressure process. Furthermore, this study investigated undesirable events occurring during auricular acupressure at the heart acupoint. The study is expected to form the basis for using auricular acupressure to improve HRV in the treatment of related diseases in further studies.
## 2. Materials and Methods
Participants were healthy volunteers who lived in Ho Chi Minh City. The research ethics was approved by the Medical Ethics Council of the University of Medicine and Pharmacy at Ho Chi Minh City. Volunteers signed an informed consent form before the study. Participants were randomly assigned into two groups by the GraphPad software version 9.1. Participants in the experimental group received auricular acupressure at the left heart acupoint, while the control group received placebo auricular acupressure with the ear seed removed but the sticker kept attached at the left heart acupoint. The study was designed as a single-blinded pilot study: only participants were blinded and did not know which group they belonged to.

The sample size n was calculated according to the formula

$$n = \frac{(z_{1-\beta} + z_{1-\alpha/2})^2 \cdot \sigma^2}{d^2},$$

where n is the required sample size, $z_{1-\beta} = 0.83$, and $z_{1-\alpha/2} = 1.96$. Following Clancy et al. [7], d = 70.35 and σ = 178.65. With a 10% expected loss, n was calculated as 57 per group, so the total sample of the study was 114.
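A short numerical check of this calculation follows (a sketch only; the z-values, σ, and d are taken from the text, and the rounding convention for the loss adjustment is an assumption):

```python
from math import ceil

z_1_beta = 0.83     # z_(1-beta)
z_1_alpha2 = 1.96   # z_(1-alpha/2), two-sided alpha = 0.05
sigma = 178.65      # expected standard deviation (Clancy et al. [7])
d = 70.35           # difference to detect (Clancy et al. [7])

n_raw = ((z_1_beta + z_1_alpha2) ** 2 * sigma ** 2) / d ** 2
print(f"n before loss adjustment: {n_raw:.1f}")   # about 50 per group

# Inflating for an expected 10% loss gives roughly 56-57 per group;
# the paper reports 57 per group, i.e. 114 participants in total.
print(f"n after 10% loss: {ceil(n_raw / 0.9)}")
```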
### 2.1. Inclusion Criteria
The inclusion criteria included healthy men and women with no history of cardiovascular disease, diabetes, or thyroid disease, aged between 20 and 29, with vital signs within the normal range (regular pulse and heart rate, resting heart rate 60–100 beats/min; resting blood pressure from 90/60 mmHg to ≤140/90 mmHg; breathing rate 16 ± 3 breaths/minute; temperature 36.6–37.5°C; and SpO2 ≥ 95%). All volunteers had a body mass index (BMI) from 18.5 to 23 kg/m2 and had no psychological stress problem on the study day (confirmed by a DASS-21 questionnaire stress score below 15 points).
### 2.2. Exclusion Criteria
The exclusion criteria were volunteers whose ages were outside the range above or who had used stimulants such as beer, alcohol, coffee, or tobacco within 24 hours before the study. Volunteers who had played sports within 2 hours before the study or had skin injuries in the area of auricular acupressure were also excluded, as were women who were menstruating, pregnant, or breastfeeding and people who had used drugs affecting blood pressure or heart rate within the previous month.
### 2.3. Criteria to Stop Research
The criteria were participants who wanted to stop participating in the study or who showed symptoms of parasympathetic overstimulation such as dizziness, nausea, vomiting, pain, or allergy at the stimulus area. These cases would be recorded as unexpected events.
### 2.4. HRV Measurement
Values were monitored before, during, and after auricular acupressure using the Kyto HRM-2511B, a photoplethysmography device attached to the right earlobe of the participants. Monitored values included heart rate; heart rate variability (HRV, the variation in the time intervals between consecutive heartbeats, i.e., between two successive R-waves of the QRS signal on the electrocardiogram (RR intervals), measured in milliseconds); the time-domain components SDNN (standard deviation of RR intervals) and RMSSD (root mean square of successive differences between RR intervals); and the frequency-domain components LF (low-frequency power, 0.04–0.15 Hz) and HF (high-frequency power, 0.15–0.4 Hz).

Auricular acupressure: we conducted auricular acupressure at the heart acupoint on the left ear, located in the middle of the ear cavity, using a sticker with a Vaccaria ear seed (experimental group) or a sticker without a seed (control group) for 20 minutes with two rounds of stimulation. Each stimulation lasted 30 seconds with two acupressure movements per second, giving a total of 60 acupressure movements per stimulation.

HRV was monitored every 5 minutes. The measurement profile and measurement times (T1–T6) are shown schematically in Figure 1.

Figure 1
Study protocol. T1: before auricular acupressure, T2: auricular acupressure without stimulation, T3: the 1st auricular acupressure with stimulation in 30 sec, T4: auricular acupressure without stimulation, T5: the 2nd auricular acupressure with stimulation in 30 sec, and T6: after auricular acupressure.
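For readers who want to see how the time-domain indices defined above fall out of a list of RR intervals, here is a minimal sketch; the RR values are hypothetical, and the segment is far shorter than a real 5-minute recording.

```python
import numpy as np

def time_domain_hrv(rr_ms):
    # SDNN: standard deviation of all RR intervals (ms).
    # RMSSD: root mean square of successive RR differences (ms).
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return sdnn, rmssd

# Hypothetical RR intervals (ms) at roughly 75 bpm.
rr = [812, 790, 805, 821, 798, 776, 803, 815, 792, 808]
sdnn, rmssd = time_domain_hrv(rr)
print(f"SDNN = {sdnn:.2f} ms, RMSSD = {rmssd:.2f} ms")
```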
### 2.5. General Protocol
The study was conducted in a quiet room from 8:00 to 10:00 A.M. at 26 ± 1°C. Participants rested for 10 minutes, and then their pulse rate, heart rate, blood pressure, breathing rate, and SpO2 were measured. Participants did not speak and did not change posture during acupressure.
### 2.6. Statistical Analysis
Data were analyzed using SPSS version 22.0. The t-test was used to compare baseline characteristics and heart rate of the volunteers between groups at each stage. HRV and the HRV components (SDNN, RMSSD, LF, and HF) before, during, and after acupressure were compared within each group by the Wilcoxon signed-rank test and between the two research groups by the Mann-Whitney U test. Results were considered statistically significant when p<0.05.
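The same comparisons can be reproduced outside SPSS; below is a minimal sketch with SciPy, using simulated HRV values in place of the study data (the generated numbers are assumptions for illustration only).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated HRV values (ms): 57 participants per group, as in the study.
hrv_t1 = rng.normal(53, 5, 57)            # experimental group, stage T1
hrv_t3 = hrv_t1 + rng.normal(1, 2, 57)    # same participants, stage T3 (paired)
hrv_ctrl_t3 = rng.normal(57, 5, 57)       # control group, stage T3

# Within-group, across stages: Wilcoxon signed-rank test (paired samples).
w, p_within = stats.wilcoxon(hrv_t1, hrv_t3)

# Between groups at one stage: Mann-Whitney U test (independent samples).
u, p_between = stats.mannwhitneyu(hrv_t3, hrv_ctrl_t3)

print(f"Wilcoxon signed-rank: W = {w:.1f}, p = {p_within:.3f}")
print(f"Mann-Whitney U: U = {u:.1f}, p = {p_between:.3f}")
```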
## 3. Results
### 3.1. General Characteristics of the Study Population
Table 1 shows the general characteristics of the study population in each group at the beginning of the experiment. The anthropometric and hemodynamic data were within their normal ranges and did not show significant differences between groups. There were no significant differences in sex and age between the experimental and control groups (p>0.05), and the differences in the other baseline characteristics were likewise not significant (t-test, p>0.05, Table 1). Pulse, heart rate, blood pressure, respiratory rate, SpO2, and BMI of all participants were within normal values, as required for the safety of participants.

Table 1
Anthropometric characteristics of the subjects.
| Characteristics | Experimental group (n = 57) | Control group (n = 57) | p value |
| --- | --- | --- | --- |
| Gender: male (n, %) | 27 (47.37) | 28 (49.12) | 0.85a |
| Gender: female (n, %) | 30 (52.63) | 29 (50.88) | |
| Age (years) | 25.54 ± 2.80 | 25.12 ± 2.63 | 0.41b |
| Pulse (bpm) | 73.89 ± 8.67 | 71.88 ± 6.67 | 0.14c |
| HR (bpm) | 73.89 ± 8.67 | 71.88 ± 6.67 | 0.14c |
| SBP (mmHg) | 109.18 ± 10.99 | 107.39 ± 8.45 | 0.33c |
| DBP (mmHg) | 72.91 ± 5.56 | 72.11 ± 4.80 | 0.41c |
| Breath (bpm) | 16.26 ± 1.99 | 16.21 ± 1.86 | 0.88c |
| SpO2 (%) | 97.09 ± 1.43 | 96.93 ± 1.45 | 0.56c |
| BMI (kg/m2) | 20.28 ± 1.54 | 20.67 ± 1.51 | 0.17c |

Values are mean ± SD unless indicated otherwise. Note: a: Fisher's exact test; b: Mann-Whitney U test; c: t-test. HR = heart rate, SBP = systolic blood pressure, DBP = diastolic blood pressure, BMI = body mass index, SD = standard deviation.

When comparing each index between groups, including pulse, heart rate, blood pressure, respiratory rate, SpO2, and BMI, the results were not statistically significant (p>0.05). This reflects the random distribution of participants into the two groups, thereby ensuring accuracy and objectivity when comparing them.
### 3.2. Heart Rate and HRV in Each Stage of the Study
The heart rates at each stage for the two study groups are shown in Table 2. There was no statistically significant difference in heart rate between groups at any of the stages T1–T6 (p>0.05, t-test).

Table 2
Heart rate behavior between groups.
| Stage | Experimental group (n = 57) | Control group (n = 57) | p value |
| --- | --- | --- | --- |
| T1 | 73.89 ± 8.67 | 71.88 ± 6.67 | 0.17 |
| T2 | 73.49 ± 9.21 | 72.14 ± 6.98 | 0.38 |
| T3 | 70.53 ± 9.11 | 71.07 ± 7.16 | 0.72 |
| T4 | 73.67 ± 9.37 | 71.49 ± 7.13 | 0.17 |
| T5 | 70.91 ± 8.66 | 71.72 ± 6.69 | 0.58 |
| T6 | 73.16 ± 8.79 | 71.87 ± 7.29 | 0.40 |

Heart rate in bpm, mean ± SD. SD: standard deviation.

Table 3 shows the HRV at each stage for both groups. There was a statistically significant difference in HRV between groups at the T1, T2, T4, and T6 stages (p<0.05, Mann-Whitney U test). There was no statistically significant difference between groups at stages T3 and T5 (p>0.05, Mann-Whitney U test).

Table 3
Heart rate variability in each stage between groups.
| Stage | Experimental group (n = 57) | Control group (n = 57) | p value |
| --- | --- | --- | --- |
| T1 | 53.00 (49.00, 57.50) | 57.00 (51.00, 61.50) | 0.01 |
| T2 | 53.00 (47.00, 55.00) | 55.00 (50.00, 63.00) | 0.04 |
| T3 | 54.00 (50.00, 58.00) | 57.00 (50.00, 63.00) | 0.15 |
| T4 | 52.00 (47.50, 55.00) | 55.00 (48.00, 61.50) | 0.04 |
| T5 | 54.00 (50.50, 59.50) | 56.00 (51.00, 63.00) | 0.28 |
| T6 | 52.00 (49.00, 57.50) | 56.00 (50.00, 63.00) | 0.02 |

HRV in ms, median (IQR, 25th–75th percentile). IQR: interquartile range.

In the experimental group, heart rate during the stages of auricular acupressure with stimulation was lower than before acupressure, after acupressure, and during the stages without stimulation (Figure 2(a)). HRV during the stages with stimulation was greater than HRV before and after acupressure but was not statistically significantly different from HRV during the stages of auricular acupressure without stimulation (Figure 2(b)).

Figure 2
Heart rate and HRV in each stage. (a) Heart rate. (b) HRV.

In the control group, the HRV differences between stages were not statistically significant (Figure 2(b)).
### 3.3. Auricular Acupressure at Heart Acupoint Alters Elements of HRV
The variation of the time-domain and frequency-domain indices is shown in Table 4. There were no significant differences in SDNN and RMSSD between groups (Mann-Whitney U test, p>0.05), and no significant differences in LF and HF between groups at any stage (Mann-Whitney U test, p>0.05).

Table 4
The variation of the time-domain (SDNN, RMSSD) and frequency-domain (LF, HF) indices.
Time-domain indices, median (IQR):

| Stage | SDNN, experimental (n = 57) | SDNN, control (n = 57) | p value | RMSSD, experimental (n = 57) | RMSSD, control (n = 57) | p value |
| --- | --- | --- | --- | --- | --- | --- |
| T1 | 42.97 (33.19, 53.52) | 42.71 (34.10, 55.84) | 0.90 | 30.85 (19.51, 51.33) | 32.41 (22.43, 46.10) | 0.90 |
| T2 | 43.65 (27.68, 53.84) | 51.02 (34.89, 58.44) | 0.65 | 29.11 (18.35, 49.64) | 28.82 (21.70, 50.53) | 0.65 |
| T3 | 42.97 (26.65, 54.31) | 44.81 (33.14, 54.79) | 0.98 | 32.56 (19.42, 49.80) | 34.47 (24.85, 53.97) | 0.98 |
| T4 | 45.30 (37.71, 57.54) | 46.30 (40.15, 54.43) | 0.82 | 31.24 (19.93, 51.26) | 33.32 (22.10, 50.75) | 0.82 |
| T5 | 42.04 (29.60, 51.60) | 46.16 (35.37, 54.62) | 0.35 | 31.94 (17.24, 54.68) | 31.28 (25.75, 50.46) | 0.35 |
| T6 | 45.75 (30.66, 55.94) | 47.07 (37.20, 61.61) | 0.29 | 27.52 (20.86, 47.27) | 28.39 (21.66, 49.96) | 0.29 |

Frequency-domain indices, median (IQR):

| Stage | LF, experimental (n = 57) | LF, control (n = 57) | p value | HF, experimental (n = 57) | HF, control (n = 57) | p value |
| --- | --- | --- | --- | --- | --- | --- |
| T1 | 321.23 (167.65, 544.03) | 290.37 (168.71, 680.75) | 0.90 | 292.81 (131.04, 568.82) | 281.96 (147.09, 519.95) | 0.90 |
| T2 | 292.14 (184.77, 562.70) | 336.65 (169.43, 603.96) | 0.65 | 278.62 (141.36, 545.21) | 272.03 (161.88, 434.44) | 0.65 |
| T3 | 301.54 (176.90, 585.02) | 318.40 (156.40, 634.21) | 0.98 | 223.11 (113.14, 516.47) | 270.55 (176.56, 496.28) | 0.98 |
| T4 | 308.70 (173.52, 593.46) | 322.61 (174.94, 659.84) | 0.82 | 269.36 (139.44, 554.53) | 279.77 (186.05, 489.68) | 0.82 |
| T5 | 283.66 (146.31, 511.77) | 364.90 (173.39, 609.40) | 0.35 | 287.66 (153.18, 501.74) | 316.51 (215.40, 555.57) | 0.35 |
| T6 | 255.83 (162.03, 513.95) | 325.11 (161.59, 681.28) | 0.29 | 252.31 (115.47, 597.09) | 343.77 (219.05, 631.33) | 0.29 |

IQR: interquartile range.
## 4. Discussion
With the aim of investigating how the use of an ear seed at the heart point affects the autonomic nervous system through the HR, HRV, SDNN, RMSSD, LF, and HF values of 114 volunteers, we suggest that auricular acupressure on a point within the distribution of the vagus nerve can significantly affect autonomic cardiovascular regulation in healthy people.
### 4.1. Heart Rate
The first value investigated when stimulating the heart auricular acupoint was the heart rate. In this study, the difference in heart rate between the experimental and control groups was not statistically significant (Table 2). Within the experimental group, heart rate was reduced during stimulated acupressure of the vagus nerve area in the ear, a result similar to the study of Gao et al. on healthy volunteers [8, 15–17]. This decrease in heart rate is a precedent for further research on heart rate variability.
### 4.2. HRV
In the experimental group, the HRV value increased during the stages of auricular acupressure with stimulation compared to the other stages (Table 3), a result similar to the studies of Clancy et al. [7, 8]. However, the increase in HRV in the experimental group was statistically significant only for the stimulation stages compared with the stages before and after acupressure; the difference from the nonstimulated acupressure stages was not statistically significant (p>0.05, Figure 2(b)). Meanwhile, heart rate decreased in both the nonstimulated and the stimulated acupressure stages compared with before and after acupressure. By the definition of HRV, when the heart rate drops, there is more room for variation between consecutive heartbeats, which should lead to higher HRV. In our study, however, the decrease in heart rate was not accompanied by an HRV difference between the stimulated and nonstimulated acupressure stages. Is there, then, a conflict between the heart rate and HRV values in our study?

It is known that HRV is linked to heart rate not only physiologically, via autonomic nerve activity, but also mathematically [18]. Experimental studies have shown that at least part of the HRV value is influenced by an intrinsic factor of the sinus node, namely the cycle length of myocardial cells, or the RR period, which has a nonlinear relationship with neurotransmitter concentrations at the sinus node [19, 20]. Consequently, at the same intensity of parasympathetic activity, a longer interval between RRs leads to a higher HRV [20, 21]. Therefore, although parasympathetic activity lowers the heart rate, HRV changes may be statistically insignificant, because HRV also depends on the neurotransmitters affecting the RR interval length between cardiac cycles.

The relationship between RR distance and heart rate described in the SACHA study shows that at the same heart rate, the differences between RR intervals may not be equal, leading to different HRV [22]. This may partly explain the mismatch between the reduction in heart rate and the increase in HRV observed in our study.

The increase in HRV when stimulating the heart auricular acupoint may be a useful approach in clinical practice. Previous studies have shown that stimulating the vagus nerve in the ear so as to increase HRV has an antiarrhythmic effect, potentially reducing the recurrence of atrial fibrillation when combined with an antiarrhythmic drug [23, 24]. Therefore, future studies determining the influence of auricular acupuncture at heart acupoints in patients with atrial fibrillation are particularly promising.
### 4.3. The Variation of Time Domain
#### 4.3.1. SDNN
The SDNN is the standard deviation of normal sinus (NN) intervals, measured in milliseconds (ms). In 5-minute recordings, SDNN values are mainly influenced by parasympathetic-mediated respiratory sinus arrhythmia (RSA) [3].

In our study, the change in SDNN was not statistically significant (p>0.05) across the study stages in either the experimental or the control group. By contrast, in the study of Boehmer et al. [15], SDNN increased after auricular acupuncture. To clarify this difference, it is important to compare the inclusion criteria and methods of the two studies. In Boehmer's study, the participants were young men (23 ± 2 years old) who had regularly engaged in moderate-intensity physical activity over the previous 12 months (7 ± 3 hours/week). During that study, participants received auricular acupressure at the heart acupoint with HRV measured in the supine position, then switched to a standing position for a further HRV measurement. It can therefore be proposed that, in Boehmer's study, a baroreceptor reflex occurred in participants with high sensitivity, leading to an increase in RSA [25], and an increase in RSA leads to an increase in SDNN.

The SDNN is considered the "gold standard" in cardiovascular risk stratification, especially in 24-hour recordings [26]; the SDNN value predicts morbidity and mortality. The study by Kleiger et al., based on 24 hours of ECG monitoring and HRV analysis, classified cardiovascular patients with SDNN values of less than 50 ms as unhealthy, 50–100 ms as having compromised health, and over 100 ms as healthy. Poststroke patients with SDNN values above 100 ms had a 5.3-fold lower risk of death compared with patients with values below 50 ms [27]. Whether increasing SDNN can reduce mortality risk is still being studied.
#### 4.3.2. RMSSD
When surveying RMSSD at each stage, we noted that the differences in RMSSD were not statistically significant (p>0.05) between the two groups, which was also observed by Boehmer et al. [15].

Among the time-domain HRV indices, RMSSD is affected more by parasympathetic activity than SDNN is. Compared with the frequency-domain indices, RMSSD correlates with HF but is less affected by respiratory frequency than HF. It can therefore be suggested that RMSSD is the main time-domain index for estimating changes in parasympathetic activity through HRV [28]. In our study, however, although parasympathetic activity was expected to increase, the variation in RMSSD was not statistically significant. Although RMSSD reflects parasympathetic activity in the heart, it does so indirectly, mainly through the alteration of the R peaks produced by the sinus node, whose activity is also influenced by receptors on the sinus node cells [29].
### 4.4. The Frequency-Domain Spectral Analysis
#### 4.4.1. Low-Frequency (LF) Power
In our study, comparing the LF value at each stage (before, during, and after acupressure), the differences were not statistically significant (p>0.05) in either the experimental or the control group (Table 4). This result is similar to the studies of Shen et al. [24] and Lee et al. [10], but different from that of Gao et al. [7, 8, 15]. In Gao's study, the LF value increased in both stages of stimulating the heart point, with electroacupuncture (electric vibrating pen) and with auricular needling (stimulating the acupuncture point with a needle).

Physiologically, stimulation of the vagus nerve increases the activation of baroreceptors and then activates the baroreflex. Many studies have shown that the LF value reflects the activity of the sympathetic nervous system and of the efferent parasympathetic nervous system (A and C fibers), respectively, related to the action of baroreceptors [30–33]. This explains the results of Gao, where stimulation of the vagus nerve in the ear would increase the activation of baroreceptors, leading to an increase in LF.

Moreover, Stauss suggested that when stimulating the vagus nerve in the ear, the hemodynamic changes depend on the number of stimulation sites and the stimulation parameters (potential, frequency, pulse length, and current direction), which lead to different changes in heart rate, blood pressure, and the baroreflex [34]. In our study, we used Vaccaria ear seeds to perform auricular acupressure, so the stimulation was weaker than in Gao's study, which used electroacupuncture. Lower sensitivity and a weaker baroreflex would therefore lead to a change in LF value that is not statistically significant.

The baroreflex plays an important role in hemodynamic stability and cardiovascular protection and is also a strong prognostic factor in some cardiovascular diseases such as hypertension and chronic heart failure [35]. LF is considered an indicator of the sensitivity of the baroreceptor reflex; therefore, measuring the LF value is a noninvasive way to assess baroreceptor sensitivity [36]. When the heart point was stimulated with a Vaccaria seed, the LF value did not change.
#### 4.4.2. High-Frequency (HF) Power
Theoretically, stimulating the heart point in the ear, which lies in the territory of the vagus nerve, should increase parasympathetic activity; accordingly, HF, which is modulated by parasympathetic activity, should increase. Nevertheless, in our study the HF value changed, but the differences between the stages before, during, and after auricular acupressure were not statistically significant in either group (p>0.05, Table 4).

Hayano and Yuda suggested that the HF value of HRV does not necessarily reflect cardiac parasympathetic functioning. When heart rate oscillation is observed through the autonomic nerve, the HF band is regulated by cardiac parasympathetic activity; HF is influenced by parasympathetic activity in the heart in the frequency range of 0.15–0.4 Hz. In addition, RSA is considered a determinant of the HF component. Even if HRV in the HF band decreases or disappears, this does not mean that cardiac parasympathetic arrest or autonomic dysfunction is occurring; it can occur when the respiratory rate falls outside the HF range, such as during slow breathing below 0.15 Hz (9 breaths/min) or deep, fast breathing above 0.4 Hz (24 breaths/min) [37]. Therefore, when investigating the HF value to assess cardiac parasympathetic function, it is necessary to monitor respiratory parameters during the survey [38, 39].

Compared with other HRV frequency components such as LF and VLF, the HF component is a weak clinical prognostic factor in short-term HRV measurements [40]. It has been found that HRV observed in the HF band includes variation that is not necessarily mediated by autonomic nerves [37]. This phenomenon is described by various terms such as complex HRV, erratic sinus rhythm, or heart rate fragmentation (HRF) [41]. It is a type of instability characterized by an anomalous appearance of peaks in the RR time series even though the ECG shows sinus rhythm. The occurrence of HRF may confound the association between HF and cardiac parasympathetic function and distort the prognosis, so HF is rarely used in clinical evaluation [37]. However, low HF is often correlated with stress and anxiety disorders; therefore, improving the HF value has positive implications for health [38].
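To make the band definitions concrete, the sketch below estimates LF and HF power from an RR series by resampling it evenly and integrating a Welch periodogram over 0.04–0.15 Hz and 0.15–0.4 Hz. This is one common recipe, not necessarily the algorithm used by the Kyto device, and the test signal is synthetic.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def lf_hf_power(rr_ms, fs=4.0):
    # Resample the unevenly spaced RR series onto a uniform time grid,
    # then integrate the Welch power spectrum over the LF and HF bands.
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0          # beat times (s)
    t -= t[0]
    grid = np.arange(0.0, t[-1], 1.0 / fs)
    rr_even = interp1d(t, rr, kind="cubic")(grid)
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(grid)))
    df = f[1] - f[0]
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum() * df   # ms^2
    hf = pxx[(f >= 0.15) & (f < 0.40)].sum() * df   # ms^2
    return lf, hf

# Synthetic 5-minute RR series at ~75 bpm with a respiratory modulation of
# 0.2 cycles/beat * 1.25 beats/s = 0.25 Hz, i.e. inside the HF band.
beats = np.arange(375)
rr = 800 + 40 * np.sin(2 * np.pi * 0.2 * beats)
lf, hf = lf_hf_power(rr)
print(f"LF = {lf:.1f} ms^2, HF = {hf:.1f} ms^2")   # HF should dominate here
```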
#### 4.4.3. Unwanted Reactions
One case of drowsiness was recorded in the experimental group and none in the control group. This can be regarded as both an unwanted reaction and a beneficial effect: if the method were applied to patients with insomnia at the right time, it could have therapeutic benefits [15, 25, 26].
### 4.5. Limitations
First, respiratory frequency was not monitored during the study, so the parasympathetic (respiratory) influence on HF could not be accurately assessed. Second, this is the first study to use this form of acupressure to investigate HRV; we therefore performed it on healthy volunteers to ensure safety and to monitor possible dangerous cardiovascular events during acupressure. Results in healthy people do not represent the intended clinical application to cardiovascular patients, so further research will focus on patients with chronic cardiovascular or HRV-related diseases.
## 4.1. Heart Rate
The first value to be investigated when stimulating the heart auricular acupoint in the ears was the heart rate. In this study, the heart rate in the experimental group was not statistically significant compared with the control group (Table2). This result is similar to the study of Gao et al. on healthy volunteers. When performing stimulated acupressure of the vagus nerve in the ear, the heart rate was reduced [8, 15–17]. The decrease in heart rate is a precedent for further research on heart rate variability.
## 4.2. HRV
As for the experimental group, the HRV value increased in the stage of auricular acupressure with stimulation compared to the other stages (Table3). This result is similar to the study of Clancy et al. [7, 8]. However, when compared to the control group, the increased HRV in the experimental group is only statistically significant in the stimulation stage compared with that of the before and after acupressure. The difference was not statistically significant compared to the nonstimulated acupressure stage p>0.05 (Figure 2(b)). Meanwhile, the heart rate decreased in both the nonstimulated and stimulated acupressure stage compared with that before and after the acupressure. According to the HRV definition, when the heart rate drops, it creates more space for variation between consecutive heart rates, leading to higher HRV. However, in our study, when the heart rate decreased, there was no difference in the stimulated acupressure stage compared with the nonstimulated acupressure stage in HRV. The question is as follows: is there any conflict between heart rate and HRV values in our study?It is known that HRV is not only physiologically linked to heart rate via autonomic nerve activity but also mathematically linked [18]. In experimental studies, it has been shown that at least one part of the HRV value is influenced by the intrinsic factor of the sinus node, namely, the cyclic length of myocardial cells or the period of RR that has a nonlinear relationship with neurotransmitter concentrations at the sinus node [19, 20]. This leads to the situation that with the same intensity of parasympathetic activity, a longer interval between RRs leads to a higher HRV [20, 21]. Therefore, although parasympathetic activity lowers the heart rate, HRV changes may be statistically insignificant because HRV also depends on neurotransmitters affecting the length of RR interval between heart cycles.The relationship between the RR distance and heart rate mentioned in the SACHA study shows that when the heart rate is of the same value, the difference between the RR intervals may not be equal, leading to different HRV [22]. This assumption may partly explain the mismatch between the reduction in heart rate and the increase in HRV observed in our study.The increase in HRV when stimulating heart auricular acupoint can be a potential approach in clinical practice. Previous studies have shown that stimulating the vagus nerve in the ear with an increase in HRV will have an antiarrhythmic effect, potentially reducing the recurrence of atrial fibrillation when combined with an antiarrhythmic drug [23, 24]. Therefore, future studies to determine the influence of auricular acupuncture on heart acupoints in patients with atrial fibrillation are particularly promising.
## 4.3. The Variation of Time Domain
### 4.3.1. SDNN
The SDNN is the standard deviation of normal sinus rhythms, measured in milliseconds (ms). In 5 minutes, the SDNN values were mainly influenced by parasympathetic-mediated respiratory sinus arrhythmia (RSA) [3].In our study, the SDNN change was not statistically significantp>0.05 at the study stages and in both the experimental group and the control group. Nevertheless, in the study of Boehmer et al. [15], SDNN increased after performing auricular acupuncture. To clarify this difference, it is important to compare the inclusion criteria and methods of the two studies. In the study of Boehmer, the participants were young men (23 ± 2 years old), who were regularly engaged in moderate-intensity physical activity over the past 12 months (7 ± 3 hours/week). When conducting the study, participants were performed auricular acupressure at the heart acupoint and measured HRV in the supine position and then switched to a standing position to measure HRV. Thereby, it can be proposed that, in Andreas’ study, a baroreceptor reflex occurred in participants with high sensitivity, leading to an increase in RSA [25]. An increase in RSA leads to an increase in SDNN.The SDNN is considered the “gold standard” in cardiovascular risk stratification, especially in the 24-hour recordings [26]. The SDNN value predicts morbidity and mortality. The study by Kleiger RE et al. based on 24 hours of ECG monitoring and HRV analysis showed that cardiovascular patients with SDNN values less than 50 ms are classified as unhealthy, 50–100 ms corresponds to harmed health, and over 100 ms is classified as healthy. Poststroke patients with SDNN values above 100 ms had a 5.3-fold lower risk of death compared with patients with values below 50 ms [27]. Whether increasing SDNN can reduce mortality risk is being studied.
### 4.3.2. RMSSD
When surveying RMSSD through each stage, we noted that the differences in RMSSD were not statistically significant (p>0.05) between the two groups, which was also observed by Boehmer et al. [15]. Among the time-domain HRV indices, RMSSD is more strongly affected by parasympathetic activity than SDNN is. Compared with the frequency-domain indices, RMSSD is correlated with HF but is less affected by respiratory frequency than HF. RMSSD can therefore be regarded as the main time-domain index for estimating changes in parasympathetic activity through HRV [28]. In our study, however, parasympathetic activity was expected to increase, yet the variation in RMSSD was not statistically significant. Although RMSSD reflects parasympathetic activity in the heart, it is an indirect measure derived from the beat-to-beat alteration of the R peaks produced by the sinus node, and this activity is also influenced by receptors on the sinus node cells [29].
## 4.4. The Frequency-Domain Spectral Analysis
### 4.4.1. Low-Frequency (LF) Power
In our study, when the LF value is compared across the stages before, during, and after acupressure, the differences are not statistically significant (p>0.05) in either the experimental or the control group (Table 4). This result is similar to the studies of Shen et al. [24] and Lee et al. [10], but differs from that of Gao et al. [7, 8, 15]. In Gao's study, the LF value increased both when the heart point was stimulated with electroacupuncture (an electric vibrating pen) and with auricular needling (stimulating the acupuncture point with a needle).

Physiologically, stimulation of the vagus nerve increases the activation of baroreceptors and thereby activates the baroreflex. Many studies have shown that the LF value reflects the activity of the sympathetic nervous system and of the efferent parasympathetic nervous system (A and C fibers) related to the action of baroreceptors [30–33]. This explains Gao's results, where stimulation of the vagus nerve in the ear increased the activation of baroreceptors, leading to an increase in LF.

Moreover, Stauss suggested that when the vagus nerve in the ear is stimulated, the hemodynamic changes depend on the number of stimulation sites and on the stimulation parameters (potential, frequency, pulse length, and current direction), which lead to different changes in heart rate, blood pressure, and baroreflex [34]. In our study, we used Vaccaria (Semen Vaccariae) ear seeds to perform auricular acupressure, so the stimulation was weaker than in Gao's study, which used electroacupuncture. The lower sensitivity and weaker baroreflex therefore likely explain why the change in the LF value was not statistically significant.

The baroreflex plays an important role in hemodynamic stability and cardiovascular protection and is also a strong prognostic factor in cardiovascular diseases such as hypertension and chronic heart failure [35]. LF is considered an indicator of baroreceptor reflex sensitivity; measuring the LF value is therefore a noninvasive way to assess baroreceptor sensitivity [36]. When the heart point was stimulated with a Vaccaria ear seed, the LF value did not change.
### 4.4.2. High-Frequency (HF) Power
Theoretically, stimulating the heart point in the ear, which lies in the territory of the vagus nerve, should increase parasympathetic activity, and HF, which is modulated by parasympathetic activity, should increase accordingly. Nevertheless, in our study the HF value changed, but the differences between the stages before, during, and after auricular acupressure were not statistically significant in either group (p>0.05, Table 4). Hayano and Yuda suggested that the HF value of HRV does not necessarily reflect cardiac parasympathetic functioning. When heart rate oscillations are observed through the autonomic nerves, the HF band is regulated by cardiac parasympathetic activity: HF is influenced by parasympathetic activity in the heart within the frequency range of 0.15–0.4 Hz, and RSA is considered a determinant of the HF component. Even if HRV in the HF band decreases or disappears, this does not mean that cardiac parasympathetic arrest or autonomic dysfunction is occurring; it can happen whenever the respiratory rate falls outside the HF range, as in slow breathing below 9 breaths/min (below 0.15 Hz) or deep, fast breathing above 24 breaths/min (above 0.4 Hz) [37]. Therefore, when the HF value is investigated to assess cardiac parasympathetic function, the respiratory parameters must be monitored during the survey [38, 39].

Compared with the other HRV frequency components, such as LF and VLF, HF is a weak clinical prognostic factor in short-term HRV measurements [40]. It has been found that HRV observed in the HF band can contain variability that is not necessarily mediated by the autonomic nerves [37]. This phenomenon is described by various terms such as complex HRV, erratic sinus rhythm, or heart rate fragmentation (HRF) [41]. It is a type of instability characterized by anomalous alternation between peaks in the RR time series even though the ECG shows sinus rhythm. The occurrence of HRF may confound the association between HF and cardiac parasympathetic function and distort the prognosis, so HF is rarely used in clinical evaluation [37]. However, low HF is often correlated with stress and anxiety disorders; improving the HF value therefore has positive implications for health [38].
### 4.4.3. Unwanted Reactions
One case of drowsiness was recorded in the experimental group and none in the control group. This can be regarded as both an unwanted reaction and a beneficial effect: if the method is applied to patients with insomnia at the right time, it may offer therapeutic benefits [15, 25, 26].
## 4.5. Limitations
First, the respiratory frequency was not recorded during the study, so the respiration-mediated parasympathetic effect on HF could not be accurately assessed. Second, this is the first study to use acupressure to investigate HRV; we therefore performed it on healthy volunteers to ensure safety and to monitor any dangerous cardiovascular events during acupressure. Results obtained in healthy people do not represent the ultimate goal of clinical application in cardiovascular patients, so further research will focus on patients with chronic cardiovascular disease or HRV-related conditions.
## 5. Conclusions
The results of the study show that HRV increased during stimulated acupressure at the heart acupoint of the left ear in healthy volunteers, with a good safety profile. This study is a first step in evaluating the safety of auricular acupressure and opens a direction for future traditional medicine studies on auricular therapy for patients with autonomic nervous disorders.
---

*Source: 1019029-2022-04-25.xml*

---

# Heart Rate Variability during Auricular Acupressure at Heart Point in Healthy Volunteers: A Pilot Study

**Authors:** Dieu-Thuong Thi Trinh; Que-Chi Thi Nguyen; Minh-Man Pham Bui; Van-Dan Nguyen; Khac-Minh Thai

**Journal:** Evidence-Based Complementary and Alternative Medicine

(2022)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1019029

---
## Abstract
Heart rate variability (HRV) is the variation in time between consecutive heartbeats. Increasing HRV may contribute to improving autonomic nervous system dysfunction. Acupuncture stimulation of the vagus plexus in the ear is considered a method that can improve HRV. In this pilot study, we examined 114 healthy volunteers at the Faculty of Traditional Medicine, University of Medicine and Pharmacy at Ho Chi Minh City, from January to May 2020. Over a 20-minute interval, participants were stimulated twice at the heart acupoint in the left ear with a Vaccaria (Semen Vaccariae) ear seed. The heart rate and HRV values were monitored every 5 minutes before, during, and after acupressure. Compared with the control group, HRV in the experimental group increased significantly in the ear-stimulated acupressure stages relative to the stages before and after auricular acupressure (p=0.01, p=0.04, p=0.04, and p=0.02), whereas the difference relative to the nonstimulated stages was not statistically significant (p=0.15, p=0.28). The changes in the other values, including SDNN (standard deviation of NN intervals), RMSSD (root mean square of successive RR interval differences), LF (low-frequency power), and HF (high-frequency power), were not statistically significant between groups at any stage (p>0.05). Based on these results, we can confirm an increase in HRV during auricular acupressure with stimulation at the heart acupoint of the left ear. This points to a direction for further studies on clinical application in patients with autonomic nervous disorders.
---
## Body
## 1. Introduction
The time interval between two consecutive heartbeats is called the cardiac interval, and the variation in each interval produces heart rate variability (HRV) [1, 2]. HRV is measured in milliseconds (ms). It is influenced by many factors, such as age, sex, physique, health status, frequent use of alcohol, tobacco, or certain drugs, physiological circadian rhythms, and the context of the measurement. HRV is therefore considered a proxy for the health status of the whole system [3]. Many studies have shown that chronically low HRV values are associated with sudden cardiac death, depression, and diabetic neuropathy, so improving HRV may contribute to improving these related diseases [4–6].

The HRV measurement standards were developed by the European Society of Cardiology (ESC) and the North American Society of Pacing and Electrophysiology (NASPE) in 1996 and remain the accepted standards today. Among them, the time-domain and frequency-domain methods with their HRV components are the ones most commonly applied in studies of HRV [7–11]. Through the HRV components, it is possible to assess the sympathetic and parasympathetic activities of the heart.

Besides the ECG, which is used as the gold standard for measuring HRV, the development of science and technology has produced many new methods of measuring HRV, such as photoplethysmography (PPG) through smartphones, smartwatches, ear clips, chest straps, or wrist straps. These methods allow more convenient and cost-effective HRV monitoring [12]. Studies have suggested that the PPG method is equivalent to the ECG, with a high correlation coefficient [13, 14]. The Kyto HRM-2511B, a compact wearable device clipped to the ear, allows simpler HRV measurement than the ECG [13]. While the ECG remains the gold standard, devices with PPG technology have become more popular.

In traditional medicine, acupuncture in the areas of the ear supplied by the vagus nerve (a part of the autonomic nervous system) is a method that affects HRV by increasing parasympathetic activity, contributing to a beneficial increase in HRV [9]. Accordingly, the middle of the ear cavity is considered to be the place where most of the vagus nerve fibers are located, and it corresponds to the heart acupoint [4]. This area is expected to be highly effective and of low risk, especially in the left ear. However, research on the effect on HRV of stimulating the heart acupoint in the ear alone is still limited [2]. Moreover, the existing studies involve needling acupoints in the ear, and no study has used an ear seed, a tiny device that stimulates acupoints by pressure without needles and that patients can apply by themselves.

Therefore, this study set out to evaluate how the use of an ear seed at the heart acupoint affects the autonomic nervous system through the HRV value. Along with the time-domain and frequency-domain measurements, we also examined whether any changes occur during the auricular acupressure process. Furthermore, this study recorded undesirable events during auricular acupressure at the heart acupoint. The study is expected to form the basis for using auricular acupressure to improve HRV and treat related diseases in further studies.
## 2. Materials and Methods
Participants were healthy volunteers living in Ho Chi Minh City. The study was approved by the Medical Ethics Council of the University of Medicine and Pharmacy at Ho Chi Minh City, and volunteers signed an informed consent form before the study. Participants were randomly assigned to two groups using GraphPad software version 9.1. Participants in the experimental group received auricular acupressure at the left heart acupoint, while the control group received placebo auricular acupressure, with the ear seed removed but the sticker kept attached at the left heart acupoint. The study was designed as a single-blinded pilot study: only the participants were blinded to their group assignment.

The sample size $n$ was calculated according to the formula

$$n = \frac{(z_{1-\beta} + z_{1-\alpha/2})^{2} \cdot \sigma^{2}}{d^{2}} \quad (1)$$

where $n$ is the sample size needed for the study, $z_{1-\beta} = 0.83$, and $z_{1-\alpha/2} = 1.96$. Following Clancy et al. [7], $d = 70.35$ and $\sigma = 178.65$. Allowing for a 10% expected loss, $n$ was calculated as 57 per group, so the total sample of the study was 114.
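As a quick check of the arithmetic in formula (1), the following minimal sketch reproduces the calculation in Python (the exact rounding convention for the 10% loss allowance is not stated in the text, so the adjustment step below is our assumption):

```python
import math

# Values reported in the text (from Clancy et al. [7]):
z_beta = 0.83    # z for the desired power
z_alpha = 1.96   # z for a two-sided alpha of 0.05
sigma = 178.65   # standard deviation
d = 70.35        # minimal detectable difference

# Formula (1): n = ((z_{1-beta} + z_{1-alpha/2})^2 * sigma^2) / d^2
n = ((z_beta + z_alpha) ** 2 * sigma ** 2) / d ** 2
print(f"n per group before loss adjustment: {n:.1f}")  # ~50.2

# Inflating for ~10% expected loss and rounding up gives roughly the
# 57 participants per group (114 in total) reported by the authors.
n_adjusted = math.ceil(n / (1 - 0.10))
print(f"n per group after 10% loss adjustment: {n_adjusted}")  # 56
```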
### 2.1. Inclusion Criteria
The inclusion criteria were healthy men and women aged between 20 and 29 years, with no history of cardiovascular disease, diabetes, or thyroid disease, and with vital signs within the normal range (regular pulse and heart rate; resting heart rate 60–100 beats/min; resting blood pressure from 90/60 mmHg to ≤140/90 mmHg; breathing rate 16 ± 3 breaths/min; temperature 36.6–37.5°C; and SpO2 ≥ 95%). All volunteers had a body mass index (BMI) from 18.5 to 23 kg/m2 and had no psychiatric stress problem on the day of acupressure (confirmed by a DASS-21 questionnaire stress score below 15 points).
### 2.2. Exclusion Criteria
Volunteers were excluded if their age was outside the range above or if they had used stimulants such as beer, alcohol, coffee, or tobacco within 24 hours before the study. Volunteers who had played sports within 2 hours before the study or who had skin injuries in the area of auricular acupressure were also excluded, as were women who were menstruating, pregnant, or breastfeeding and people who had used drugs affecting blood pressure or heart rate within the previous month.
### 2.3. Criteria to Stop Research
Participation was stopped for volunteers who wished to withdraw from the study or who showed symptoms of excessive parasympathetic stimulation, such as dizziness, nausea, vomiting, pain, or an allergic reaction at the stimulated area. These cases were recorded as unexpected events.
### 2.4. HRV Measurement
Values were monitored before, during, and after auricular acupressure using the Kyto HRM-2511B, a photoplethysmography device attached to the right earlobe of the participants. The monitored values included the heart rate; HRV (the variation in the time intervals between consecutive heartbeats, i.e., between two successive R-waves of the QRS complex on the electrocardiogram, the RR intervals, measured in milliseconds); the time-domain components SDNN (standard deviation of the RR intervals) and RMSSD (root mean square of successive RR interval differences); and the frequency-domain components LF (low-frequency power, 0.04–0.15 Hz) and HF (high-frequency power, 0.15–0.4 Hz).

For the auricular acupressure, we applied a sticker with a Vaccaria ear seed (experimental group) or a sticker without a seed (control group) at the heart acupoint on the left ear, located in the middle of the ear cavity, for 20 minutes with two episodes of stimulation. Each stimulation lasted 30 seconds at two acupressure movements per second, giving a total of 60 acupressure movements per stimulation.

HRV was monitored every 5 minutes. The measurement profile and measurement times (T1–T6) are shown schematically in Figure 1.

Figure 1
Study protocol. T1: before auricular acupressure, T2: auricular acupressure without stimulation, T3: the 1st auricular acupressure with stimulation in 30 sec, T4: auricular acupressure without stimulation, T5: the 2nd auricular acupressure with stimulation in 30 sec, and T6: after auricular acupressure.
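For readers unfamiliar with the time-domain indices defined above, the following is a minimal sketch of how SDNN and RMSSD are computed from a series of RR intervals (the five-beat series is illustrative only; the study used 5-minute recording windows):

```python
import numpy as np

def sdnn(rr_ms: np.ndarray) -> float:
    """Standard deviation of the (normal-to-normal) RR intervals, in ms."""
    return float(np.std(rr_ms, ddof=1))

def rmssd(rr_ms: np.ndarray) -> float:
    """Root mean square of successive RR interval differences, in ms."""
    return float(np.sqrt(np.mean(np.diff(rr_ms) ** 2)))

rr = np.array([820.0, 835.0, 810.0, 845.0, 830.0])  # toy RR series (ms)
print(f"SDNN  = {sdnn(rr):.1f} ms")
print(f"RMSSD = {rmssd(rr):.1f} ms")
```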
### 2.5. General Protocol
The study was conducted in a quiet room from 8:00 to 10:00 A.M. at 26 ± 1°C. Participants rested for 10 minutes, after which their pulse rate, heart rate, blood pressure, breathing rate, and SpO2 were measured. Participants did not speak and did not change posture during acupressure.
### 2.6. Statistical Analysis
Data were analyzed using SPSS version 22.0. The t-test was used to compare the baseline characteristics and the heart rate of the volunteers between groups at each stage. HRV and the HRV components (SDNN, RMSSD, LF, and HF) before, during, and after acupressure were compared within each group by the Wilcoxon signed-rank test and between the two research groups by the Mann-Whitney U test. Results were considered statistically significant when p<0.05.
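The analysis was run in SPSS; purely as an illustration of the two nonparametric tests named above, an equivalent check in Python with SciPy might look as follows (the HRV values here are simulated, not study data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
hrv_expt_t1 = rng.normal(53, 5, 57)  # simulated HRV (ms), experimental, T1
hrv_ctrl_t1 = rng.normal(57, 5, 57)  # simulated HRV (ms), control, T1
hrv_expt_t3 = hrv_expt_t1 + rng.normal(1, 2, 57)  # same subjects at T3

# Between-group comparison at one stage: Mann-Whitney U test.
_, p_between = stats.mannwhitneyu(hrv_expt_t1, hrv_ctrl_t1)

# Within-group comparison of two stages (paired): Wilcoxon signed-rank test.
_, p_within = stats.wilcoxon(hrv_expt_t1, hrv_expt_t3)

print(f"Mann-Whitney U: p = {p_between:.3f}")
print(f"Wilcoxon signed-rank: p = {p_within:.3f}")
```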
## 3. Results
### 3.1. General Characteristics of the Study Population
Table 1 shows the general characteristics of the study population in each group at the beginning of the experiment. The anthropometric and hemodynamic data were within their normal ranges and did not show significant differences between groups; in particular, there were no significant differences in sex or age between the experimental and control groups (p>0.05), and the difference in baseline characteristics between the two groups was not significant (t-test, p>0.05, Table 1). The pulse, heart rate, blood pressure, respiratory rate, SpO2, and BMI of all participants were within normal values, as required for the safety of the participants.

Table 1

Anthropometric characteristics of the subjects.

| Characteristics | Experimental group (n = 57) | Control group (n = 57) | p value |
|---|---|---|---|
| Gender, male (n, %) | 27 (47.37) | 28 (49.12) | 0.85a |
| Gender, female (n, %) | 30 (52.63) | 29 (50.88) | |
| Age (years) | 25.54 ± 2.80 | 25.12 ± 2.63 | 0.41b |
| Pulse (bpm) | 73.89 ± 8.67 | 71.88 ± 6.67 | 0.14c |
| HR (bpm) | 73.89 ± 8.67 | 71.88 ± 6.67 | 0.14c |
| SBP (mmHg) | 109.18 ± 10.99 | 107.39 ± 8.45 | 0.33c |
| DBP (mmHg) | 72.91 ± 5.56 | 72.11 ± 4.80 | 0.41c |
| Breath (bpm) | 16.26 ± 1.99 | 16.21 ± 1.86 | 0.88c |
| SpO2 (%) | 97.09 ± 1.43 | 96.93 ± 1.45 | 0.56c |
| BMI (kg/m2) | 20.28 ± 1.54 | 20.67 ± 1.51 | 0.17c |

Note: a, Fisher's exact test; b, Mann-Whitney U test; c, t-test. Values are mean ± SD except gender. HR = heart rate, SBP = systolic blood pressure, DBP = diastolic blood pressure, BMI = body mass index, SD = standard deviation.

When the values of each index, including pulse, heart rate, blood pressure, respiratory rate, SpO2, and BMI, were compared between groups, none of the differences were statistically significant (p>0.05). This confirms the random distribution of participants into the two groups, ensuring accuracy and objectivity when comparing them.
### 3.2. Heart Rate and HRV in Each Stage of the Study
The heart rates at each stage in the two study groups are shown in Table 2. There was no statistically significant difference in heart rate between groups at any of the stages T1 to T6 (p>0.05, t-test).

Table 2

Heart rate behavior between groups.

| Stage | Experimental group (n = 57) | Control group (n = 57) | p value |
|---|---|---|---|
| T1 | 73.89 ± 8.67 | 71.88 ± 6.67 | 0.17 |
| T2 | 73.49 ± 9.21 | 72.14 ± 6.98 | 0.38 |
| T3 | 70.53 ± 9.11 | 71.07 ± 7.16 | 0.72 |
| T4 | 73.67 ± 9.37 | 71.49 ± 7.13 | 0.17 |
| T5 | 70.91 ± 8.66 | 71.72 ± 6.69 | 0.58 |
| T6 | 73.16 ± 8.79 | 71.87 ± 7.29 | 0.40 |

Heart rate in bpm, mean ± SD. SD: standard deviation.

Table 3 shows the HRV at each stage in the two groups. There was a statistically significant difference in HRV between groups at stages T1, T2, T4, and T6 (p<0.05, Mann-Whitney U test) and no statistically significant difference at stages T3 and T5 (p>0.05, Mann-Whitney U test).

Table 3

Heart rate variability in each stage between groups.

| Stage | Experimental group (n = 57) | Control group (n = 57) | p value |
|---|---|---|---|
| T1 | 53.00 (49.00, 57.50) | 57.00 (51.00, 61.50) | 0.01 |
| T2 | 53.00 (47.00, 55.00) | 55.00 (50.00, 63.00) | 0.04 |
| T3 | 54.00 (50.00, 58.00) | 57.00 (50.00, 63.00) | 0.15 |
| T4 | 52.00 (47.50, 55.00) | 55.00 (48.00, 61.50) | 0.04 |
| T5 | 54.00 (50.50, 59.50) | 56.00 (51.00, 63.00) | 0.28 |
| T6 | 52.00 (49.00, 57.50) | 56.00 (50.00, 63.00) | 0.02 |

HRV in ms, median (IQR, 25th–75th percentile). IQR: interquartile range.

In the experimental group, the heart rate in the stages of auricular acupressure with stimulation was lower than before acupressure, after acupressure, and during the stages without stimulation (Figure 2(a)). HRV in the stages with stimulation was greater than before and after acupressure, but was not statistically significantly different from HRV in the stages without stimulation (Figure 2(b)).

Figure 2

Heart rate and HRV in each stage. (a) Heart rate. (b) HRV.

In the control group, the differences in HRV between stages were not statistically significant (Figure 2(b)).
### 3.3. Auricular Acupressure at Heart Acupoint Alters Elements of HRV
The variation of the time-domain and frequency-domain indices is shown in Table 4. There were no significant differences in SDNN or RMSSD between groups, and no significant differences in LF or HF between groups at any stage (Mann-Whitney U test, p>0.05).

Table 4

The variation of the time-domain and frequency-domain indices, median (IQR), experimental group vs. control group (n = 57 each).

| Stage | SDNN, exp. | SDNN, ctrl. | p | RMSSD, exp. | RMSSD, ctrl. | p | LF, exp. | LF, ctrl. | p | HF, exp. | HF, ctrl. | p |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| T1 | 42.97 (33.19, 53.52) | 42.71 (34.10, 55.84) | 0.90 | 30.85 (19.51, 51.33) | 32.41 (22.43, 46.10) | 0.90 | 321.23 (167.65, 544.03) | 290.37 (168.71, 680.75) | 0.90 | 292.81 (131.04, 568.82) | 281.96 (147.09, 519.95) | 0.90 |
| T2 | 43.65 (27.68, 53.84) | 51.02 (34.89, 58.44) | 0.65 | 29.11 (18.35, 49.64) | 28.82 (21.70, 50.53) | 0.65 | 292.14 (184.77, 562.70) | 336.65 (169.43, 603.96) | 0.65 | 278.62 (141.36, 545.21) | 272.03 (161.88, 434.44) | 0.65 |
| T3 | 42.97 (26.65, 54.31) | 44.81 (33.14, 54.79) | 0.98 | 32.56 (19.42, 49.80) | 34.47 (24.85, 53.97) | 0.98 | 301.54 (176.90, 585.02) | 318.40 (156.40, 634.21) | 0.98 | 223.11 (113.14, 516.47) | 270.55 (176.56, 496.28) | 0.98 |
| T4 | 45.30 (37.71, 57.54) | 46.30 (40.15, 54.43) | 0.82 | 31.24 (19.93, 51.26) | 33.32 (22.10, 50.75) | 0.82 | 308.70 (173.52, 593.46) | 322.61 (174.94, 659.84) | 0.82 | 269.36 (139.44, 554.53) | 279.77 (186.05, 489.68) | 0.82 |
| T5 | 42.04 (29.60, 51.60) | 46.16 (35.37, 54.62) | 0.35 | 31.94 (17.24, 54.68) | 31.28 (25.75, 50.46) | 0.35 | 283.66 (146.31, 511.77) | 364.90 (173.39, 609.40) | 0.35 | 287.66 (153.18, 501.74) | 316.51 (215.40, 555.57) | 0.35 |
| T6 | 45.75 (30.66, 55.94) | 47.07 (37.20, 61.61) | 0.29 | 27.52 (20.86, 47.27) | 28.39 (21.66, 49.96) | 0.29 | 255.83 (162.03, 513.95) | 325.11 (161.59, 681.28) | 0.29 | 252.31 (115.47, 597.09) | 343.77 (219.05, 631.33) | 0.29 |

IQR: interquartile range.
## 4. Discussion
With the aim of investigating how the use of an ear seed at the heart acupoint affects the autonomic nervous system through the HR, HRV, SDNN, RMSSD, LF, and HF values of 114 volunteers, we suggest that auricular acupressure at a point within the distribution of the vagus nerve can have a significant effect on the autonomic cardiovascular system in healthy people.
### 4.1. Heart Rate
The first value investigated when stimulating the heart auricular acupoint was the heart rate. In this study, the difference in heart rate between the experimental group and the control group was not statistically significant (Table 2). This result is consistent with the studies of Gao et al. on healthy volunteers, in which stimulated acupressure of the vagus nerve in the ear reduced the heart rate [8, 15–17]. The decrease in heart rate is a precedent for further research on heart rate variability.
### 4.2. HRV
In the experimental group, the HRV value increased in the stage of auricular acupressure with stimulation compared to the other stages (Table 3). This result is similar to the study of Clancy et al. [7, 8]. When compared with the control group, however, the increase in HRV in the experimental group was statistically significant only in the stimulation stages relative to the stages before and after acupressure; the difference relative to the nonstimulated acupressure stages was not statistically significant (p>0.05, Figure 2(b)). Meanwhile, the heart rate decreased in both the nonstimulated and the stimulated acupressure stages compared with the stages before and after acupressure. By the definition of HRV, a lower heart rate leaves more room for variation between consecutive heartbeats and should therefore produce a higher HRV. In our study, however, although the heart rate decreased, HRV did not differ between the stimulated and nonstimulated acupressure stages. The question is therefore whether there is any conflict between the heart rate and HRV values in our study.

It is known that HRV is linked to heart rate not only physiologically, via autonomic nerve activity, but also mathematically [18]. Experimental studies have shown that at least part of the HRV value is influenced by an intrinsic factor of the sinus node, namely, the cycle length of the myocardial cells (the RR period), which has a nonlinear relationship with the neurotransmitter concentrations at the sinus node [19, 20]. Consequently, at the same intensity of parasympathetic activity, a longer RR interval leads to a higher HRV [20, 21]. Therefore, although parasympathetic activity lowers the heart rate, the change in HRV may remain statistically insignificant, because HRV also depends on the neurotransmitters affecting the length of the RR interval between cardiac cycles.

The relationship between the RR interval and heart rate described in the study by Sacha shows that, at the same heart rate, the differences between RR intervals need not be equal, leading to different HRV values [22]. This assumption may partly explain the mismatch between the reduction in heart rate and the increase in HRV observed in our study.

The increase in HRV when stimulating the heart auricular acupoint can be a potential approach in clinical practice. Previous studies have shown that stimulating the vagus nerve in the ear to increase HRV has an antiarrhythmic effect and can potentially reduce the recurrence of atrial fibrillation when combined with an antiarrhythmic drug [23, 24]. Therefore, future studies on the influence of auricular acupuncture at heart acupoints in patients with atrial fibrillation are particularly promising.
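To make the mathematical link above concrete, the following minimal sketch (illustrative numbers only, not study data) shows that the same relative beat-to-beat modulation yields a larger HRV in absolute milliseconds when the mean RR interval is longer, that is, when the heart rate is lower:

```python
import numpy as np

beats = np.linspace(0, 6 * np.pi, 120)
modulation = 1 + 0.02 * np.sin(beats)  # same 2% beat-to-beat modulation

rr_fast = 750 * modulation   # mean RR 750 ms  -> heart rate ~80 bpm
rr_slow = 1000 * modulation  # mean RR 1000 ms -> heart rate ~60 bpm

# Identical relative variability, but the absolute spread (in ms) scales
# with the mean RR interval, so the slower heart shows the larger "HRV".
print(f"SD at ~80 bpm: {rr_fast.std():.1f} ms")   # ~10.6 ms
print(f"SD at ~60 bpm: {rr_slow.std():.1f} ms")   # ~14.1 ms
```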
### 4.3. The Variation of Time Domain
#### 4.3.1. SDNN
The SDNN is the standard deviation of normal sinus (NN) intervals, measured in milliseconds (ms). Over 5-minute recordings, SDNN values are mainly influenced by parasympathetically mediated respiratory sinus arrhythmia (RSA) [3]. In our study, the change in SDNN was not statistically significant (p>0.05) across the study stages in either the experimental or the control group. By contrast, in the study of Boehmer et al. [15], SDNN increased after auricular acupuncture. To clarify this difference, it is important to compare the inclusion criteria and methods of the two studies. In Boehmer's study, the participants were young men (23 ± 2 years old) who had regularly engaged in moderate-intensity physical activity over the preceding 12 months (7 ± 3 hours/week). During that study, participants received auricular acupressure at the heart acupoint with HRV measured in the supine position, and then switched to a standing position for a second HRV measurement. It can therefore be proposed that, in Boehmer's study, a baroreceptor reflex occurred in these highly sensitive participants, leading to an increase in RSA [25], and an increase in RSA leads to an increase in SDNN.

The SDNN is considered the "gold standard" in cardiovascular risk stratification, especially in 24-hour recordings [26], and its value predicts morbidity and mortality. The study by Kleiger et al., based on 24-hour ECG monitoring and HRV analysis, classified cardiovascular patients with SDNN values below 50 ms as unhealthy, values of 50–100 ms as compromised health, and values over 100 ms as healthy. Poststroke patients with SDNN values above 100 ms had a 5.3-fold lower risk of death than patients with values below 50 ms [27]. Whether increasing SDNN can reduce mortality risk is still being studied.
#### 4.3.2. RMSSD
When surveying RMSSD through each stage, we noted that the differences in RMSSD were not statistically significant (p>0.05) between the two groups, which was also observed by Boehmer et al. [15]. Among the time-domain HRV indices, RMSSD is more strongly affected by parasympathetic activity than SDNN is. Compared with the frequency-domain indices, RMSSD is correlated with HF but is less affected by respiratory frequency than HF. RMSSD can therefore be regarded as the main time-domain index for estimating changes in parasympathetic activity through HRV [28]. In our study, however, parasympathetic activity was expected to increase, yet the variation in RMSSD was not statistically significant. Although RMSSD reflects parasympathetic activity in the heart, it is an indirect measure derived from the beat-to-beat alteration of the R peaks produced by the sinus node, and this activity is also influenced by receptors on the sinus node cells [29].
### 4.4. The Frequency-Domain Spectral Analysis
#### 4.4.1. Low-Frequency (LF) Power
In our study, when the LF value is compared across the stages before, during, and after acupressure, the differences are not statistically significant (p>0.05) in either the experimental or the control group (Table 4). This result is similar to the studies of Shen et al. [24] and Lee et al. [10], but differs from that of Gao et al. [7, 8, 15]. In Gao's study, the LF value increased both when the heart point was stimulated with electroacupuncture (an electric vibrating pen) and with auricular needling (stimulating the acupuncture point with a needle).

Physiologically, stimulation of the vagus nerve increases the activation of baroreceptors and thereby activates the baroreflex. Many studies have shown that the LF value reflects the activity of the sympathetic nervous system and of the efferent parasympathetic nervous system (A and C fibers) related to the action of baroreceptors [30–33]. This explains Gao's results, where stimulation of the vagus nerve in the ear increased the activation of baroreceptors, leading to an increase in LF.

Moreover, Stauss suggested that when the vagus nerve in the ear is stimulated, the hemodynamic changes depend on the number of stimulation sites and on the stimulation parameters (potential, frequency, pulse length, and current direction), which lead to different changes in heart rate, blood pressure, and baroreflex [34]. In our study, we used Vaccaria (Semen Vaccariae) ear seeds to perform auricular acupressure, so the stimulation was weaker than in Gao's study, which used electroacupuncture. The lower sensitivity and weaker baroreflex therefore likely explain why the change in the LF value was not statistically significant.

The baroreflex plays an important role in hemodynamic stability and cardiovascular protection and is also a strong prognostic factor in cardiovascular diseases such as hypertension and chronic heart failure [35]. LF is considered an indicator of baroreceptor reflex sensitivity; measuring the LF value is therefore a noninvasive way to assess baroreceptor sensitivity [36]. When the heart point was stimulated with a Vaccaria ear seed, the LF value did not change.
#### 4.4.2. High-Frequency (HF) Power
Theoretically, stimulating the heart point in the ear, which lies in the territory of the vagus nerve, should increase parasympathetic activity, and HF, which is modulated by parasympathetic activity, should increase accordingly. Nevertheless, in our study the HF value changed, but the differences between the stages before, during, and after auricular acupressure were not statistically significant in either group (p>0.05, Table 4). Hayano and Yuda suggested that the HF value of HRV does not necessarily reflect cardiac parasympathetic functioning. When heart rate oscillations are observed through the autonomic nerves, the HF band is regulated by cardiac parasympathetic activity: HF is influenced by parasympathetic activity in the heart within the frequency range of 0.15–0.4 Hz, and RSA is considered a determinant of the HF component. Even if HRV in the HF band decreases or disappears, this does not mean that cardiac parasympathetic arrest or autonomic dysfunction is occurring; it can happen whenever the respiratory rate falls outside the HF range, as in slow breathing below 9 breaths/min (below 0.15 Hz) or deep, fast breathing above 24 breaths/min (above 0.4 Hz) [37]. Therefore, when the HF value is investigated to assess cardiac parasympathetic function, the respiratory parameters must be monitored during the survey [38, 39].

Compared with the other HRV frequency components, such as LF and VLF, HF is a weak clinical prognostic factor in short-term HRV measurements [40]. It has been found that HRV observed in the HF band can contain variability that is not necessarily mediated by the autonomic nerves [37]. This phenomenon is described by various terms such as complex HRV, erratic sinus rhythm, or heart rate fragmentation (HRF) [41]. It is a type of instability characterized by anomalous alternation between peaks in the RR time series even though the ECG shows sinus rhythm. The occurrence of HRF may confound the association between HF and cardiac parasympathetic function and distort the prognosis, so HF is rarely used in clinical evaluation [37]. However, low HF is often correlated with stress and anxiety disorders; improving the HF value therefore has positive implications for health [38].
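For reference, the LF and HF values discussed here are band powers of the RR (tachogram) spectrum. The following minimal sketch shows one common way to estimate them with Welch's method on an evenly resampled RR series; the data are synthetic, and the resampling rate and windowing are our assumptions, not the study's processing pipeline:

```python
import numpy as np
from scipy.signal import welch

fs = 4.0  # Hz; RR tachogram resampled to an even grid (a common choice)
t = np.arange(0, 300, 1 / fs)  # one 5-minute window

# Synthetic tachogram (ms): a 0.1 Hz LF oscillation plus a 0.25 Hz
# respiratory (HF) oscillation around a mean RR of 820 ms.
rr = 820 + 20 * np.sin(2 * np.pi * 0.10 * t) + 15 * np.sin(2 * np.pi * 0.25 * t)

f, psd = welch(rr - rr.mean(), fs=fs, nperseg=512)

def band_power(f, psd, lo, hi):
    band = (f >= lo) & (f < hi)
    return np.trapz(psd[band], f[band])  # integrate PSD over the band (ms^2)

lf = band_power(f, psd, 0.04, 0.15)  # LF band: 0.04-0.15 Hz
hf = band_power(f, psd, 0.15, 0.40)  # HF band: 0.15-0.40 Hz
print(f"LF = {lf:.0f} ms^2, HF = {hf:.0f} ms^2")
```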
#### 4.4.3. Unwanted Reactions
One case of drowsiness was recorded in the experimental group and none in the control group. This can be regarded as both an unwanted reaction and a beneficial effect: if the method is applied to patients with insomnia at the right time, it may offer therapeutic benefits [15, 25, 26].
### 4.5. Limitations
First, the respiratory frequency was not recorded during the study, so the respiration-mediated parasympathetic effect on HF could not be accurately assessed. Second, this is the first study to use acupressure to investigate HRV; we therefore performed it on healthy volunteers to ensure safety and to monitor any dangerous cardiovascular events during acupressure. Results obtained in healthy people do not represent the ultimate goal of clinical application in cardiovascular patients, so further research will focus on patients with chronic cardiovascular disease or HRV-related conditions.
## 4.1. Heart Rate
The first value to be investigated when stimulating the heart auricular acupoint in the ears was the heart rate. In this study, the heart rate in the experimental group was not statistically significant compared with the control group (Table2). This result is similar to the study of Gao et al. on healthy volunteers. When performing stimulated acupressure of the vagus nerve in the ear, the heart rate was reduced [8, 15–17]. The decrease in heart rate is a precedent for further research on heart rate variability.
## 4.2. HRV
As for the experimental group, the HRV value increased in the stage of auricular acupressure with stimulation compared to the other stages (Table3). This result is similar to the study of Clancy et al. [7, 8]. However, when compared to the control group, the increased HRV in the experimental group is only statistically significant in the stimulation stage compared with that of the before and after acupressure. The difference was not statistically significant compared to the nonstimulated acupressure stage p>0.05 (Figure 2(b)). Meanwhile, the heart rate decreased in both the nonstimulated and stimulated acupressure stage compared with that before and after the acupressure. According to the HRV definition, when the heart rate drops, it creates more space for variation between consecutive heart rates, leading to higher HRV. However, in our study, when the heart rate decreased, there was no difference in the stimulated acupressure stage compared with the nonstimulated acupressure stage in HRV. The question is as follows: is there any conflict between heart rate and HRV values in our study?It is known that HRV is not only physiologically linked to heart rate via autonomic nerve activity but also mathematically linked [18]. In experimental studies, it has been shown that at least one part of the HRV value is influenced by the intrinsic factor of the sinus node, namely, the cyclic length of myocardial cells or the period of RR that has a nonlinear relationship with neurotransmitter concentrations at the sinus node [19, 20]. This leads to the situation that with the same intensity of parasympathetic activity, a longer interval between RRs leads to a higher HRV [20, 21]. Therefore, although parasympathetic activity lowers the heart rate, HRV changes may be statistically insignificant because HRV also depends on neurotransmitters affecting the length of RR interval between heart cycles.The relationship between the RR distance and heart rate mentioned in the SACHA study shows that when the heart rate is of the same value, the difference between the RR intervals may not be equal, leading to different HRV [22]. This assumption may partly explain the mismatch between the reduction in heart rate and the increase in HRV observed in our study.The increase in HRV when stimulating heart auricular acupoint can be a potential approach in clinical practice. Previous studies have shown that stimulating the vagus nerve in the ear with an increase in HRV will have an antiarrhythmic effect, potentially reducing the recurrence of atrial fibrillation when combined with an antiarrhythmic drug [23, 24]. Therefore, future studies to determine the influence of auricular acupuncture on heart acupoints in patients with atrial fibrillation are particularly promising.
## 4.3. The Variation of Time Domain
### 4.3.1. SDNN
The SDNN is the standard deviation of normal sinus rhythms, measured in milliseconds (ms). In 5 minutes, the SDNN values were mainly influenced by parasympathetic-mediated respiratory sinus arrhythmia (RSA) [3].In our study, the SDNN change was not statistically significantp>0.05 at the study stages and in both the experimental group and the control group. Nevertheless, in the study of Boehmer et al. [15], SDNN increased after performing auricular acupuncture. To clarify this difference, it is important to compare the inclusion criteria and methods of the two studies. In the study of Boehmer, the participants were young men (23 ± 2 years old), who were regularly engaged in moderate-intensity physical activity over the past 12 months (7 ± 3 hours/week). When conducting the study, participants were performed auricular acupressure at the heart acupoint and measured HRV in the supine position and then switched to a standing position to measure HRV. Thereby, it can be proposed that, in Andreas’ study, a baroreceptor reflex occurred in participants with high sensitivity, leading to an increase in RSA [25]. An increase in RSA leads to an increase in SDNN.The SDNN is considered the “gold standard” in cardiovascular risk stratification, especially in the 24-hour recordings [26]. The SDNN value predicts morbidity and mortality. The study by Kleiger RE et al. based on 24 hours of ECG monitoring and HRV analysis showed that cardiovascular patients with SDNN values less than 50 ms are classified as unhealthy, 50–100 ms corresponds to harmed health, and over 100 ms is classified as healthy. Poststroke patients with SDNN values above 100 ms had a 5.3-fold lower risk of death compared with patients with values below 50 ms [27]. Whether increasing SDNN can reduce mortality risk is being studied.
### 4.3.2. RMSSD
When surveying RMSSD across the stages, we noted that the RMSSD differences were not statistically significant (p>0.05) in either group, which was also observed by Boehmer et al. [15]. Among the time-domain HRV indices, RMSSD is affected by parasympathetic activity more than SDNN is. Compared with the frequency-domain indices, RMSSD correlates with HF, yet RMSSD is less affected by respiratory frequency than HF is. It can therefore be suggested that RMSSD is the main time-domain index for estimating changes in parasympathetic activity through HRV [28]. However, in our study, although parasympathetic activity was expected to increase, the variation in RMSSD was not statistically significant. The reason is that, although RMSSD reflects parasympathetic activity in the heart, it is an indirect result: its major determinant is the alteration of the R peaks produced by the sinus node, whose activity is also influenced by receptors on the sinus node cells [29].
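For reference, the two time-domain indices discussed above can be computed from an RR-interval series as in the minimal Python sketch below (the rr values are illustrative, not study data): SDNN is the standard deviation of the NN intervals, and RMSSD is the root mean square of successive differences.

```python
import numpy as np

def sdnn(rr_ms):
    """SDNN: standard deviation of normal-to-normal (NN) intervals, in ms."""
    return float(np.std(rr_ms, ddof=1))

def rmssd(rr_ms):
    """RMSSD: root mean square of successive RR-interval differences, in ms."""
    diffs = np.diff(rr_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

rr = np.array([812.0, 845.0, 790.0, 804.0, 830.0, 818.0, 797.0])  # example RR series (ms)
print(f"SDNN  = {sdnn(rr):.1f} ms")
print(f"RMSSD = {rmssd(rr):.1f} ms")
```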
## 4.4. The Frequency-Domain Spectral Analysis
### 4.4.1. Low-Frequency (LF) Power
In our study, comparing the LF value at each stage (before, during, and after acupressure), the difference was not statistically significant (p>0.05) in either the experimental or the control group (Table 4). This result is similar to the studies of Shen et al. [24] and Lee et al. [10], but different from that of Gao et al. [7, 8, 15]. In the study of Gao, the LF value increased both in the stage of stimulating the heart point with electroacupuncture (electric vibrating pen) and in that of auricular needling (stimulating the acupuncture point with a needle).

Physiologically, stimulation of the vagus nerve increases the activation of baroreceptors and thereby activates the baroreflex. Many studies have shown that the LF value reflects the activity of the sympathetic nervous system and of the efferent parasympathetic nervous system (A and C fibers), respectively, related to the action of baroreceptors [30–33]. This explains the results of Gao, where stimulation of the vagus nerve in the ear would increase the activation of baroreceptors, leading to an increase in LF. Moreover, Stauss suggested that, when stimulating the vagus nerve in the ear, the hemodynamic changes depend on the number of stimulation sites and the stimulation parameters (potential, frequency, length, pulse, and current direction), which lead to different changes in heart rate, blood pressure, and baroreflex [34]. In our study, we used Semen seeds to perform auricular acupressure, so the stimulation was weaker than that in Gao's study, which used electroacupuncture. Lower sensitivity and a weaker baroreflex therefore lead to a change in LF value that is not statistically significant.

The baroreflex plays an important role in hemodynamic stability and cardiovascular protection and is also a strong prognostic factor in some cardiovascular diseases, such as hypertension and chronic heart failure [35]. LF is considered an indicator of the sensitivity of the baroreceptor reflex; therefore, measuring the LF value is a noninvasive method of determining baroreceptor sensitivity [36]. When the heart point was stimulated with the Semen seed, the LF value did not change.
### 4.4.2. High-Frequency (HF) Power
Theoretically, stimulating the heart point in the ear, which belongs to the territory of the vagus nerve, will increase parasympathetic activity. Accordingly, HF, which is modulated by parasympathetic activity, should increase. Nevertheless, in our study the HF value changed, but the differences were not statistically significant between the stages before, during, and after auricular acupressure in both groups (p>0.05) (Table 4). Hayano and Yuda suggested that the HF value of HRV does not necessarily reflect cardiac parasympathetic functioning. When heart rate oscillation is observed through the autonomic nerves, the HF band is regulated by cardiac parasympathetic activity; HF is influenced by parasympathetic activity in the heart in the frequency range of 0.15–0.4 Hz. In addition, RSA is considered a determinant of the HF component. Even if HRV in the HF band decreases or disappears, it does not mean that cardiac parasympathetic arrest or autonomic dysfunction is occurring. This happens when the respiratory rate is outside the HF range, such as during slow breathing (HF < 0.15 Hz, 9 breaths/min) or during deep or fast breathing (HF > 0.4 Hz, 24 breaths/min) [37]. Therefore, when investigating the HF value to assess cardiac parasympathetic function, it is necessary to monitor the respiratory parameters during the survey [38, 39].

Compared with the other HRV frequency components, such as LF and VLF, the HF component is a weak clinical prognostic factor in short-term HRV measurements [40]. It has been found that HRV observed in the HF band contains data that are not necessarily mediated by autonomic nerves [37]. This phenomenon is described by various terms, such as complex HRV, erratic sinus rhythm, or heart rate fragmentation (HRF) [41]. This is a type of instability characterized by an anomalous appearance of peaks in the RR time series even though the ECG shows sinus rhythm. The occurrence of HRF may confound the association between HF and cardiac parasympathetic function and distort the prognosis, so HF is rarely used in clinical evaluation [37]. However, low HF often correlates with stress and anxiety disorders; therefore, improving the HF value has positive implications for health [38].
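For completeness, the sketch below outlines the standard frequency-domain pipeline behind the LF and HF values discussed in this section, under the band definitions quoted above (HF 0.15–0.4 Hz; LF conventionally 0.04–0.15 Hz). The 4 Hz resampling rate, the synthetic tachogram, and the function name are our illustrative choices, not the study's recording protocol.

```python
import numpy as np
from scipy.signal import welch

def lf_hf_power(rr_ms, fs=4.0):
    """Estimate LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) power from RR intervals (ms)."""
    rr_s = np.asarray(rr_ms, dtype=float) / 1000.0
    beat_times = np.cumsum(rr_s)                        # time of each beat (s)
    t = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    tachogram = np.interp(t, beat_times, rr_s)          # evenly resampled RR series
    f, psd = welch(tachogram - tachogram.mean(), fs=fs,
                   nperseg=min(256, len(tachogram)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    return np.trapz(psd[lf_band], f[lf_band]), np.trapz(psd[hf_band], f[hf_band])

# Synthetic ~4-minute tachogram with a 0.25 Hz (respiratory-like) oscillation.
rng = np.random.default_rng(0)
t_beats = np.cumsum(np.full(300, 0.8))                  # ~0.8 s between beats
rr = 800 + 40 * np.sin(2 * np.pi * 0.25 * t_beats) + rng.normal(0, 5, 300)
lf, hf = lf_hf_power(rr)
print(f"LF power = {lf:.2e} s^2, HF power = {hf:.2e} s^2")
```

Because the injected oscillation sits at 0.25 Hz, the HF band dominates in this synthetic example, mirroring how respiratory sinus arrhythmia drives the HF component.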
### 4.4.3. Unwanted Reactions
One case of drowsiness was recorded in the experimental group and none in the control group. This can be regarded either as an unwanted reaction or as a beneficial effect: if this method were applied to patients with insomnia at the right time, it would have therapeutic benefits [15, 25, 26].
## 4.5. Limitations
First, the respiratory frequency was not recorded during the study, so the parasympathetically mediated effect on HF could not be accurately assessed. Second, this is the first study to use acupressure to investigate HRV; hence, we performed it on healthy volunteers to ensure safety and to monitor possible dangerous cardiovascular events during acupressure. Results obtained in healthy people do not represent the target of clinical application, namely cardiovascular patients. Therefore, further research will focus on patients with chronic cardiovascular or HRV-related diseases.
## 5. Conclusions
The results of the study show that the HRV value increased with stimulated acupressure at the heart acupoint of the left ear in healthy volunteers, with a high level of safety. This study is a first step toward evaluating the safety of auricular acupressure, opening a direction for future traditional medicine studies on auricular therapy for patients with autonomic nervous disorders.
---
*Source: 1019029-2022-04-25.xml* | 2022 |
# Severe Metabolic Acidemia in a Patient with Aleukemic Leukemia
**Authors:** Moutaz Ghrewati; Faiza Manji; Varun Modi; Chandra Chandran; Michael Maroules
**Journal:** Case Reports in Nephrology
(2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1019034
---
## Abstract
Malignancy-associated lactic acidosis is a rare metabolic complication that may accompany various types of malignancies. To date, most reported cases are associated with hematologic malignancies (lymphoma and leukemia). Many theories have been proposed to explain the pathophysiology of lactic acidosis in malignancies. We report an unusual case of a 62-year-old female who presented with a complaint of generalized weakness. The patient was found to have pancytopenia and metabolic acidosis with an anion gap secondary to lactic acid, in addition to non-anion gap acidosis (NAGA). The lactic acidosis resolved only after initiation of chemotherapy, as she was diagnosed with B-cell acute lymphoblastic leukemia. Our patient also had a coexistent renal tubular acidosis (RTA) with enlarged kidneys. The kidney size also decreased with chemotherapy. Our case is unique in combining aleukemic leukemia with both anion gap and non-anion gap acidosis. Lactic acidosis has many different causes; although rare, hematologic malignancies should be included in the differential diagnosis regardless of cell counts or tumor burden.
---
## Body
## 1. Introduction
Lactic acidosis is classified, based on tissue perfusion and oxygenation, into type A and type B. Type A occurs when there is a marked decrease in oxygen delivery to tissues. On the other hand, type B lactic acidosis occurs in the presence of sufficient oxygen delivery to tissues, with the main causes being malignancy, diabetes mellitus, drugs, hepatic failure, and renal failure [1]. Lactic acidosis has been reported in many cases of leukemia in association with an elevated white blood cell count. However, lactic acidosis can still occur even when leukemia presents with a low white blood cell count, a condition known as aleukemic leukemia [2]. We report a case of B-cell acute lymphoblastic leukemia (ALL) with pancytopenia and lactic acidosis that responded only to chemotherapy. The patient also had an associated RTA due to leukemic infiltrates in the kidneys.
## 2. Case Report
A 62-year-old female with a past medical history of anemia presented with a complaint of weakness and dizziness that had started a week prior to admission, associated with more than 20 lbs. of weight loss over 1 year. Upon admission, no specific clinical findings were noted except for reddish annular spots on the right lower extremity. Blood pressure was 169/72 mmHg; pulse was 102 bpm; respiratory rate was 18 breaths/minute; temperature was 98.3 F; pulse oximetry was 100% on room air. Initial laboratory data are given in Table 1.

Table 1: Initial blood work results.
| Name of test | Reading | Reference range |
| --- | --- | --- |
| VBG pH | 7.24 | 7.36–7.44 |
| VBG PCO2 | 26 mmHg | 36–44 |
| VBG HCO3 | 11.1 mmol/L | 22–66 |
| VBG base excess | -15.5 mmol/L | -2 to 3 |
| Lactic acid | 12.3 mmol/L | 0.5–2.2 |
| WBC | 2.3 K/mm³ | 4.5–11 |
| HGB | 6.4 g/dl | 12–16 |
| HCT | 17.3% | 36–42 |
| PLTs | 77 K/mm³ | 140–440 |
| MCV | 124.4 µm³ | 80–100 |
| RDW | 16.2% | 0.5–16.5 |
| Segs | 33% | 36–75 |
| Lymphs | 62% | 24–44 |
| Atypical lymphs | 1% | 0–7 |
| Monocytes | 2% | 4–10 |
| Eosinophils | 1% | 0–5 |
| Basophils | 1% | 0–2 |
| Retic count | 4.9% | 0.5–2 |
| PT | 13.8 sec | 12.2–14.9 |
| INR | 1.1 | 1 |
| PTT | 28.2 sec | 21.3–35.1 |
| Na+ | 141 mEq/L | 135–145 |
| K+ | 3.7 mEq/L | 3.5–5 |
| Chloride | 109 mEq/L | 98–107 |
| CO2 | 11 mEq/L | 21–31 |
| Blood glucose | 101 mg/dl | 70–105 |
| BUN | 23 mg/dl | 7–23 |
| Creatinine | 1.18 mg/dl | 0.60–1.30 |
| Calcium | 8.8 mg/dl | 8.6–10.3 |
| Total protein | 6 g/dl | 6.4–8.4 |
| Albumin | 3.8 g/dl | 3.5–5.7 |
| ALP | 69 IU/L | 34–104 |
| AST | 24 U/L | 13–39 |
| ALT | 31 U/L | 7–25 |
| LDH | 185 U/L | 140–271 |
| Serum osmolarity | 297 mOsm/kg | 283–299 |
| Urine Na+ | 81 mEq/L | 15–237 |
| Urine K+ | 21 mEq/L | 22–164 |
| Urine Cl- | 24 mmol/L | 24–255 |
| Urine pH | 6.5 | 5–8 |
| Urine osmolality | 628 mOsm/kg | 50–900 |
| Urine glucose | Neg (mg/dl) | Negative |

Based on the results in Table 1, the serum anion gap is 21.5. However, the delta/delta ratio is ~0.74, which indicates that the patient has a mixed anion gap and non-anion gap metabolic acidosis. The positive urine anion gap (36) and a urine pH > 6 in the presence of metabolic acidosis suggest renal involvement in the form of RTA. Furthermore, we calculate the urine osmolar gap (UOG) using the following formula: UOG = measured urine osmolality - ((2 × (urine Na + urine K)) + (urine urea nitrogen / 2.8) + (urine glucose / 18)), which gives a urine osmolar gap of 95.43 mOsm/kg and further suggests a distal RTA (the arithmetic is sketched in the code below).

Additionally, the patient had a bone marrow biopsy, which showed markedly hypercellular bone marrow with 70% B-lymphoblasts, consistent with B-ALL. Staining was positive for TdT, PAX5, CD79a, and CD10. Cytological studies could not be performed due to a dry tap. The peripheral blood smear showed only a few target cells. The initial CT scan of the abdomen was significant for bilateral enlargement of the kidneys (see Figure 1).

Figure 1: The enlargement of the kidneys bilaterally prior to chemotherapy.
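The bedside arithmetic behind these statements can be reproduced with a short Python sketch. The inputs come from Table 1; the normal anion gap of 12 mEq/L and normal HCO3 of 24 mmol/L are conventional values we assume here, and the urine urea nitrogen, which is not listed in Table 1, is left as a parameter of the hypothetical urine_osmolar_gap helper.

```python
def anion_gap(na, cl, hco3):
    """Serum anion gap = Na - (Cl + HCO3), in mEq/L."""
    return na - (cl + hco3)

def delta_delta(ag, hco3, normal_ag=12.0, normal_hco3=24.0):
    """Delta/delta ratio = (measured AG - normal AG) / (normal HCO3 - measured HCO3)."""
    return (ag - normal_ag) / (normal_hco3 - hco3)

def urine_osmolar_gap(measured_osm, u_na, u_k, u_urea_nitrogen, u_glucose=0.0):
    """UOG per the formula in the text (urea nitrogen mg/dL / 2.8, glucose mg/dL / 18)."""
    calculated = 2.0 * (u_na + u_k) + u_urea_nitrogen / 2.8 + u_glucose / 18.0
    return measured_osm - calculated

# Values from Table 1 (HCO3 taken from the venous blood gas).
ag = anion_gap(na=141, cl=109, hco3=11.1)
print(f"anion gap ~ {ag:.1f} mEq/L (the paper reports 21.5)")
print(f"delta/delta ~ {delta_delta(21.5, 11.1):.2f} (the paper reports ~0.74)")
# The paper's UOG is 95.43 mOsm/kg; the urine urea nitrogen needed to
# reproduce it is not reported in Table 1, so no value is filled in here.
```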
Table 2 shows the hospital course for the management of the lactic acidosis.

Table 2: The hospital course of the management of lactic acidosis.

| Date | Management of lactic acidosis | Lactic acid level (mmol/L) | CO2 level (mEq/L) |
| --- | --- | --- | --- |
| 1st day | 0.9% normal saline | 12.3 | 11 |
| 2nd day | Normosol-R* | 11 | 11 |
| 1st week | Dextrose 5% + sodium bicarbonate IV | 17 | |
| 2nd week | 0.9% normal saline + sodium bicarbonate and 1st cycle of chemotherapy (hyper-CVAD)**, with intrathecal methotrexate | 13 | |
| 3rd week | A few days after the 1st cycle of hyper-CVAD with intrathecal methotrexate | 6.3 | 25 |
| 5th week | 0.45% normal saline + sodium bicarbonate + 2nd cycle of hyper-CVAD | -- | 28 |
| 8th week | 4th cycle of hyper-CVAD | | 24 |
| Discharge | -- | -- | 25 |

*Each 100 mL of Normosol-R contains sodium chloride, 526 mg; sodium acetate, 222 mg; sodium gluconate, 502 mg; potassium chloride, 37 mg; and magnesium chloride hexahydrate, 30 mg. **Hyper-CVAD: hyperfractionated chemotherapy of cyclophosphamide, vincristine, doxorubicin, and dexamethasone.

Based on Table 2, the metabolic acidosis was first managed with fluid replacement and sodium bicarbonate while searching for possible causes of the lactic acidosis. The lactic acidosis improved with fluids and bicarbonate replacement. However, complete resolution was achieved only after hyperfractionated chemotherapy with cyclophosphamide, vincristine, doxorubicin, and dexamethasone (hyper-CVAD), with prophylactic intrathecal methotrexate, was started. The patient received a total of 8 cycles of hyper-CVAD chemotherapy. She had a bone marrow biopsy after 6 cycles and was found to be in complete remission.
## 3. Discussion
Lactic acidosis results from an imbalance between lactate production and utilization. Lactic acid usually forms under anaerobic conditions that shift pyruvate in the direction of lactate via lactate dehydrogenase. The most common causes of anaerobic metabolism are hypovolemia, hypoxia, cardiac failure, and sepsis [3]. In our case, the patient had a saturation of 100% on room air with normal vital signs except for a mild elevation in blood pressure, the septic work-up was negative, and echocardiography showed a normal ejection fraction of 55–60%. However, the lactic acidosis did not respond to IV fluid replacement.

After lactic acid is produced, it is utilized mainly by the liver and, to a lesser extent, by the kidneys, which makes metastasis to the liver or kidneys a potential cause of lactic acidosis in malignancies. A literature review revealed that only 20 cases of leukemia associated with lactic acidosis had liver involvement; 2 cases reported kidney involvement, whereas only 2 cases had both liver and kidney involvement [4, 5]. In our case, initial imaging showed an enlarged fatty liver and revealed enlargement of both kidneys. A repeated CT scan after 6 cycles of hyper-CVAD showed that the kidney size had decreased by almost 2 cm (see Figures 1 and 2). Kidney involvement in our case was responsible for the non-anion gap part of the metabolic acidosis, which mandated a further search for the cause of the acidosis.

Figure 2: The change in size of the kidneys bilaterally after chemotherapy.

Lactic acidosis in malignancies can also result from underperfusion of a wide-burden tumor or from an increased rate of aerobic glycolysis by neoplastic cells (the Warburg effect). Tumor burden is better assessed in solid tumors, but in hematologic malignancies the cell count can be considered the best alternative. Of the 26 reported cases, the cell count was either normal or elevated in 18 [4, 5]. In our case, the initial work-up included a complete blood count, which revealed pancytopenia.

The Warburg effect is a phenomenon that describes the unique metabolism of malignant cells. Malignant cells prefer to metabolize pyruvate in the direction of lactic acid even in the presence of oxygen, a process known as aerobic glycolysis (see Figure 3). The primary goal of this process is not generating energy (ATP) but rather using the products of aerobic glycolysis as building blocks to produce new daughter cells, whereas, in the presence of oxygen, nonproliferating cells tend to metabolize glucose through the mitochondrial tricarboxylic acid (TCA) cycle followed by the series of electron transport chain reactions known as oxidative phosphorylation, with the primary goal of maximizing the ATP production obtained from each molecule of glucose [6].

Figure 3: Comparison of the metabolic pathways of normal cells and neoplastic cells.

Many theories have been proposed to explain this effect. Warburg, who first described the effect in the early 1920s, hypothesized that, since cancer cells tend to be dysplastic, it results from mitochondrial dysfunction that impairs the processes (TCA/ETC) taking place in this micro-organelle; therefore, the metabolism of glucose shifts towards fermentation of glucose into lactate. However, subsequent research showed that the mitochondria and their function are intact in most cancer cells [7]. Further research recognized mutations involved in glucose metabolism inside cancer cells. These include the PI3K signaling pathway [7] and overexpression of hexokinase-2 (HK2) [8], whereas another involved pathway is pyruvate kinase (PK)-M2, the embryonic isoform of PK [9, 10]. The absence of these mutations in normal cells sheds light on the factors that may play an important role in establishing the Warburg effect in proliferating cells. With future research, different theories might be proposed to explain this effect. We hypothesize that the mutations responsible for the Warburg effect produce mediators that alter the metabolic pathway in cancer cells. Knowledge of these mediators may hold future promise for a new era of chemotherapy.
## 4. Conclusion
Lactic acidosis is a metabolic disorder with different etiologies. It has been reported with malignancies, including leukemia with a high cell count. However, our case has some unique features: AG and NAGA occurring simultaneously, RTA due to leukemic infiltration of the kidneys, an AG resulting from the unique metabolism of malignant cells, and resolution of both types of acidosis only after starting chemotherapy. The Warburg effect is a major contributor to lactic acidosis in malignancy. Our case illustrates that this effect can be seen even with aleukemic leukemia and suggests that a high tumor load may not be needed for this phenomenon to occur.
---
*Source: 1019034-2018-11-18.xml* | 2018 |
# First Boundary Value Problem for Cordes-Type Semilinear Parabolic Equation with Discontinuous Coefficients
**Authors:** Aziz Harman; Ezgi Harman
**Journal:** Journal of Mathematics
(2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1019038
---
## Abstract
For a class of semilinear parabolic equations with discontinuous coefficients, the strong solvability of the Dirichlet problem is studied in this paper. The subject of our study is the problem $\sum_{i,j=1}^{n} a_{ij}(t,x)\,u_{x_i x_j} - u_t + g(t,x,u) = f(t,x)$, $u|_{\Gamma(Q_T)} = 0$, in $Q_T = \Omega\times(0,T)$, where $\Omega$ is a bounded $C^2$ or convex domain and $\Gamma(Q_T) = \partial Q_T\setminus\{t=T\}$. The function $g(t,x,u)$ is assumed to be a Carathéodory function satisfying the growth condition $|g(t,x,u)|\le b_0|u|^q$, for $b_0>0$, $q\in(0,(n+1)/(n-1))$, $n\ge 2$, and the leading coefficients satisfy a Cordes-type condition.
---
## Body
## 1. Introduction
Let $E_n$ be an $n$-dimensional Euclidean space of points $x=(x_1,x_2,\ldots,x_n)$ and let $\Omega$ be a bounded domain in $E_n$ with boundary $\partial\Omega$ of class $C^2$, or simply a convex domain. Set $Q_T=\Omega\times(0,T)$ and $\Gamma(Q_T)=\partial Q_T\setminus\{t=T\}$. Consider in $Q_T$ the Dirichlet problem:

$$\sum_{i,j=1}^{n} a_{ij}(t,x)\,u_{x_i x_j} - u_t + g(t,x,u) = f(t,x),\quad (t,x)\in Q_T, \tag{1}$$

$$u|_{\Gamma(Q_T)} = 0. \tag{2}$$

It is assumed that the coefficients $a_{ij}(t,x)$, $i,j=1,2,\ldots,n$, of the operator

$$L=\sum_{i,j=1}^{n} a_{ij}(t,x)\,\frac{\partial^2}{\partial x_i\,\partial x_j} - \frac{\partial}{\partial t} \tag{3}$$

are bounded measurable functions satisfying the uniform parabolicity condition

$$\gamma|\xi|^2 \le \sum_{i,j=1}^{n} a_{ij}(t,x)\,\xi_i\xi_j \le \gamma^{-1}|\xi|^2 \tag{4}$$

for some $\gamma\in(0,1)$, for all $(t,x)\in Q_T$ and all $\xi\in E_n$, and the Cordes-type condition

$$\frac{\sum_{i,j=1}^{n} a_{ij}^2(t,x)}{\Big(\sum_{i=1}^{n} a_{ii}(t,x)\Big)^{2}} \le \frac{1}{n-\mu^2}-\delta. \tag{5}$$

Here, $\mu=\operatorname{ess\,inf}\sum_{i=1}^{n}a_{ii}(t,x)\,/\,\operatorname{ess\,sup}\sum_{i=1}^{n}a_{ii}(t,x)$, and the number $\delta\in(0,1/(n+1))$. The nonlinear term, the function $g(t,x,u):Q_T\to E_1$, satisfies the Carathéodory condition; that is, $g$ is measurable with respect to the variables $(t,x)$ and, for almost all $(t,x)\in Q_T$, depends continuously on the variable $u\in E_1$. Also, the growth condition

$$|g(t,x,u)|\le b_0|u|^{q},\quad b_0>0, \tag{6}$$

is satisfied.

The space $\dot W_p^{2,1}(Q_T)$, $p>1$, is the closure of the function class $\{u\in C^{\infty}(\bar Q_T)\cap C(\bar Q_T),\ u|_{\Gamma(Q_T)}=0\}$ with respect to the norm

$$\|u\|_{\dot W_p^{2,1}(Q_T)}=\|u\|_{L_p(Q_T)}+\sum_{i=1}^{n}\|\partial_{x_i}u\|_{L_p(Q_T)}+\|\partial_t u\|_{L_p(Q_T)}+\sum_{i,j=1}^{n}\|\partial_{x_i}\partial_{x_j}u\|_{L_p(Q_T)}. \tag{7}$$

Here, $u_i$, $u_t$, and $u_{ij}$ denote the weak derivatives $u_{x_i}$, $u_t$, and $u_{x_i x_j}$, respectively, $i,j=1,\ldots,n$. The conjugate number is denoted by $p'$, i.e., $1<p<\infty$, $1/p'+1/p=1$. By the same letter $C$, we denote different positive constants whose exact values are not essential for the purposes of this study.

For $p\in(1,\infty)$, we denote by $\|v\|_{L_p(Q_T)}$, or simply $\|v\|_p$, the norm of the Banach space $L_p(0,T;L_p(\Omega))$ defined as $\|g\|_p=\big(\int_0^T\|g(t,\cdot)\|_{L_p(\Omega)}^{p}\,dt\big)^{1/p}$.

A function $u(t,x)\in\dot W_p^{2,1}(Q_T)$ is called a strong solution (almost everywhere) of problems (1) and (2) if it satisfies equation (1) a.e. in $Q_T$.

In this study, we make essential use of the existence results given in Theorem 1.1 of [1] (see also [2]) for Cordes-type parabolic equations satisfying (5). In [1], the estimate

$$\|u\|_{\dot W_2^{2,1}(Q_T)}\le C\,\|Lu\|_{L_2(Q_T)} \tag{8}$$

was proved for all $u\in\dot W_2^{2,1}(Q_T)$, when $T\le T_0$ with $T_0=T_0(n,L,\Omega)$ sufficiently small; the positive constant $C$ depends on $n,\Omega,L$.

In the stationary case, i.e., when the solution does not depend on the time variable (the elliptic equation), it follows from examples ([3], p. 48) that, if the coefficients are discontinuous, the equation $Lu=f$ may fail to be solvable in $\dot W_p^{2,1}(Q_T)$ for every $p>1$ (see [3–8]). In the absence of $g(t,x,u)$, for the strong solvability of the Dirichlet problem for quasi-linear parabolic equations under conditions more restrictive than (5), see, e.g., [9, 10].

If the trace of the matrix $(a_{ij}(t,x))$ is constant, condition (5) is exactly the Cordes condition (see, e.g., [7, 11–13]):

$$\frac{\sum_{i,j=1}^{n} a_{ij}^2(t,x)}{\Big(\sum_{i=1}^{n} a_{ii}(t,x)\Big)^{2}} \le \frac{1}{n-1}-\delta. \tag{9}$$

For the strong solvability problem in $\dot W_p^{2}(\Omega)$ for any $p>1$ for parabolic equations with discontinuous coefficients, we refer to [8, 14, 15], where the leading coefficients are taken from the VMO class. We refer to [16] for exact growth conditions for the strong solvability of nonlinear elliptic equations $\Delta u=g(x,u,u_x)$ in $\dot W_p^{2}(\Omega)$ whenever $p>n$.

The aim of this paper is to prove the strong solvability of the Dirichlet problems (1) and (2) in the space $\dot W_2^{2,1}(Q_T)$ for $T$ sufficiently small, for the norm $\|f(t,x)\|_{L_2(Q_T)}$ sufficiently small, and for coefficients satisfying (5).
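As a purely numerical illustration of condition (5) (a sketch, not part of the paper's argument): for a coefficient matrix frozen at one point, with constant trace so that $\mu = 1$ and the bound reduces to $1/(n-1)-\delta$ as in (9), the Cordes-type ratio can be checked directly.

```python
import numpy as np

def cordes_ratio(a):
    """Ratio sum_{i,j} a_ij^2 / (sum_i a_ii)^2 for a coefficient matrix a."""
    return float(np.sum(a ** 2) / np.trace(a) ** 2)

n, delta = 3, 0.05                      # delta must lie in (0, 1/(n+1))
a = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.5, 0.2],
              [0.0, 0.2, 1.8]])         # a sample symmetric, uniformly elliptic matrix
ratio = cordes_ratio(a)
bound = 1.0 / (n - 1) - delta           # condition (5) with mu = 1, i.e., condition (9)
print(f"ratio = {ratio:.4f}, bound = {bound:.4f}, satisfied: {ratio <= bound}")
```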
## 2. Main Result
In order to carry out the proof of the main theorem, we need the following assertion from [1].

Lemma 1. Let $u(t,x)$ be a $\dot W_2^{2,1}(Q_T)$ function in $Q_T=\Omega\times(0,T)$, and let conditions (2), (4), and (5) be fulfilled for $u(t,x)$ and the coefficients of the operator $L$; the domain $\Omega$ is of class $C^2$ or simply convex. Then there exists a sufficiently small $T_0$ depending on $L,n,\Omega$ such that, for $T\le T_0$, estimate (8) holds with a constant $C$ depending on $L,n,\Omega$.

The following assertion is the main result of this paper.

Theorem 1. Let $n>4$, $0<q<(n+1)/(n-1)$, let conditions (4)–(6) be fulfilled, and let $\partial\Omega\in C^2$. Let $T_0$ be the number from Lemma 1 and $T\le T_0$. Then problems (1) and (2) have at least one strong solution in the space $\dot W_2^{2,1}(Q_T)$ for any $f(t,x)\in L_2(Q_T)$ satisfying

$$\|f\|_{L_2(Q_T)}\le C\,b_0^{-1/(q-1)}\big(\mathrm{mes}_{n+1}Q_T\big)^{\left(\frac{q(n-1)}{n+1}-1\right)\frac{1}{2(q-1)}}. \tag{10}$$

Proof.
In order to obtain the solvability of problems (1) and (2), we apply the Schauder fixed point theorem on completely continuous mappings of a compact subset of a Banach space (see, e.g., [4], p. 257, or [17]).

Take $L_{2q}(Q_T)$ as the basic Banach space. In this space, we define the set $V_2=\{u\in\dot W_2^{2,1}(Q_T):\ \|u\|_{W_2^{2,1}(Q_T)}\le K\}$, where the number $K$ will be chosen later. We show that $V_2$ is compact in $L_{2q}(Q_T)$. By the condition $2q<2(n+1)/(n-1)$ and the Sobolev–Kondrachov compact embedding theorem, the space $W_2^{1}(Q_T)$ is embedded into $L_{2q}(Q_T)$ compactly. On the other hand, the embedding $W_2^{2,1}(Q_T)\hookrightarrow W_2^{1}(Q_T)$ is continuous. Therefore, $V_2\subset L_{2q}(Q_T)$ is compact.

We show that $V_2$ is convex. For any $u_1,u_2\in V_2$ and $t\in[0,1]$, it holds that $u=tu_1+(1-t)u_2\in V_2$:

$$\|u\|_{W_2^{2,1}(Q_T)}\le t\|u_1\|_{W_2^{2,1}(Q_T)}+(1-t)\|u_2\|_{W_2^{2,1}(Q_T)}\le K. \tag{11}$$

For $u(t,x)\in V_2$, denote by $v(t,x)\in\dot W_2^{2,1}(Q_T)$ the solution of the Dirichlet problem:

$$Lv+g(t,x,u)=f(t,x),\quad (t,x)\in Q_T, \tag{12}$$

$$v|_{\Gamma(Q_T)}=0. \tag{13}$$

For fixed $u(t,x)\in V_2$ and $f\in L_2(Q_T)$, problems (12) and (13) are uniquely solvable in the space $\dot W_2^{2,1}(Q_T)$; by the assumptions on the domain and on $q$, we get the Dirichlet problem for equation (1) (for its solvability, we refer to [1, 2, 9, 10]):

$$Lv=F(t,x),\quad (t,x)\in Q_T,\qquad v|_{\Gamma(Q_T)}=0, \tag{14}$$

where $F=f(t,x)-g(t,x,u)\in L_2(Q_T)$.

We have

$$\|F\|_{L_2(Q_T)}\le\|f\|_{L_2(Q_T)}+\|g\|_{L_2(Q_T)}\le\|f\|_{L_2(Q_T)}+b_0\,\||u|^q\|_{L_2(Q_T)}. \tag{15}$$

By the chain of embeddings $W_2^{2,1}(Q_T)\hookrightarrow W_2^{1}(Q_T)\hookrightarrow L_{2q}(Q_T)$ and $u\in\dot W_2^{2,1}(Q_T)$, the norm $\||u|^q\|_{L_2(Q_T)}$ is finite.

Introduce the operator $A: u\longrightarrow v$ acting on $L_{2q}(Q_T)$, where $v$ is the solution of problems (12) and (13):

$$Au=v. \tag{16}$$
We show that the operator $A$ is completely continuous in $L_{2q}(Q_T)$. Let $\{u_m\}$ be a convergent sequence in $L_{2q}(Q_T)$ with $u_m\to u_0$. We show that its image converges in $L_{2q}(Q_T)$, $v_m\to v_0$, where $v_0=Au_0$ and $v_m=Au_m$.

Then,

$$Lv_m=-g(t,x,u_m)+f,\qquad\ldots,\qquad Lv_0=-g(t,x,u_0)+f. \tag{17}$$

We have

$$L(v_m-v_0)=-\big(g(t,x,u_m)-g(t,x,u_0)\big). \tag{18}$$

Set $g_m=g(t,x,u_m)$, $g_0=g(t,x,u_0)$, and show that

$$\|g_m-g_0\|_{L_2(Q_T)}\to 0\quad\text{for } m\to\infty. \tag{19}$$

Indeed, from $u_m\to u_0$ in $L_{2q}(Q_T)$ follows convergence in measure in $Q_T$. This and the Carathéodory condition imply the convergence in measure $|g_m-g_0|^2\to 0$. To prove (19), it remains to show the uniform integrability of $\{|g_m|^2\}$, which follows from the uniform integrability of $\{|u_m|^{2q}\}$; the convergence $u_m\to u_0$ in $L_{2q}(Q_T)$ implies the uniform integrability of $\{|u_m|^{2q}\}$.

Applying Vitali's theorem, we get

$$\|g_m-g_0\|_{L_2(Q_T)}\to 0\quad\text{as } m\to\infty. \tag{20}$$

To show $v_m\to v_0$ in $L_{2q}(Q_T)$, we use the estimate from Lemma 1 for sufficiently small $T_0$ with $T\le T_0$:

$$\|v_m-v_0\|_{W_2^{2,1}(Q_T)}\le C\|L(v_m-v_0)\|_{L_2(Q_T)}=C\|g_m-g_0\|_{L_2(Q_T)}\to 0. \tag{21}$$

By virtue of $\dot W_2^{2,1}(Q_T)\hookrightarrow L_{2q}(Q_T)$, it follows that

$$\|v_m-v_0\|_{L_{2q}(Q_T)}\to 0\quad\text{as } m\to\infty. \tag{22}$$

The complete continuity of the operator $A$ in $L_{2q}(Q_T)$ has thus been shown.
Now we have to show that $u\in V_2$ implies $v=Au\in V_2$. Applying Lemma 1, it follows that

$$\|v\|_{W_2^{2,1}(Q_T)}\le C\|F\|_{L_2(Q_T)}\le C(\delta,\gamma,n)\big(\|g\|_{L_2(Q_T)}+\|f\|_{L_2(Q_T)}\big). \tag{23}$$

Using Hölder's inequality and the embedding chain

$$W_2^{2,1}(Q_T)\hookrightarrow W_2^{1}(Q_T)\hookrightarrow L_{2q}(Q_T), \tag{24}$$

it follows that

$$\|g\|_{L_2(Q_T)}\le\Big(\int_{Q_T}b_0^2|u|^{2q}\,dx\,dt\Big)^{1/2}=b_0\|u\|_{L_{2q}(Q_T)}^{q}\le C b_0\|u\|_{2(n+1)/(n-1)}^{q}\big(\mathrm{mes}_{n+1}Q_T\big)^{\frac12-\frac{q(n-1)}{2(n+1)}}\le C b_0\big(\mathrm{mes}_{n+1}Q_T\big)^{\frac12-\frac{q(n-1)}{2(n+1)}}\|u\|_{W_2^{1}(Q_T)}^{q}\le C_2 b_0\big(\mathrm{mes}_{n+1}Q_T\big)^{\frac12-\frac{q(n-1)}{2(n+1)}}\|u\|_{W_2^{2,1}(Q_T)}^{q}. \tag{25}$$

Using Lemma 1, this is majorized by

$$C_1 b_0\big(\mathrm{mes}_{n+1}Q_T\big)^{\frac12-\frac{q(n-1)}{2(n+1)}}\|Lu\|_{L_2(Q_T)}^{q}. \tag{26}$$

Using estimate (26) in (23), we get

$$\|v\|_{W_2^{2,1}(Q_T)}\le C_1 b_0\big(\mathrm{mes}_{n+1}Q_T\big)^{\frac12-\frac{q(n-1)}{2(n+1)}}\|Lu\|_{L_2(Q_T)}^{q}+\|f\|_{L_2(Q_T)}\le C_3 K^{q} b_0\big(\mathrm{mes}_{n+1}Q_T\big)^{\frac12-\frac{q(n-1)}{2(n+1)}}+\|f\|_{L_2(Q_T)}. \tag{27}$$

Let $K$ be such that

$$C_3 K^{q} b_0\big(\mathrm{mes}_{n+1}Q_T\big)^{\frac12-\frac{q(n-1)}{2(n+1)}}+\|f\|_{L_2(Q_T)}\le K. \tag{28}$$

For such a number $K$ to exist, condition (10) is sufficient. To prove this, set the notation

$$a=C_3 b_0\big(\mathrm{mes}_{n+1}Q_T\big)^{\frac12-\frac{q(n-1)}{2(n+1)}},\qquad b=\|f\|_{L_2(Q_T)}. \tag{29}$$

Inequality (28) takes the form

$$aK^{q}+b\le K,\quad\text{i.e.,}\quad aK^{q}-K+b\le 0,\quad K>0. \tag{30}$$

The function $f(K)=aK^{q}-K$, $K\ge 0$, attains its minimum at $K_0=(1/(qa))^{1/(q-1)}$. Indeed, $df/dK=aqK^{q-1}-1$; then, for $K_0^{q-1}=1/(qa)$, $(df/dK)(K_0)=0$ and $(d^2f/dK^2)(K_0)>0$. Therefore, for $b\le -f(K_0)=K_0-aK_0^{q}$, inequality (30) is solvable with respect to $K$ (it suffices to take $K=K_0$). To finish the proof, it remains to take $T_0$ sufficiently small so that condition (10) is satisfied. This is possible since $\mathrm{mes}_{n+1}Q_T=T\,\mathrm{mes}_n\Omega$ and the exponent of $\mathrm{mes}_{n+1}Q_T$ is positive, i.e., $1/2-q(n-1)/(2(n+1))>0$.
This completes the proof of Theorem 1.
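The existence argument for $K$ above can also be checked numerically. The sketch below uses arbitrary illustrative values of $a$ and $q$ (with $q>1$), not values tied to a concrete domain: it verifies that at $K_0=(1/(qa))^{1/(q-1)}$ inequality (30) holds whenever $b\le K_0-aK_0^q=-f(K_0)$.

```python
# Numerical check of the K-existence argument (illustrative values only).
a, q = 0.4, 1.2                               # a > 0 and q > 1, as in the proof
K0 = (1.0 / (q * a)) ** (1.0 / (q - 1.0))     # critical point of f(K) = a*K**q - K
b_max = K0 - a * K0 ** q                      # largest admissible b, i.e., -f(K0)
b = 0.9 * b_max                               # any 0 < b <= b_max should work
assert a * K0 ** q - K0 + b <= 0, "inequality (30) fails at K0"
print(f"K0 = {K0:.4f}, b_max = {b_max:.4f}, f(K0) + b = {a * K0 ** q - K0 + b:.4f}")
```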
## 3. Conclusion
In this paper, the strong solvability problem for a class of second-order semilinear parabolic equations is studied. For the strong solvability of the first boundary value problem for a class of parabolic equations having a nonlinear term, a sufficient condition is found under a power growth condition. In the proof, the Schauder fixed point theorem in a Banach space is used. Also, some a priori estimates are established in order to justify the argument.
---
*Source: 1019038-2020-06-19.xml* | 2020 |
# Wenxin Keli versus Sotalol for Paroxysmal Atrial Fibrillation Caused by Hyperthyroidism: A Prospective, Open Label, and Randomized Study
**Authors:** Zhaowei Meng; Jian Tan; Qing He; Mei Zhu; Xue Li; Jianping Zhang; Qiang Jia; Shen Wang; Guizhi Zhang; Wei Zheng
**Journal:** Evidence-Based Complementary and Alternative Medicine
(2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/101904
---
## Abstract
We aimed to compare the effectiveness of Wenxin Keli (WK) and sotalol in assisting sinus rhythm (SR) restoration from paroxysmal atrial fibrillation (PAF) caused by hyperthyroidism, as well as in maintaining SR. We randomly prescribed WK (18 g tid) or sotalol (80 mg bid) to 91 or 89 patients, respectively. Since it was not ethical to withhold antiarrhythmic drugs from patients, no control group was set at this stage. Antithyroid drugs were given to 90 patients (45 in the WK group, 45 in the sotalol group); 131I was given to 90 patients (46 in the WK group, 44 in the sotalol group). Three months later, SR was obtained in 83/91 and 80/89 cases in the WK and sotalol groups, respectively (P=0.762). By another analysis, SR was obtained in 86/90 and 77/90 cases in the 131I and ATD groups, respectively (P=0.022). Then, we randomly assigned the successfully SR-reverted patients into three groups: WK, sotalol, and control (no antiarrhythmic drug given). After twelve months of follow-up, PAF recurrence happened in 1/54, 2/54, and 9/55 cases, respectively. The Log-Rank test showed a significantly higher PAF recurrence rate in control patients than with either treatment (P=0.06). We demonstrated the same efficacy of WK and sotalol in assisting SR reversion from hyperthyroidism-caused PAF. We also showed that either drug could maintain SR in such patients.
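As an aside on the group comparison reported above, the 83/91 versus 80/89 SR-restoration rates can be compared with a chi-square test. The Python sketch below reproduces the reported P=0.762 when no continuity correction is applied; whether the authors used this exact test is our assumption.

```python
from scipy.stats import chi2_contingency

# SR restoration after three months: 83/91 (WK) vs 80/89 (sotalol).
table = [[83, 91 - 83],    # WK: restored, not restored
         [80, 89 - 80]]    # sotalol: restored, not restored
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.3f}, p = {p:.3f}")   # p ~ 0.762
```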
---
## Body
## 1. Introduction
Atrial fibrillation is the most common cardiac rhythm disturbance, increasing in prevalence with age. By definition, atrial fibrillation is a supraventricular tachyarrhythmia characterized by uncoordinated atrial activation with consequent deterioration of atrial mechanical function [1–3]. Clinicians should distinguish a first-detected episode of atrial fibrillation, whether or not it is symptomatic or self-limited. Patients with atrial fibrillation have a markedly reduced survival rate compared with subjects without atrial fibrillation. In paroxysmal atrial fibrillation (PAF), sudden repeated changes in rhythm cause symptoms which most patients find very debilitating. In addition, PAF carries an increased risk of thromboembolic events when compared with chronic atrial fibrillation [4, 5]. Therefore, the effective treatment and prevention of this kind of arrhythmia have important clinical significance [1–3, 6, 7]. Atrial fibrillation occurs in 10% to 25% of patients with hyperthyroidism, more commonly in men and elderly patients [2, 8, 9]. The mainstay of treatment is restoration of the euthyroid state, which can be accomplished by antithyroid drugs, 131I, and surgery. Successful management of hyperthyroidism can result in restoration of sinus rhythm (SR) in up to two-thirds of patients [10]. Mechanisms of hyperthyroidism-induced atrial fibrillation have been proposed [10–12]. It is generally agreed that shortening of the action potential duration and the effective refractory period plays a key role in this electrophysiological abnormality.

Wenxin Keli (WK) is a pure Chinese herbal medicine. It has been reported to be useful in the treatment of atrial fibrillation [13–15], ventricular arrhythmia [16, 17], myocardial infarction-induced arrhythmia, heart failure, Brugada syndrome [18], and so forth. WK extract is composed of 5 components: Nardostachys chinensis Batal. extract (NcBe), Codonopsis, notoginseng, amber, and rhizoma polygonati. Burashnikov and colleagues [13] recently presented a fascinating electrophysiological investigation of WK on atrial fibrillation. This study showed that WK, as a novel atrial-selective sodium-channel blocking agent, could prolong action potential duration and effective refractory period. This investigation was hailed in the same issue's editorial commentary as an emblematic milestone of integrating traditional Chinese medicine into Western medical practices [14]. In fact, WK, as monotherapy or in a combined antiarrhythmic regimen, has been widely used for arrhythmia management in China. Chen and colleagues [15] recently conducted a meta-analysis and found solid evidence that WK is an effective drug for improving P-wave dispersion as well as maintaining SR in patients with PAF and its complications. However, the effect of WK on hyperthyroidism-induced atrial fibrillation has never been studied.

Therefore, in this open label and randomized study, we aimed to prospectively compare the effectiveness of WK and sotalol in assisting SR reversion from hyperthyroidism-caused PAF. We also intended to study their effectiveness in the maintenance of SR. Sotalol was chosen as the comparator drug because it has proven efficacy in restoring and maintaining SR from atrial fibrillation and possesses both class II and class III antiarrhythmic effects [2, 3].
## 2. Patients and Methods
### 2.1. Patients
From January 2011 to January 2013, a series of 180 hyperthyroidism patients (all diagnosed with Graves' disease) who presented to either the Nuclear Medicine Department or the Endocrinology Department were consecutively enrolled in this prospective study. All of the patients had symptomatic PAF. There were 98 males (55.48±12.02 years old) and 82 females (56.12±9.98 years old). Entry criteria included PAF due to hyperthyroidism; electrocardiographic evidence of atrial fibrillation; symptoms such as palpitations, light-headedness, chest pain, and dyspnoea in association with PAF; and good compliance. Exclusion criteria were PAF due to other causes, recent myocardial infarction, heart failure, inflammation such as pneumonia and diarrhea, unstable hepatic or renal function, poor compliance, and other major medical problems that would leave the patient with a life expectancy of less than two years. All enrolled patients gave their informed consent. This study was approved by the Institutional Review Board of Tianjin Medical University General Hospital (approval number #20101207A).
### 2.2. Definition
The diagnosis of PAF was made according to the American College of Cardiology Foundation/American Heart Association Task Force guideline definition; briefly, PAF episodes generally last less than 7 days (most less than 24 h) and are usually recurrent [1, 2].
### 2.3. Protocol
This study was designed as a prospective, open label, and randomized investigation. Patients eligible for the study were allocated to one of the treatments using a computer-generated random number algorithm. As reported [15], the clinical applications of WK against PAF include two aspects: restoration of SR from PAF and maintenance of SR afterwards. Therefore, we divided our study into two stages, sinus rhythm restoration and maintenance, in order to determine WK's effects on these two aspects.

Initially, baseline demographic data were obtained from the subjects. Relevant symptoms, cardiac diagnoses, and medical history were noted. Physical examination, 24-hour ambulatory electrocardiography and/or regular 12-lead electrocardiography, and serum biochemical tests (including electrolytes and renal and liver function) were carried out. All electrocardiographic recordings were reviewed by at least two experienced observers.

In the first part of the study, we randomly prescribed WK (18 g tid) or sotalol (80 mg bid) to 91 patients (49 males, 42 females) or 89 patients (49 males, 40 females), respectively. This part of the study compared the effectiveness of WK and sotalol in restoring SR from PAF. In this investigation, it was not ethical to withhold antiarrhythmia drugs from the patients, so no control arm was designed; we simply compared WK and sotalol. Antithyroid drugs (ATD) were given to 90 patients (45 in the WK group, 45 in the sotalol group), and 131I was given to the other 90 patients (46 in the WK group, 44 in the sotalol group). For the same ethical reason, no control group was set. ATD-treated patients were given methimazole (initial dose 30 mg per day). The 131I therapeutic procedure was performed according to our protocol [19, 20]. Thyroid radioiodine uptake was measured at 6, 24, 48, and 72 hours after an oral tracer dose of 131I (about 74 kBq) by a nuclear multifunctional instrument (MN-6300XT Apparatus, Technological University, China). Then the 131I effective half-life (T1/2eff) and the maximum uptake in the thyroid were calculated. Thyroid ultrasonography was performed using a color Doppler ultrasound machine (GE Vingmed Ultrasound Vivid Five, Horten, Norway). Thyroid volume was calculated with the following formula: volume (cm3) = (width × length × thickness of left lobe) + (width × length × thickness of right lobe). Thyroid weight (g) = 0.479 × volume (cm3). Serum thyroid hormones were tested by an immunofluorometric assay, including free triiodothyronine (FT3, reference 3.50–6.50 pmol/L), free thyroxine (FT4, reference 11.50–23.50 pmol/L), and thyroid stimulating hormone (TSH, reference 0.20–5.00 μIU/mL). The therapeutic dose of 131I was calculated with the following formula [19, 20]: dose (37 MBq) = (thyroid weight (g) × absorbed dose (Gy/g) × 0.67)/(T1/2eff (days) × maximum uptake (%)), where the absorbed dose is 100 Gy/g of thyroid tissue and 0.67 is a correction factor (a worked numerical sketch of this computation follows the Figure 1 caption below). Participants visited our outpatient department every month. At each scheduled follow-up visit, physical examination and routine laboratory tests were done. At the end of the third month, ambulatory and/or regular 12-lead electrocardiography was repeated, and all relevant symptoms were documented. Disappearance of PAF was defined as restoration of SR.

In the second part of the study, we randomly assigned the successfully SR-reverted patients into one of the following three groups: 54 cases were given WK (9 g tid), 54 cases were given sotalol (40 mg bid), and 55 cases served as control. In this part of the study, the control patients did not take any antiarrhythmia drug. Since patients recruited at this stage had a much improved thyroid status, and all of them were in SR when entering this investigation, our Institutional Review Board approved, on ethical grounds, not giving the control patients any antiarrhythmia drugs. If patients were still hyperthyroid, an appropriate dose of methimazole was given to maintain euthyroidism. If patients were in posttherapeutic hypothyroidism, an appropriate dose of levothyroxine was given to maintain euthyroidism. For hypothyroid patients who had already restored SR, WK and sotalol were stopped. Participants were asked to visit our outpatient department every three months. At each scheduled or sometimes unscheduled follow-up visit, physical examination and routine laboratory tests were repeated. At the end of the twelfth month, ambulatory and/or regular 12-lead electrocardiography was done, and all relevant symptoms were documented. The time-point of PAF recurrence, its frequency, and related symptoms were collected as well.

The participant flow chart is presented in Figure 1 to illustrate the whole study process.

Figure 1
Participant flow chart. Initially, in the first stage of the study (a), 180 eligible hyperthyroidism patients with paroxysmal atrial fibrillation were randomized into either Wenxin Keli (91 cases) or sotalol (89 cases) treatment for sinus rhythm restoration. At the end of the first stage intervention, 83/91 cases and 80/89 cases were reverted to sinus rhythm, respectively. There were 8/91 cases and 9/89 cases who did not restore sinus rhythm. These 17 patients (still with atrial fibrillation) were not eligible for the second part of the study, and they dropped out. In the second stage of the study (b), all sinus rhythm reverted patients (163 cases) were randomized into one of the following three groups: WK (54 cases), sotalol (54 cases), and control (55 cases). The purpose was to observe the drugs' sinus rhythm maintenance effect. At the end of the second stage intervention, 1/54 cases, 2/54 cases, and 9/55 cases had recurrent paroxysmal atrial fibrillation, respectively.
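To make the 131I dose formula from Section 2.3 concrete, here is a minimal Python sketch; the function names and the example inputs are illustrative assumptions of ours (only the formula itself, the 0.479 weight factor, and the 100 Gy/g absorbed dose come from the protocol above):

```python
# Sketch of the 131I therapeutic dose computation described in Section 2.3.
# The example lobe dimensions, T1/2eff, and uptake below are hypothetical.

def thyroid_weight_g(width_l, length_l, thick_l, width_r, length_r, thick_r):
    """Thyroid weight (g) from ultrasound lobe dimensions (cm): 0.479 x volume."""
    volume_cm3 = width_l * length_l * thick_l + width_r * length_r * thick_r
    return 0.479 * volume_cm3

def i131_dose(weight_g, t_half_eff_days, max_uptake_percent,
              absorbed_dose_gy_per_g=100.0):
    """Therapeutic 131I dose in units of 37 MBq (1 mCi), per the protocol formula."""
    return (weight_g * absorbed_dose_gy_per_g * 0.67
            / (t_half_eff_days * max_uptake_percent))

weight = thyroid_weight_g(3.0, 6.0, 2.0, 3.0, 6.0, 2.2)  # about 36.2 g
dose = i131_dose(weight, t_half_eff_days=5.0, max_uptake_percent=60.0)
print(f"weight = {weight:.1f} g, dose = {dose:.1f} x 37 MBq ({dose * 37:.0f} MBq)")
```

With these hypothetical inputs the sketch yields roughly 8 × 37 MBq (about 300 MBq), in the range the 6 mCi example case of Figure 2 suggests.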
### 2.4. Statistical Analysis
All data were presented as mean ± SD. Statistics were performed with SPSS 17.0 (SPSS Incorporated, IL, USA). Differences between two groups were analyzed by the independent samples t-test. Differences between multiple groups were analyzed by one-way analysis of variance (ANOVA), and the least significant difference (LSD) test was then used for multiple comparisons among the groups. The χ2 test was adopted to determine case number changes of patients after different treatments. The χ2 test was also used to check whether sex had a significant influence on the intergroup differences. Kaplan-Meier analysis with the Log-Rank χ2 test was used to estimate the cumulative recurrence rate of PAF in the different groups. A P value not exceeding 0.05 was considered statistically significant.
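As a concrete illustration of the χ2 comparisons, the following minimal Python sketch (assuming SciPy is available; the original analysis used SPSS) reproduces the first-stage statistics from the counts reported in the Results (Tables 2 and 4). Note that the uncorrected test is what matches the published values:

```python
# A minimal sketch of the chi-square comparisons on SR restoration counts.
from scipy.stats import chi2_contingency

# Rows: group; columns: [SR restored, not restored] after three months.
wk_vs_sotalol = [[83, 8],   # WK group: 83/91 restored
                 [80, 9]]   # sotalol group: 80/89 restored
i131_vs_atd = [[86, 4],     # 131I group: 86/90 restored
               [77, 13]]    # ATD group: 77/90 restored

for name, table in [("WK vs sotalol", wk_vs_sotalol),
                    ("131I vs ATD", i131_vs_atd)]:
    chi2, p, dof, _ = chi2_contingency(table, correction=False)
    print(f"{name}: chi2 = {chi2:.3f}, P = {p:.3f}")

# Expected output, matching the reported statistics:
# WK vs sotalol: chi2 = 0.092, P = 0.762
# 131I vs ATD: chi2 = 5.262, P = 0.022
```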
## 3. Results
### 3.1. Sinus Rhythm Restoration by Different Therapies
First, baseline information revealed no significant differences in hyperthyroidism history, PAF history, or thyroid hormone levels between the groups (Table 1). Data in this investigation were analyzed in two ways. In the first analysis, three months after treatment with WK or sotalol, SR was obtained in 83/91 cases (91.209%) or 80/89 cases (89.888%); the χ2 test showed no significant difference, indicating equal efficacies of the two drugs in assisting SR reversion (Table 2). Sex did not cause significant differences between the groups (Table 2). Thyroid hormones also demonstrated no differences before or after treatment (Table 3). In the second analysis, after treatment with 131I or ATD, SR was obtained in 86/90 cases or 77/90 cases; the χ2 test showed a significant difference, indicating better effects of 131I treatment (Table 4). Thyroid hormones displayed no differences before treatment, yet significant differences existed after treatment (Table 5). A typical case of successful SR conversion from PAF is presented (Figure 2).

Table 1
Baseline information of all participants.

| Parameters | WK* treatment (91 cases) | Sotalol treatment (89 cases) | t value (P value)** |
| --- | --- | --- | --- |
| Hyperthyroidism history (years) | 8.374 ± 2.619 | 8.551 ± 2.680 | 0.448 (0.655) |
| PAF* history (years) | 4.099 ± 1.599 | 4.213 ± 1.675 | 0.469 (0.639) |
| FT3* (pmol/L) | 24.613 ± 5.059 | 24.405 ± 5.006 | −0.278 (0.781) |
| FT4* (pmol/L) | 118.697 ± 29.213 | 116.132 ± 28.266 | −0.598 (0.550) |
| TSH* (μIU/mL) | 0.007 ± 0.010 | 0.009 ± 0.015 | 1.191 (0.235) |

*WK: Wenxin Keli, PAF: paroxysmal atrial fibrillation, FT3: free triiodothyronine, FT4: free thyroxine, and TSH: thyroid stimulating hormone; **analyzed by independent samples t-test.

Table 2
Case number distribution of patients after WK* or sotalol treatment in the first investigation.

| Groups (case number) | Male: total | Male: SR* restored | Female: total | Female: SR* restored |
| --- | --- | --- | --- | --- |
| WK* treatment (91 cases) | 49 | 45 | 42 | 38 |
| Sotalol treatment (89 cases) | 49 | 44 | 40 | 36 |

χ2 value (P value), (WK) : (sotalol)**: 0.092 (0.762); χ2 value (P value), (male) : (female)**: 0.017 (0.896).

*WK: Wenxin Keli; SR: sinus rhythm; **analyzed by χ2 test.

Table 3
Comparisons of thyroid hormones in patients before and after WK* or sotalol treatment in the first investigation.

| Before treatment | WK* treatment (91 cases) | Sotalol treatment (89 cases) | t value (P value)** |
| --- | --- | --- | --- |
| FT3* (pmol/L) | 24.613 ± 5.059 | 24.405 ± 5.006 | −0.278 (0.781) |
| FT4* (pmol/L) | 118.697 ± 29.213 | 116.132 ± 28.266 | −0.598 (0.550) |
| TSH* (μIU/mL) | 0.007 ± 0.010 | 0.009 ± 0.015 | 1.191 (0.235) |

| Three months after treatment | WK* treatment (91 cases) | Sotalol treatment (89 cases) | t value (P value)** |
| --- | --- | --- | --- |
| FT3* (pmol/L) | 6.495 ± 3.713 | 6.596 ± 3.740 | 0.182 (0.855) |
| FT4* (pmol/L) | 21.447 ± 11.727 | 21.655 ± 10.612 | 0.125 (0.901) |
| TSH* (μIU/mL) | 6.210 ± 10.002 | 5.752 ± 8.915 | −0.324 (0.746) |

*WK: Wenxin Keli, FT3: free triiodothyronine, FT4: free thyroxine, and TSH: thyroid stimulating hormone; **analyzed by independent samples t-test.

Table 4
Case number distribution of patients after 131I or ATD* treatment in the first investigation.

| Groups (case number) | Male: total | Male: SR* restored | Female: total | Female: SR* restored |
| --- | --- | --- | --- | --- |
| 131I treatment (90 cases) | 49 | 47 | 41 | 39 |
| ATD* treatment (90 cases) | 49 | 42 | 41 | 35 |

χ2 value (P value), (131I) : (ATD)**: 5.262 (0.022); χ2 value (P value), (male) : (female)**: 0.017 (0.896).

*ATD: antithyroid drugs; SR: sinus rhythm; **analyzed by χ2 test.

Table 5
Comparisons of thyroid hormones in patients before and after 131I or ATD* treatment in the first investigation.

| Before treatment | 131I treatment (90 cases) | ATD* treatment (90 cases) | t value (P value)** |
| --- | --- | --- | --- |
| FT3* (pmol/L) | 24.056 ± 5.321 | 24.964 ± 4.685 | 1.215 (0.226) |
| FT4* (pmol/L) | 117.633 ± 29.225 | 117.225 ± 28.322 | −0.095 (0.924) |
| TSH* (μIU/mL) | 0.007 ± 0.011 | 0.009 ± 0.014 | 0.757 (0.450) |

| Three months after treatment | 131I treatment (90 cases) | ATD* treatment (90 cases) | t value (P value)** |
| --- | --- | --- | --- |
| FT3* (pmol/L) | 5.837 ± 2.830 | 7.252 ± 4.330 | 2.595 (0.010) |
| FT4* (pmol/L) | 19.378 ± 8.292 | 23.722 ± 13.120 | 2.655 (0.009) |
| TSH* (μIU/mL) | 6.427 ± 9.702 | 5.539 ± 9.237 | −0.629 (0.530) |

*ATD: antithyroid drugs, FT3: free triiodothyronine, FT4: free thyroxine, and TSH: thyroid stimulating hormone; **analyzed by independent samples t-test.

Figure 2
A typical case of successful sinus rhythm restoration from paroxysmal atrial fibrillation. A 64-year-old male patient had been diagnosed with Graves' disease for eight years and had had paroxysmal atrial fibrillation for three years (a). He was given 6 mCi of 131I for the treatment of Graves' disease, and Wenxin Keli (18 g tid) was prescribed during and after the 131I treatment. Baseline free triiodothyronine, free thyroxine, and thyroid stimulating hormone were 21.46 pmol/L, 104.8 pmol/L, and 0.011 μIU/mL, respectively. One month later, when sinus rhythm was restored (b), free triiodothyronine, free thyroxine, and thyroid stimulating hormone were 3.35 pmol/L, 12.89 pmol/L, and 4.52 μIU/mL, respectively. At the three-month end-point of the first investigation, thyroid hormones were still normal. After entering the second investigation, Wenxin Keli (9 g tid) was prescribed during the follow-up. His thyroid function remained normal, and his heart rhythm remained sinus rhythm for the rest of the study.
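The group comparisons in Tables 3 and 5 can be checked directly from the published summary statistics; the following is a minimal sketch (assuming SciPy; the original analysis used SPSS), shown for the post-treatment FT3 comparison between the 131I and ATD groups in Table 5:

```python
# A minimal sketch reproducing one independent samples t-test from summary data.
from scipy.stats import ttest_ind_from_stats

# Post-treatment FT3 (pmol/L), Table 5: 131I group vs. ATD group.
t, p = ttest_ind_from_stats(mean1=5.837, std1=2.830, nobs1=90,
                            mean2=7.252, std2=4.330, nobs2=90)
print(f"t = {t:.3f}, P = {p:.3f}")
# Prints t = -2.595, P = 0.010; the table reports the magnitude, 2.595 (0.010),
# the sign depending only on the order of the two groups.
```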
### 3.2. Sinus Rhythm Maintenance by Different Therapies
Data in the second investigation were analyzed by two methods. First, at the end of the twelve-month follow-up, recurrent PAF happened in 1/54 (1.852%), 2/54 (3.704%), and 9/55 (16.364%) cases in the WK, sotalol, and control groups, respectively. We found no differences in thyroid hormones at any follow-up time-point among the groups (Table 6). However, the χ2 test showed significant differences between the WK and control groups and between the sotalol and control groups, while there was no difference between the WK and sotalol groups (Table 7). Second, Kaplan-Meier curves were drawn to determine the cumulative recurrence rate of PAF in the different groups (Figure 3). The Log-Rank test showed a significantly higher PAF recurrence rate in control patients compared with either treatment (χ2=10.229, P=0.006). Therefore, we proved that both WK and sotalol could successfully maintain SR.

Table 6
Comparisons of thyroid hormones at each follow-up time-point in the second investigation.

| Baseline | WK* treatment (54 cases) | Sotalol treatment (54 cases) | Control (55 cases) | F value (P value)** |
| --- | --- | --- | --- | --- |
| FT3* (pmol/L) | 5.532 ± 2.372 | 5.752 ± 2.608 | 5.680 ± 2.486 | 0.110 (0.896) |
| FT4* (pmol/L) | 18.469 ± 7.182 | 19.351 ± 7.577 | 19.046 ± 7.576 | 0.195 (0.823) |
| TSH* (μIU/mL) | 7.126 ± 10.449 | 5.859 ± 8.668 | 6.832 ± 10.110 | 0.249 (0.780) |

| Three months | WK* treatment (54 cases) | Sotalol treatment (54 cases) | Control (55 cases) | F value (P value)** |
| --- | --- | --- | --- | --- |
| FT3* (pmol/L) | 5.035 ± 0.934 | 5.129 ± 0.908 | 5.098 ± 0.965 | 0.140 (0.870) |
| FT4* (pmol/L) | 15.664 ± 3.112 | 16.061 ± 3.336 | 15.994 ± 3.465 | 0.222 (0.801) |
| TSH* (μIU/mL) | 4.683 ± 4.211 | 4.083 ± 3.456 | 4.352 ± 3.885 | 0.326 (0.722) |

| Six months | WK* treatment (54 cases) | Sotalol treatment (54 cases) | Control (55 cases) | F value (P value)** |
| --- | --- | --- | --- | --- |
| FT3* (pmol/L) | 5.257 ± 0.930 | 5.373 ± 0.915 | 5.381 ± 1.057 | 0.277 (0.758) |
| FT4* (pmol/L) | 16.446 ± 3.339 | 16.916 ± 3.727 | 16.955 ± 4.014 | 0.317 (0.729) |
| TSH* (μIU/mL) | 4.032 ± 3.492 | 3.567 ± 2.885 | 3.756 ± 3.202 | 0.288 (0.750) |

| Nine months | WK* treatment (54 cases) | Sotalol treatment (54 cases) | Control (55 cases) | F value (P value)** |
| --- | --- | --- | --- | --- |
| FT3* (pmol/L) | 5.367 ± 0.975 | 5.458 ± 0.958 | 5.590 ± 1.328 | 0.566 (0.569) |
| FT4* (pmol/L) | 17.184 ± 3.208 | 17.760 ± 4.131 | 18.084 ± 4.833 | 0.668 (0.514) |
| TSH* (μIU/mL) | 2.912 ± 1.730 | 2.701 ± 1.666 | 2.785 ± 1.719 | 0.211 (0.810) |

| Twelve months | WK* treatment (54 cases) | Sotalol treatment (54 cases) | Control (55 cases) | F value (P value)** |
| --- | --- | --- | --- | --- |
| FT3* (pmol/L) | 5.562 ± 0.969 | 5.740 ± 1.302 | 5.874 ± 1.406 | 0.866 (0.422) |
| FT4* (pmol/L) | 17.830 ± 3.485 | 18.532 ± 5.113 | 18.901 ± 5.388 | 0.716 (0.490) |
| TSH* (μIU/mL) | 2.519 ± 1.420 | 2.388 ± 1.423 | 2.409 ± 1.488 | 0.129 (0.879) |

*WK: Wenxin Keli, FT3: free triiodothyronine, FT4: free thyroxine, and TSH: thyroid stimulating hormone; **analyzed by one-way analysis of variance and least significant difference test.

Table 7
Cumulative recurrent PAF* at the end of follow-up in the second investigation.

| Groups (case number) | Male: total | Male: cumulative recurrent PAF | Female: total | Female: cumulative recurrent PAF |
| --- | --- | --- | --- | --- |
| WK* treatment (54 cases) | 27 | 0 | 27 | 1 |
| Sotalol treatment (54 cases) | 27 | 2 | 27 | 0 |
| Control (55 cases) | 35 | 4 | 20 | 5 |

χ2 value (P value), (WK) : (control)**: 6.886 (0.009); χ2 value (P value), (sotalol) : (control)**: 4.813 (0.028); χ2 value (P value), (WK) : (sotalol)**: 0.343 (0.558).

*WK: Wenxin Keli; PAF: paroxysmal atrial fibrillation; **analyzed by χ2 test.

Figure 3
The cumulative recurrence rate of paroxysmal atrial fibrillation during the follow-up in the different groups. In the second part of the study, we randomly assigned the successfully sinus rhythm reverted patients into one of the following three groups: 54 cases were given Wenxin Keli (9 g tid), 54 cases were given sotalol (40 mg bid), and 55 cases served as control. Kaplan-Meier analysis with the Log-Rank χ2 test was used to determine the cumulative recurrence rate of paroxysmal atrial fibrillation in the different groups during the one-year follow-up. The vertical axis shows the PAF recurrence-free rate and the horizontal axis the follow-up time (WK = Wenxin Keli, PAF = paroxysmal atrial fibrillation).
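The pairwise recurrence comparisons in Table 7 can likewise be reproduced from the raw counts; the following is a minimal sketch (again assuming SciPy and using the uncorrected χ2 test, which is what matches the published values):

```python
# Pairwise chi-square comparisons of cumulative PAF recurrence (Table 7).
from scipy.stats import chi2_contingency

recurred = {"WK": (1, 54), "sotalol": (2, 54), "control": (9, 55)}  # (events, n)

def compare(g1, g2):
    r1, n1 = recurred[g1]
    r2, n2 = recurred[g2]
    table = [[r1, n1 - r1], [r2, n2 - r2]]  # recurred vs. recurrence-free
    chi2, p, _, _ = chi2_contingency(table, correction=False)
    print(f"{g1} vs {g2}: chi2 = {chi2:.3f}, P = {p:.3f}")

compare("WK", "control")       # chi2 = 6.886, P = 0.009
compare("sotalol", "control")  # chi2 = 4.813, P = 0.028
compare("WK", "sotalol")       # chi2 = 0.343, P = 0.558
```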
### 3.3. Side Effects
Since Chinese medicine always has an inherent bitter taste, some patients inevitably complained of gastrointestinal discomfort or related symptoms after taking WK. Altogether, 10/91 cases (10.989%) in the first investigation and 6/54 (11.111%) in the second investigation reported various degrees of nausea and dizziness after taking WK. However, all of these patients tolerated the symptoms and continued with the medication. In the sotalol groups, gastrointestinal discomfort was far less frequent: only 3/89 cases (3.371%) in the first investigation and 2/54 (3.704%) in the second investigation reported mild stomach discomfort. However, after taking sotalol, 2/89 cases (2.247%) in the first investigation developed symptomatic bradycardia, although their PAF disappeared. The problem completely resolved after dose reduction from 80 mg bid to 40 mg bid for one patient and from 80 mg bid to 40 mg qd for the other; both patients' heart rhythm remained SR during the rest of the study. WK showed no bradycardia side effect. No other unwanted incidents were recorded.
## 4. Discussion
The risk of developing atrial fibrillation in patients with hyperthyroidism is approximately 6-fold that of the euthyroid population, which aggravates the overall condition of such patients [9]. Successful treatment of hyperthyroidism with either 131I or ATD is associated with a reversion to SR in a majority of patients [10, 21, 22]. However, pharmacological management of atrial fibrillation in patients with hyperthyroidism is still an issue lacking comprehensive analysis. In general, rate control is very important for reducing the mortality rate of patients with atrial fibrillation [6, 7]. Selective or nonselective β-blockers can provide rapid symptom relief by reducing the ventricular rate, but these agents are unlikely to convert PAF to SR. Pharmacotherapy of atrial fibrillation has an advantage over electrical cardioversion and catheter ablation methods because it can be used on an outpatient basis [23]. However, the optimal pharmacological means to restore and maintain SR in patients with hyperthyroidism-caused atrial fibrillation remains controversial.

WK has been identified as a novel drug against atrial fibrillation, and its mechanism has been elucidated recently. Burashnikov and colleagues [13] implemented an isolated canine perfused right atrial preparation and recorded atrial and ventricular transmembrane action potentials and pseudoelectrograms before and after intracoronary perfusion of various concentrations of WK. Interestingly, WK produced effects more noticeable in atrial tissue than in ventricular tissue, as it caused action potential duration shortening and prolongation of effective refractory periods in an atrial-selective manner. In addition, WK produced a greater reduction in the maximum rate of rise of the action potential upstroke and a larger increase in the diastolic threshold for excitation in atrial cells, suggestive of sodium-channel current blockade. This was confirmed in HEK293 cells expressing the sodium ion channel protein SCN5A, in which WK decreased the peak sodium-channel current in both dose-dependent and use-dependent fashions. Finally, the antiarrhythmic properties of WK were illustrated by the prolongation of the P-wave duration and both the prevention and termination of acetylcholine-mediated atrial fibrillation. The above mechanism of WK acts directly against the electrophysiological changes in hyperthyroidism-induced atrial fibrillation [10–12].

Contrary to the relatively new discovery of the WK mechanisms, traditional Chinese medicines were first documented about 2500 years ago by Confucian scholars and are still being used by tens of millions in China as well as around the world [14, 24, 25]. Clinical evidence for WK is based on the results of clinical trials carried out in Chinese hospitals for years. These studies have shown that WK can significantly improve heart palpitations, chest tightness, shortness of breath, fatigue, insomnia, and other symptoms of atrial fibrillation [15]. Currently, WK monotherapy or combined therapy with antiarrhythmic drugs is recommended as an effective method for atrial fibrillation in China. In fact, WK is the first Chinese-developed antiarrhythmic medicine to be approved by the Chinese state.
Besides its antiarrhythmic properties, clinical trials have also confirmed that WK can increase coronary blood flow, reduce myocardial oxygen consumption, enhance myocardial compliance, improve myocardial hypoxia tolerance, relieve anterior and posterior cardiac loading, and reduce myocardial tissue damage in patients with high blood pressure. This clinical evidence is in accordance with WK's recent basic mechanistic research findings [13, 16–18].

In the current investigation, we provided the first clinical evidence on WK as well as sotalol in the management of hyperthyroidism-induced PAF in two aspects. First, the drugs could assist SR reversion from PAF caused by hyperthyroidism. Second, the drugs could maintain SR afterwards. The second application seemed more important, since the first was very dependent on the degree of thyroid hormone reduction. We showed that WK and sotalol had nearly the same efficacies in assisting SR restoration. However, 131I was much more effective for hyperthyroidism management and thereby for gaining better SR reversion. We believe this was largely due to the better therapeutic results of 131I in controlling thyroid hormones (Tables 4 and 5). In the latter investigation, we showed that both WK and sotalol could maintain SR with equal ability in our cohort, who had already regained SR after treatment. The cumulative recurrence rate was significantly lower in the drug-treated patients than in the control cases (Figure 3). Our study proved the usefulness and effectiveness of WK as well as sotalol in the long-term maintenance management of such patients, which is indeed very important for clinical purposes.

Although WK's effect on hyperthyroidism-related atrial fibrillation has never been reported before, WK's anti-atrial-fibrillation ability is not a new discovery. All of WK's clinical studies have so far been published in Chinese; however, considering their relevance to the current study, further comments are deserved. Chen and colleagues [15] compiled and evaluated all available randomized controlled trials regarding WK's therapeutic effects against PAF (complicated with diseases other than hyperthyroidism) according to the PRISMA systematic review standard. There were nine trials analyzing the therapeutic effectiveness of WK alone or combined with Western medicine, compared with no medicine or Western medicine alone, in patients with PAF [26–34]. Most of the trials used amiodarone as the Western medicine, which cannot be used for hyperthyroidism-related atrial fibrillation. These trials were not homogeneous, requiring the use of the random effects model for statistical analysis. Meta-analysis results demonstrated a significant difference between the two therapeutic groups (the WK combination therapy was much better). Seven trials used the maintenance rate of SR at six months following treatment as an outcome measurement [33–39]. These seven trials compared the combination of WK plus Western medicine with Western medicine alone (mostly amiodarone). These trials were homogeneous, requiring the use of the fixed effects model for statistical analysis. The rate of maintenance of SR was greater in the former group than in the latter. Meta-analysis results showed a significant beneficial effect of the WK combination regimens compared with the Western medicine monotherapy.
The above literature is in conformity with our findings in that WK is an effective drug for the management of PAF, not only for initial SR reversion therapy but also for long-term maintenance therapy.

In conclusion, we demonstrated equal efficacies of WK and sotalol in assisting SR reversion from hyperthyroidism-related PAF. 131I was better at controlling thyroid hormones and achieving SR reversion. We also showed that both WK and sotalol could maintain SR with equal ability in those PAF hyperthyroidism patients who had already regained SR after treatment. Therefore, WK is a useful drug that should be advocated in the initial treatment of PAF caused by hyperthyroidism, as well as in the follow-up management strategy.
---
*Source: 101904-2015-05-17.xml* | 101904-2015-05-17_101904-2015-05-17.md | 50,045 | Wenxin Keli versus Sotalol for Paroxysmal Atrial Fibrillation Caused by Hyperthyroidism: A Prospective, Open Label, and Randomized Study | Zhaowei Meng; Jian Tan; Qing He; Mei Zhu; Xue Li; Jianping Zhang; Qiang Jia; Shen Wang; Guizhi Zhang; Wei Zheng | Evidence-Based Complementary and Alternative Medicine
(2015) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2015/101904 | 101904-2015-05-17.xml | ---
## Abstract
We aimed to compare effectiveness of Wenxin Keli (WK) and sotalol in assisting sinus rhythm (SR) restoration from paroxysmal atrial fibrillation (PAF) caused by hyperthyroidism, as well as in maintaining SR. We randomly prescribed WK (18 g tid) or sotalol (80 mg bid) to 91 or 89 patients. Since it was not ethical not to give patients antiarrhythmia drugs, no control group was set. Antithyroid drugs were given to 90 patients (45 in WK group, 45 in sotalol group);131I was given to 90 patients (46 in WK group, 44 in sotalol group). Three months later, SR was obtained in 83/91 or 80/89 cases from WK or sotalol groups (P=0.762). By another analysis, SR was obtained in 86/90 or 77/90 cases from 131I or ATD groups (P=0.022). Then, we randomly assigned the successfully SR-reverted patients into three groups: WK, sotalol, and control (no antiarrhythmia drug was given) groups. After twelve-month follow-up, PAF recurrence happened in 1/54, 2/54, and 9/55 cases, respectively. Log-Rank test showed significant higher PAF recurrent rate in control patients than either treatment (P=0.06). We demonstrated the same efficacies of WK and sotalol to assist SR reversion from hyperthyroidism-caused PAF. We also showed that either drug could maintain SR in such patients.
---
## Body
## 1. Introduction
Atrial fibrillation is the most common cardiac rhythm disturbance, increasing in prevalence with age. By definition, atrial fibrillation is a supraventricular tachyarrhythmia characterized by uncoordinated atrial activation with consequent deterioration of atrial mechanical function [1–3]. Clinicians should distinguish a first-detected episode of atrial fibrillation, whether or not it is symptomatic or self-limited. Patients with atrial fibrillation have markedly reduced survival rate compared with subjects without atrial fibrillation. In paroxysmal atrial fibrillation (PAF), sudden repeated changes in rhythm cause symptoms which most patients find very debilitating. In addition, PAF carries an increasing risk of thromboembolic events, when compared with chronic atrial fibrillation [4, 5]. Therefore, the effective treatment and prevention of this kind of arrhythmia has important clinical significance [1–3, 6, 7]. Atrial fibrillation occurs in 10% to 25% of patients with hyperthyroidism, more commonly in men and elderly patients [2, 8, 9]. Mainstay treatment is restoration of euthyroid state, which can be accomplished by antithyroid drugs, 131I, and surgery. Successful management of hyperthyroidism could result in restoration of sinus rhythm (SR) in up to two-thirds of patients [10]. Mechanism of hyperthyroidism-induced atrial fibrillation has been proposed [10–12]. It is generally agreed that shortening of action potential duration and effective refractory period play key roles in this electrophysiological abnormality.Wenxin Keli (WK) is a pure Chinese herb medicine. It has been reported to be useful in the treatment of atrial fibrillation [13–15], ventricular arrhythmia [16, 17], myocardial infarction-induced arrhythmia, heart failure, Brugada syndrome [18], and so forth. WK extract is composed of 5 components:Nardostachys chinensis Batal. extract (NcBe),Codonopsis, notoginseng, amber, and rhizoma polygonati. Burashnikov and colleagues [13] recently presented a fascinating electrophysiological investigation of WK on atrial fibrillation. This study showed that WK, as a novel atrial-selective sodium-channel blocking agent, could prolong action potential duration and effective refractory period. This investigation was hailed in the same issue’s editorial commentary as an emblematic milestone of integrating traditional Chinese medicine into Western medical practices [14]. In fact, WK monotherapy or in a combined antiarrhythmic regimen has been widely used for arrhythmia management in China. Chen and colleagues [15] recently conducted a meta-analysis and found solid evidence to prove WK as an effective drug to improve P-wave dispersion as well as to maintain SR in patients with PAF and its complications. However, the effect of WK on hyperthyroidism-induced atrial fibrillation has never been studied so far.Therefore, in this open label and randomized study, we aimed to prospectively compare the effectiveness between WK and sotalol in assisting SR reversion from hyperthyroidism-caused PAF. We also intended to study their effectiveness in the maintenance of SR. Sotalol was chosen as a comparing drug, because it was proven to have efficacy to restore and maintain SR from atrial fibrillation. And sotalol possessed both class II and class III antiarrhythmic effects [2, 3].
## 2. Patients and Methods
### 2.1. Patients
From January 2011 till January 2013, a series of 180 hyperthyroidism patients (diagnosed as Graves’ disease), who came to either Nuclear Medicine Department or Endocrinology Department, were consecutively enrolled in this prospective study. All of the patients had symptomatic PAF. There were 98 males (55.48±12.02 years old) and 82 females (56.12±9.98 years old). Entry criteria included PAF due to hyperthyroidism; electrocardiographic evidence of atrial fibrillation; symptoms such as palpitations, light headedness, chest pain, and dyspnoea in association with PAF; good compliance. Exclusion criteria were PAF due to other reasons, recent myocardial infarction, heart failure, inflammation such as pneumonia and diarrhea, unstable hepatic or renal function, poor compliance, and other major medical problems that would leave the patient with a life expectancy of less than two years. All enrolled patients gave their informed consent. This study was approved by the Institutional Review Board of Tianjin Medical University General Hospital (approval number #20101207A).
### 2.2. Definition
The diagnosis of PAF was made according to the American College of Cardiology Foundation/American Heart Association Task Force guideline definition; briefly, PAF had episodes that were generally less than 7 days (most less than 24 h), yet it was usually recurrent [1, 2].
### 2.3. Protocol
This study was designed as a prospective, open label, and randomized investigation. Generally, patients eligible for the study were allocated to one of the treatments using a computer generated random number algorithm. As reported [15], the clinical applications of WK against PAF include two aspects: restoration of SR from PAF and maintenance of SR afterwards. Therefore, we divided our study into two stages of sinus restoration and maintenance, in order to determine WK’s effects on these two aspects.Initially, baseline demographic data were obtained from the subjects. Relevant symptoms, cardiac diagnoses, and medical history were noted. Physical examination, 24-hour ambulatory electrocardiograph and/or regular 12-lead electrocardiograph, and serum biochemical tests (including electrolytes and renal and liver function) were carried out. All electrocardiographic recordings were reviewed by at least two experienced observers.In the first part of the study, we randomly prescribed WK (18 g tid) or sotalol (80 mg bid) to 91 patients (49 males, 42 females) or 89 patients (49 males, 40 females), respectively. This part of the study compared the effectiveness of WK and sotalol to restore SR from PAF. In this investigation, it was not ethical not to give the patients any antiarrhythmia drugs. So, we did not design control; we just compared WK and sotalol. Antithyroid drugs (ATD) were given to 90 patients (45 in WK group, 45 in sotalol group), and131I was also given to 90 patients (46 in WK group, 44 in sotalol group). Due to the similar ethical reason, no control group was set. ATD-treated patients were given methimazole (initial dose 30 mg per day). 131I therapeutic procedure was performed according to our protocol [19, 20]. Thyroid radioiodine uptake value was measured at 6, 24, 48, and 72 hours after an oral tracer dose uptake of 131I (about 74 kBq) by a nuclear multifunctional instrument (MN-6300XT Apparatus, Technological University, China). Then 131I effective half-life time (T1/2eff) and maximum uptake in thyroid were calculated. Thyroid ultrasonography was performed by using a color doppler ultrasound machine (GE Vingmed Ultrasound Vivid Five, Horten, Norway). Thyroid volume was calculated with the following formula: volume (cm3) = (width × length × thickness of left lobe) + (width × length × thickness of right lobe). Thyroid weight (g) = 0.479 × volume (cm3). Serum thyroid hormones were tested by an immunofluorometric assay, including free triiodothyronine (FT3, reference 3.50–6.50 pmol/L), free thyroxine (FT4, reference 11.50–23.50 pmol/L), and thyroid stimulating hormone (TSH, reference 0.20–5.00 μIU/mL). The therapeutic dose of 131I was calculated as the following formula [19, 20]: dose (37 MBq) = (thyroid weight (g) × absorption dose (Gy/g) × 0.67)/(T1/2eff (days) × maximum uptake (%)). Absorption dose = 100 Gy/g thyroid tissue; 0.67 is a rectified factor. Participants visited our outpatient department every month. At each scheduled follow-up visit, physical examination and routine laboratory tests were done. And, at the end of the third month, ambulatory electrocardiograph and/or regular 12-lead electrocardiograph were repeated; all relevant symptoms were documented. Disappearing of PAF was defined as restoration of SR.In the second part of the study, we randomly assigned the successfully SR-reverted patients into one of the following three groups: 54 cases were given WK (9 g tid), 54 cases were given sotalol (40 mg bid), and 55 cases served as control. 
In this part of the study, the control patients did not take any antiarrhythmic drug. Since patients recruited at this stage had a much improved thyroid status, and all of them were in SR when entering this investigation, it was ethically approved by our Institutional Review Board not to give the control patients any antiarrhythmic drugs. If patients were still hyperthyroid, an appropriate dose of methimazole was given to maintain euthyroidism; if they were in posttherapeutic hypothyroidism, an appropriate dose of levothyroxine was given to maintain euthyroidism. For hypothyroid patients who had already restored SR, WK and sotalol were stopped. Participants were asked to visit our outpatient department every three months. At each scheduled (or occasionally unscheduled) follow-up visit, physical examination and routine laboratory tests were repeated. At the end of the twelfth month, ambulatory and/or regular 12-lead electrocardiography was done, and all relevant symptoms were documented. The time point of PAF recurrence, its frequency, and related symptoms were collected as well. A participant flow chart is presented in Figure 1 to illustrate the whole study process.

Figure 1
Participant flow chart. Initially, in the first stage of the study (a), 180 eligible hyperthyroidism patients with paroxysmal atrial fibrillation were randomized to either Wenxin Keli (91 cases) or sotalol (89 cases) treatment for sinus rhythm restoration. At the end of the first-stage intervention, 83/91 and 80/89 cases had reverted to sinus rhythm, respectively; 8/91 and 9/89 cases had not. These 17 patients (still in atrial fibrillation) were not eligible for the second part of the study and dropped out. In the second stage of the study (b), all sinus rhythm reverted patients (163 cases) were randomized into one of the following three groups: WK (54 cases), sotalol (54 cases), and control (55 cases). The purpose was to observe the drugs' sinus rhythm maintenance effect. At the end of the second-stage intervention, 1/54, 2/54, and 9/55 cases had recurrent paroxysmal atrial fibrillation, respectively.
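The thyroid-weight and 131I-dose formulas in the protocol above can be checked with a minimal Python sketch. All numeric inputs below (lobe dimensions, effective half-life, uptake) are hypothetical illustration values, not measurements from this study.

```python
# A minimal sketch of the thyroid-weight and 131I-dose formulas above.
# All inputs are hypothetical illustration values, not study data.

def thyroid_weight_g(left_cm, right_cm):
    """weight (g) = 0.479 x volume (cm3); each lobe given as a
    (width, length, thickness) tuple in cm."""
    volume = (left_cm[0] * left_cm[1] * left_cm[2]
              + right_cm[0] * right_cm[1] * right_cm[2])
    return 0.479 * volume

def iodine_dose_37mbq(weight_g, t_half_eff_days, max_uptake_percent,
                      absorption_gy_per_g=100.0, correction=0.67):
    """dose (in units of 37 MBq, i.e., 1 mCi) =
    (weight x absorption dose x 0.67) / (T1/2eff x maximum uptake %)."""
    return (weight_g * absorption_gy_per_g * correction) / (
        t_half_eff_days * max_uptake_percent)

w = thyroid_weight_g((3.0, 5.0, 2.0), (3.2, 5.5, 2.1))     # ~32.1 g
d = iodine_dose_37mbq(w, t_half_eff_days=5.5, max_uptake_percent=65.0)
print(f"weight = {w:.1f} g, dose = {d:.1f} x 37 MBq")      # ~6 x 37 MBq
```

With these illustrative inputs the formula yields roughly 6 × 37 MBq (6 mCi), which is the order of dose mentioned in the case of Figure 2.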
### 2.4. Statistical Analysis
All data were presented as mean ± SD. Statistics were performed with SPSS 17.0 (SPSS Incorporated, IL, USA). Differences between two groups were analyzed by the independent samples t-test. Differences between multiple groups were analyzed by one-way analysis of variance (ANOVA), followed by the least significant difference (LSD) test for multiple comparisons among the groups. The χ2 test was adopted to compare case number changes of patients after the different treatments, and also to check whether sex had a significant influence on the intergroup differences. Kaplan-Meier analysis with the Log-Rank χ2 test was used to estimate the cumulative recurrence rate of PAF in the different groups. A P value not exceeding 0.05 was considered statistically significant.
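As a rough illustration of this workflow outside SPSS, the following sketch runs the same three classes of tests with SciPy. The restored-SR counts come from Section 3.1; the hormone arrays are hypothetical placeholders for per-patient data that the paper does not publish.

```python
# A rough SciPy sketch of the tests described above. The SR counts are
# from Section 3.1; the hormone arrays are hypothetical placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wk_ft3 = rng.normal(24.6, 5.1, 91)         # placeholder FT3 samples
sot_ft3 = rng.normal(24.4, 5.0, 89)        # placeholder FT3 samples

t, p_t = stats.ttest_ind(wk_ft3, sot_ft3)              # two-group t-test

table = [[83, 8], [80, 9]]                              # restored / not
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)    # chi-square test

g1 = rng.normal(5.5, 1.0, 54)                           # placeholder groups
g2 = rng.normal(5.7, 1.0, 54)
g3 = rng.normal(5.7, 1.0, 55)
f, p_f = stats.f_oneway(g1, g2, g3)                     # one-way ANOVA

print(f"t={t:.3f} (p={p_t:.3f}); chi2={chi2:.3f} (p={p_chi2:.3f}); "
      f"F={f:.3f} (p={p_f:.3f})")
```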
## 3. Results
### 3.1. Sinus Rhythm Restoration by Different Therapies
First, baseline information revealed no significant differences in hyperthyroidism history, PAF history, or thyroid hormone levels between the groups (Table 1). Data in this investigation were analyzed in two ways. In the first analysis, three months after treatment with WK or sotalol, SR was obtained in 83/91 cases (91.209%) or 80/89 cases (89.888%), respectively; the χ2 test showed no significant difference, indicating equal efficacies of the two drugs in assisting SR reversion (Table 2). Sex did not cause significant differences between the groups (Table 2). Thyroid hormones also demonstrated no differences before or after treatment (Table 3). In the second analysis, after treatment with 131I or ATD, SR was obtained in 86/90 cases or 77/90 cases, respectively; the χ2 test showed a significant difference, indicating better effects of 131I treatment (Table 4). Thyroid hormones displayed no differences before treatment, yet significant differences existed after treatment (Table 5). A typical case of successful conversion from PAF to SR is presented (Figure 2).

Table 1
Baseline information of all participants.

| Parameters | WK* treatment (91 cases) | Sotalol treatment (89 cases) | t value (P value)** |
| --- | --- | --- | --- |
| Hyperthyroidism history (years) | 8.374 ± 2.619 | 8.551 ± 2.680 | 0.448 (0.655) |
| PAF* history (years) | 4.099 ± 1.599 | 4.213 ± 1.675 | 0.469 (0.639) |
| FT3* (pmol/L) | 24.613 ± 5.059 | 24.405 ± 5.006 | −0.278 (0.781) |
| FT4* (pmol/L) | 118.697 ± 29.213 | 116.132 ± 28.266 | −0.598 (0.550) |
| TSH* (μIU/mL) | 0.007 ± 0.010 | 0.009 ± 0.015 | 1.191 (0.235) |

*WK: Wenxin Keli, PAF: paroxysmal atrial fibrillation, FT3: free triiodothyronine, FT4: free thyroxine, and TSH: thyroid stimulating hormone; **analyzed by independent samples t-test.

Table 2
Case number distribution of patients after WK* or sotalol treatments in the first investigation.

| Groups (case number) | Male, total | Male, SR* restored | Female, total | Female, SR* restored |
| --- | --- | --- | --- | --- |
| WK* treatment (91 cases) | 49 | 45 | 42 | 38 |
| Sotalol treatment (89 cases) | 49 | 44 | 40 | 36 |
| χ2 value (P value), (WK):(sotalol)** | 0.092 (0.762) | | | |
| χ2 value (P value), (male):(female)** | 0.017 (0.896) | | | |

*WK: Wenxin Keli; SR: sinus rhythm; **analyzed by χ2 test.

Table 3
Comparisons of thyroid hormones in patients before and after WK* or sotalol treatments in the first investigation.

| Time point | Parameter | WK* treatment (91 cases) | Sotalol treatment (89 cases) | t value (P value)** |
| --- | --- | --- | --- | --- |
| Before treatments | FT3* (pmol/L) | 24.613 ± 5.059 | 24.405 ± 5.006 | −0.278 (0.781) |
| Before treatments | FT4* (pmol/L) | 118.697 ± 29.213 | 116.132 ± 28.266 | −0.598 (0.550) |
| Before treatments | TSH* (μIU/mL) | 0.007 ± 0.010 | 0.009 ± 0.147 | 1.191 (0.235) |
| Three months after treatments | FT3* (pmol/L) | 6.495 ± 3.713 | 6.596 ± 3.740 | 0.182 (0.855) |
| Three months after treatments | FT4* (pmol/L) | 21.447 ± 11.727 | 21.655 ± 10.612 | 0.125 (0.901) |
| Three months after treatments | TSH* (μIU/mL) | 6.210 ± 10.002 | 5.752 ± 8.915 | −0.324 (0.746) |

*WK: Wenxin Keli, FT3: free triiodothyronine, FT4: free thyroxine, and TSH: thyroid stimulating hormone; **analyzed by independent samples t-test.

Table 4
Case number distribution of patients after 131I or ATD* treatments in the first investigation.

| Groups (case number) | Male, total | Male, SR* restored | Female, total | Female, SR* restored |
| --- | --- | --- | --- | --- |
| 131I treatment (90 cases) | 49 | 47 | 41 | 39 |
| ATD* treatment (90 cases) | 49 | 42 | 41 | 35 |
| χ2 value (P value), (131I):(ATD)** | 5.262 (0.022) | | | |
| χ2 value (P value), (male):(female)** | 0.017 (0.896) | | | |

*ATD: antithyroid drugs; SR: sinus rhythm; **analyzed by χ2 test.

Table 5
Comparisons of thyroid hormones in patients before and after 131I or ATD* treatments in the first investigation.

| Time point | Parameter | 131I treatment (90 cases) | ATD* treatment (90 cases) | t value (P value)** |
| --- | --- | --- | --- | --- |
| Before treatments | FT3* (pmol/L) | 24.056 ± 5.321 | 24.964 ± 4.685 | 1.215 (0.226) |
| Before treatments | FT4* (pmol/L) | 117.633 ± 29.225 | 117.225 ± 28.322 | −0.095 (0.924) |
| Before treatments | TSH* (μIU/mL) | 0.007 ± 0.011 | 0.009 ± 0.014 | 0.757 (0.450) |
| Three months after treatments | FT3* (pmol/L) | 5.837 ± 2.830 | 7.252 ± 4.330 | 2.595 (0.010) |
| Three months after treatments | FT4* (pmol/L) | 19.378 ± 8.292 | 23.722 ± 13.120 | 2.655 (0.009) |
| Three months after treatments | TSH* (μIU/mL) | 6.427 ± 9.702 | 5.539 ± 9.237 | −0.629 (0.530) |

*ATD: antithyroid drugs, FT3: free triiodothyronine, FT4: free thyroxine, and TSH: thyroid stimulating hormone; **analyzed by independent samples t-test.

Figure 2
A typical case of successful sinus rhythm restoration from paroxysmal atrial fibrillation. A 64-year-old male patient had been diagnosed with Graves' disease for eight years and had had paroxysmal atrial fibrillation for three years (a). He was given 6 mCi of 131I for the treatment of Graves' disease, and Wenxin Keli (18 g tid) was prescribed during and after the 131I treatment. Baseline free triiodothyronine, free thyroxine, and thyroid stimulating hormone were 21.46 pmol/L, 104.8 pmol/L, and 0.011 μIU/mL, respectively. One month later, when sinus rhythm was restored (b), free triiodothyronine, free thyroxine, and thyroid stimulating hormone were 3.35 pmol/L, 12.89 pmol/L, and 4.52 μIU/mL, respectively. At the three-month end-point of the first investigation, thyroid hormones were still normal. After entering the second investigation, Wenxin Keli (9 g tid) was prescribed during the follow-up. His thyroid function remained normal, and his heart rhythm remained sinus during the rest of the study.
### 3.2. Sinus Rhythm Maintenance by Different Therapies
Data in the second investigation were analyzed by two methods. First, at the end of the twelve-month follow-up, recurrent PAF happened in 1/54 (1.852%), 2/54 (3.704%), and 9/55 (16.364%) cases in the WK, sotalol, and control groups, respectively. We found no differences in thyroid hormones at any follow-up time point among the groups (Table 6). However, the χ2 test showed significant differences between the WK and control groups and between the sotalol and control groups, while there was no difference between the WK and sotalol groups (Table 7). Second, Kaplan-Meier curves were drawn to determine the cumulative recurrence rate of PAF in the different groups (Figure 3). The Log-Rank test showed a significantly higher PAF recurrence rate in control patients compared with either treatment (χ2=10.229, P=0.006). Therefore, both WK and sotalol could successfully maintain SR.

Table 6
Comparisons of thyroid hormones at each follow-up time point in the second investigation.

| Time point | Parameter | WK* treatment (54 cases) | Sotalol treatment (54 cases) | Control (55 cases) | F value (P value)** |
| --- | --- | --- | --- | --- | --- |
| Baseline | FT3* (pmol/L) | 5.532 ± 2.372 | 5.752 ± 2.608 | 5.680 ± 2.486 | 0.110 (0.896) |
| Baseline | FT4* (pmol/L) | 18.469 ± 7.182 | 19.351 ± 7.577 | 19.046 ± 7.576 | 0.195 (0.823) |
| Baseline | TSH* (μIU/mL) | 7.126 ± 10.449 | 5.859 ± 8.668 | 6.832 ± 10.110 | 0.249 (0.780) |
| Three months | FT3* (pmol/L) | 5.035 ± 0.934 | 5.129 ± 0.908 | 5.098 ± 0.965 | 0.140 (0.870) |
| Three months | FT4* (pmol/L) | 15.664 ± 3.112 | 16.061 ± 3.336 | 15.994 ± 3.465 | 0.222 (0.801) |
| Three months | TSH* (μIU/mL) | 4.683 ± 4.211 | 4.083 ± 3.456 | 4.352 ± 3.885 | 0.326 (0.722) |
| Six months | FT3* (pmol/L) | 5.257 ± 0.930 | 5.373 ± 0.915 | 5.381 ± 1.057 | 0.277 (0.758) |
| Six months | FT4* (pmol/L) | 16.446 ± 3.339 | 16.916 ± 3.727 | 16.955 ± 4.014 | 0.317 (0.729) |
| Six months | TSH* (μIU/mL) | 4.032 ± 3.492 | 3.567 ± 2.885 | 3.756 ± 3.202 | 0.288 (0.750) |
| Nine months | FT3* (pmol/L) | 5.367 ± 0.975 | 5.458 ± 0.958 | 5.590 ± 1.328 | 0.566 (0.569) |
| Nine months | FT4* (pmol/L) | 17.184 ± 3.208 | 17.760 ± 4.131 | 18.084 ± 4.833 | 0.668 (0.514) |
| Nine months | TSH* (μIU/mL) | 2.912 ± 1.730 | 2.701 ± 1.666 | 2.785 ± 1.719 | 0.211 (0.810) |
| Twelve months | FT3* (pmol/L) | 5.562 ± 0.969 | 5.740 ± 1.302 | 5.874 ± 1.406 | 0.866 (0.422) |
| Twelve months | FT4* (pmol/L) | 17.830 ± 3.485 | 18.532 ± 5.113 | 18.901 ± 5.388 | 0.716 (0.490) |
| Twelve months | TSH* (μIU/mL) | 2.519 ± 1.420 | 2.388 ± 1.423 | 2.409 ± 1.488 | 0.129 (0.879) |

*WK: Wenxin Keli, FT3: free triiodothyronine, FT4: free thyroxine, and TSH: thyroid stimulating hormone; **analyzed by one-way analysis of variance and least significant difference test.

Table 7
Cumulative recurrent PAF* at the end of follow-up in the second investigation.

| Groups (case number) | Male, total | Male, cumulative recurrent PAF | Female, total | Female, cumulative recurrent PAF |
| --- | --- | --- | --- | --- |
| WK* treatment (54 cases) | 27 | 0 | 27 | 1 |
| Sotalol treatment (54 cases) | 27 | 2 | 27 | 0 |
| Control (55 cases) | 35 | 4 | 20 | 5 |
| χ2 value (P value), (WK):(control)** | 6.886 (0.009) | | | |
| χ2 value (P value), (sotalol):(control)** | 4.813 (0.028) | | | |
| χ2 value (P value), (WK):(sotalol)** | 0.343 (0.558) | | | |

*WK: Wenxin Keli; PAF: paroxysmal atrial fibrillation; **analyzed by χ2 test.

Figure 3
The cumulative recurrence rate of paroxysmal atrial fibrillation during the follow-up in the different groups. In the second part of the study, we randomly assigned the successfully sinus rhythm reverted patients to one of the following three groups: 54 cases were given Wenxin Keli (9 g tid), 54 cases were given sotalol (40 mg bid), and 55 cases served as control. Kaplan-Meier analysis with the Log-Rank χ2 test was used to determine the cumulative recurrence rate of paroxysmal atrial fibrillation in the different groups during the one-year follow-up. The vertical axis shows the PAF recurrence-free rate and the horizontal axis the follow-up time (WK = Wenxin Keli, PAF = paroxysmal atrial fibrillation).
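For readers without SPSS, a minimal sketch of the Kaplan-Meier and log-rank analysis behind Figure 3 follows, using the third-party lifelines package (not used by the authors). The per-patient recurrence times are hypothetical placeholders that match only the group-level counts (1/54 WK events versus 9/55 control events).

```python
# A minimal sketch of the Kaplan-Meier / log-rank analysis in Figure 3,
# using the third-party lifelines package instead of SPSS. Follow-up is
# censored at 12 months; event times are hypothetical placeholders.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(42)
wk_t = np.full(54, 12.0);  wk_e = np.zeros(54)
wk_t[0], wk_e[0] = rng.uniform(1, 12), 1            # 1 recurrence in WK
ctrl_t = np.full(55, 12.0); ctrl_e = np.zeros(55)
ctrl_t[:9], ctrl_e[:9] = rng.uniform(1, 12, 9), 1   # 9 recurrences in control

kmf = KaplanMeierFitter()
kmf.fit(wk_t, event_observed=wk_e, label="WK")
print(kmf.survival_function_.tail(1))               # recurrence-free rate

res = logrank_test(wk_t, ctrl_t,
                   event_observed_A=wk_e, event_observed_B=ctrl_e)
print(f"log-rank chi2={res.test_statistic:.3f}, p={res.p_value:.4f}")
```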
### 3.3. Side Effects
Since Chinese medicine always has an inherent bitter taste, some patients unavoidably complained of gastrointestinal discomfort or related symptoms after taking WK. Altogether, 10/91 cases (10.989%) in the first investigation and 6/54 (11.111%) in the second investigation reported various degrees of nausea and dizziness after taking WK. However, all patients tolerated the drug and continued with the medication. In the sotalol groups, gastrointestinal discomfort was far less frequent; only 3/89 cases (3.371%) in the first investigation and 2/54 (3.704%) in the second investigation reported mild stomach discomfort. However, after taking sotalol, 2/89 cases (2.247%) in the first investigation developed symptomatic bradycardia, although their PAF disappeared. The problem resolved completely after dose reduction from 80 mg bid to 40 mg bid in one patient and from 80 mg bid to 40 mg qd in the other; both patients remained in SR during the rest of the study. WK showed no bradycardia side effect. No other unwanted incidents were recorded.
## 4. Discussion
The risk of developing atrial fibrillation in patients with hyperthyroidism is approximately 6-fold that of the euthyroid population, which aggravates the overall condition of such patients [9]. Successful treatment of hyperthyroidism with either 131I or ATD is associated with reversion to SR in a majority of patients [10, 21, 22]. However, pharmacological management of atrial fibrillation in patients with hyperthyroidism still lacks comprehensive analysis. In general, rate control is very important to reduce the mortality of patients with atrial fibrillation [6, 7]. Selective or nonselective β-blockers can provide rapid symptom relief by reducing the ventricular rate, but these agents are unlikely to convert PAF to SR. Pharmacotherapy of atrial fibrillation has an advantage over electrical cardioversion and catheter ablation because it can be used on an outpatient basis [23]. However, the optimal pharmacological means to restore and maintain SR in patients with hyperthyroidism-caused atrial fibrillation remains controversial.

WK has been identified as a novel drug against atrial fibrillation, and its mechanism has been elucidated recently. Burashnikov and colleagues [13] used an isolated canine perfused right atrial preparation and recorded atrial and ventricular transmembrane action potentials and pseudoelectrograms before and after intracoronary perfusion of various concentrations of WK. Interestingly, WK produced effects more noticeable in atrial than in ventricular tissue, as it caused action potential duration shortening and prolongation of the effective refractory period in an atrial-selective manner. In addition, WK produced a greater reduction in the maximum rate of rise of the action potential upstroke and a larger increase in the diastolic threshold for excitation in atrial cells, suggestive of sodium-channel current blockade. This was confirmed in HEK293 cells expressing the sodium ion channel protein SCN5A, in which WK decreased the peak sodium-channel current in both dose-dependent and use-dependent fashions. Finally, the antiarrhythmic properties of WK were illustrated by prolongation of the P-wave duration and both the prevention and termination of acetylcholine-mediated atrial fibrillation. This mechanism of WK acts directly against the electrophysiological changes in hyperthyroidism-induced atrial fibrillation [10–12].

In contrast to the relatively recent discovery of the WK mechanisms, traditional Chinese medicines were first documented about 2500 years ago by Confucian scholars and are still used by tens of millions in China and around the world [14, 24, 25]. Clinical evidence for WK is based on the results of clinical trials carried out in Chinese hospitals over many years. These studies have shown that WK can significantly improve heart palpitations, chest tightness, shortness of breath, fatigue, insomnia, and other symptoms of atrial fibrillation [15]. Currently, WK monotherapy or combined therapy with antiarrhythmic drugs is recommended as an effective treatment for atrial fibrillation in China. In fact, WK is the first Chinese-developed antiarrhythmic medicine to be approved by the Chinese state.
Besides its antiarrhythmic properties, clinical trials have also confirmed that WK can increase coronary blood flow, reduce myocardial oxygen consumption, enhance myocardial compliance, improve myocardial hypoxia tolerance, relieve anterior and posterior cardiac loading, and reduce myocardial tissue damage in patients with high blood pressure. This clinical evidence is consistent with recent basic mechanistic findings on WK [13, 16–18].

In the current investigation, we provided the first clinical evidence for WK as well as sotalol in the management of hyperthyroidism-induced PAF in two respects. First, the drugs could assist SR reversion from PAF caused by hyperthyroidism. Second, the drugs could maintain SR afterwards. The second application seemed more important, since the first was strongly dependent on the degree of thyroid hormone reduction. We showed that WK and sotalol had nearly the same efficacy in assisting SR restoration. However, 131I was much more effective for hyperthyroidism management and thereby yielded better SR reversion; we believe this was largely due to the better control of thyroid hormones achieved by 131I (Tables 4 and 5). In the latter investigation, we showed that both WK and sotalol could maintain SR with equal ability in our cohort, who had already regained SR after treatment. The cumulative recurrence rate was significantly lower in drug-treated patients than in control cases (Figure 3). Our study proved the usefulness and effectiveness of WK as well as sotalol in the long-term maintenance management of such patients, which is very important for clinical purposes.

Although WK's effect on hyperthyroidism-related atrial fibrillation has never been reported before, WK's anti-atrial-fibrillation ability is not a new discovery. All of WK's clinical studies have so far been published in Chinese; however, considering their relevance to the current study, further comment is warranted. Chen and colleagues [15] compiled and evaluated all available randomized controlled trials regarding WK's therapeutic effects against PAF (complicated by diseases other than hyperthyroidism) according to the PRISMA systematic review standard. Nine trials analyzed the therapeutic effectiveness of WK alone or combined with Western medicine, compared with no medicine or Western medicine alone, in patients with PAF [26–34]. Most of the trials used amiodarone as the Western medicine, which cannot be used for hyperthyroidism-related atrial fibrillation. These trials were not homogeneous, requiring the random effects model for statistical analysis. Meta-analysis demonstrated a significant difference between the two therapeutic groups (the WK combination therapy was much better). Seven trials used the maintenance rate of SR at six months after treatment as an outcome measure [33–39]. These seven trials compared the combination of WK plus Western medicine with Western medicine alone (mostly amiodarone). These trials were homogeneous, allowing the fixed effects model for statistical analysis. The rate of SR maintenance was greater in the former group than in the latter. Meta-analysis showed a significant beneficial effect of the WK combination regimens compared with Western medicine monotherapy.
The above literature is consistent with our findings in that WK is an effective drug for the management of PAF, not only for initial SR reversion therapy but also for long-term maintenance therapy. In conclusion, we demonstrated equal efficacies of WK and sotalol in assisting SR reversion from hyperthyroidism-related PAF; 131I was better at controlling thyroid hormones and thereby achieving SR reversion. We also showed that both WK and sotalol could maintain SR with equal abilities in those hyperthyroid PAF patients who had already regained SR after treatment. Therefore, WK is a useful drug that should be advocated both in the initial treatment of PAF caused by hyperthyroidism and in the follow-up management strategy.
---
*Source: 101904-2015-05-17.xml* | 2015 |
# Security Analysis of HMAC/NMAC by Using Fault Injection
**Authors:** Kitae Jeong; Yuseop Lee; Jaechul Sung; Seokhie Hong
**Journal:** Journal of Applied Mathematics
(2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101907
---
## Abstract
Choukri and Tunstall (2005) showed that if the number of rounds in AES is decreased by injecting faults, it is possible to recover the secret key. In this paper, we propose fault injection attacks on HMAC/NMAC by applying the main idea of their attack. These attacks are applicable to HMAC/NMAC based on the MD-family hash functions and can recover the secret key with negligible computational complexity. In particular, the results on HMAC/NMAC-SHA-2 are the first known key recovery attacks so far.
---
## Body
## 1. Introduction
HMAC and NMAC are hash-based message authentication codes proposed in [1]. The construction of HMAC/NMAC is based on a keyed hash function. Let H be an iterated Merkle-Damgård hash function, which defines a keyed hash function HK by replacing IV with the key K. Then, HMAC and NMAC can be defined as follows:

$$\mathrm{HMAC}_K(M)=H\big(\bar{K}\oplus \mathrm{opad}\,\|\,H(\bar{K}\oplus \mathrm{ipad}\,\|\,M)\big),\qquad \mathrm{NMAC}_{K_1,K_2}(M)=H_{K_1}\big(H_{K_2}(M)\big). \tag{1}$$
Here, M is a message; K and (K1, K2) are the secret keys of HMAC and NMAC, respectively; K̄ means K padded to a single block; and opad (=0x5c5c⋯) and ipad (=0x3636⋯) are two one-block-length constants. Until now, many theoretical cryptanalytic results on HMAC/NMAC have been proposed [2–4]. For example, Wang et al. presented key recovery attacks on HMAC/NMAC-MD4 with $2^{72}$ MAC queries and $2^{77}$ MD4 computations [4]. On the other hand, McEvoy et al. introduced a differential power analysis on HMAC-SHA-256 in [5]. This attack does not allow recovery of the secret key, but rather of a secret intermediate hash value of SHA-256; it leads to forging the MACs for arbitrary messages. A correlation power analysis on HMAC based on six SHA-3 candidates was presented in [6]; this is also a forgery attack. To our knowledge, there is no key recovery attack on HMAC by using side channel analysis.

Side channel analysis exploits easily accessible information such as power consumption, running time, and input-output behavior under malfunctions. It is often much more powerful than classical cryptanalysis such as differential cryptanalysis and linear cryptanalysis. Since Kocher introduced timing attacks in [7], many side channel analyses such as differential fault analysis [8] and fault injection attacks [9] have been proposed [10–12].

Choukri and Tunstall proposed a fault injection attack on AES [13]. The fault injection method used a transient glitch on the power supplied to the smart card. In general, the implementation of a symmetric cryptographic algorithm in the PIC assembly language has the following format (2):

```
    movlw   0Ah            ; load the required round count (0A hex = 10)
    movwf   RoundCounter   ; store it in the RAM variable
RoundLabel
    call    RoundFunction  ; execute one round
    decfsz  RoundCounter   ; decrement; skip the next instruction at zero
    goto    RoundLabel     ; otherwise repeat the loop
```
The RAM variable (RoundCounter) is set to the number of rounds required (in the case of AES, 0A in hexadecimal). The round function is executed, represented here by a call to RoundFunction. The RoundCounter variable is then decremented, and the round is repeated until RoundCounter equals zero, at which point the loop exits. It is this loop that the attack changes so that it exits earlier than expected. The target of the fault is the decfsz step, which consists of a decrement, a test, and a conditional jump: when the test is positive, one instruction is skipped; otherwise, the next instruction is executed. The aim of the attack is to reduce the algorithm to one round. It is not possible to remove the first round entirely, as the first conditional test comes after the first round. The cryptanalysis of the resulting algorithm is thus simple and requires only two plaintext/ciphertext pairs.

In this paper, we propose fault injection attacks on HMAC/NMAC. Our fault assumption is based on that of [13]; that is, it is assumed that we can decrease the number of steps in the target compression function by injecting some faults. Our attack can be applied to HMAC/NMAC based on the MD-family hash functions and recovers the secret key with negligible computational complexity. As concrete examples, we apply our attack to HMAC/NMAC based on MD4, MD5, and SHA-2. Our attack results are summarized in Table 1. In the case of HMAC-SHA-256, for any message, we can recover the n-word secret key with [n/3] fault injections and only a negligible computational complexity. Also, we need only 2·[n/3] fault injections to recover the 2n-word secret key of NMAC-SHA-256. Thus, when n is 4, that is, for a 4-word (8-word) secret key, we require just two (four) fault injections to recover the secret key of HMAC-SHA-256 (NMAC-SHA-256), respectively. Note that the attack results on HMAC/NMAC-SHA-2 are the first known key recovery attacks on them.

Table 1
Our attack results on HMAC/NMAC.

| Algorithm | No. of injected faults | Algorithm | No. of injected faults |
| --- | --- | --- | --- |
| HMAC-MD4 | [n/3] | NMAC-MD4 | 2·[n/3] |
| HMAC-MD5 | [n/3] | NMAC-MD5 | 2·[n/3] |
| HMAC-SHA-224 | [n/3] | NMAC-SHA-224 | [n/3] + [n/2] |
| HMAC-SHA-256 | [n/3] | NMAC-SHA-256 | 2·[n/3] |
| HMAC-SHA-384 | [n/3] | NMAC-SHA-384 | [n/3] + n |
| HMAC-SHA-512 | [n/3] | NMAC-SHA-512 | 2·[n/3] |

This paper is organized as follows: in Section 2, we briefly introduce the MD-family hash functions. Then, we describe the fault injection attacks on HMAC and NMAC in Sections 3 and 4, respectively. Finally, we give a conclusion in Section 5.
## 2. MD-Family Hash Function
Since MD4 [14] was introduced in 1990, MD-family hash functions such as MD5 [15] and SHA-2 [16], whose design rationale is based on that of MD4, have been proposed. To compute the hash value of a message M of any size, the MD-family hash functions divide M into message blocks (M0,…,Mt) of fixed length b and obtain the hash value by using a compression function f. A compression function f takes a b-bit message string Mi−1 and an s-bit chaining variable IHVi−1 as inputs and outputs an updated s-bit chaining variable IHVi. IHVi is computed by iteratively applying a step function, which consists of addition, Boolean function, and rotation operations. After the step function has been applied repeatedly, IHVi is updated by adding IHVi−1. Table 2 presents the parameters of MD4, MD5, and SHA-2.

Table 2
Parameters of MD4, MD5, and SHA-2.

| Hash function | Message block | Chaining value | Hash value | Step function | Word size |
| --- | --- | --- | --- | --- | --- |
| MD4 | 512 bits | 128 bits | 128 bits | 48 steps | 32 bits |
| MD5 | 512 bits | 128 bits | 128 bits | 64 steps | 32 bits |
| SHA-224 | 512 bits | 256 bits | 224 bits | 64 steps | 32 bits |
| SHA-256 | 512 bits | 256 bits | 256 bits | 64 steps | 32 bits |
| SHA-384 | 1024 bits | 512 bits | 384 bits | 80 steps | 64 bits |
| SHA-512 | 1024 bits | 512 bits | 512 bits | 80 steps | 64 bits |

As a concrete example, we briefly introduce SHA-2 (SHA-224, SHA-256, SHA-384, and SHA-512), one of the most important groups of MD-family hash functions. In SHA-224/256, the word size is 32 bits. The message string is first padded to a multiple of 512 bits and divided into 512-bit blocks. A compression function f takes a 512-bit message string and a 256-bit chaining variable as inputs and outputs an updated 256-bit chaining variable. It consists of a message expansion and a data processing part. The message block is expanded with the following message expansion function, where (m0,…,m15)=Mi, "+" denotes word-wise addition, σ0(X)=(X⋙7)⊕(X⋙18)⊕(X≫3), and σ1(X)=(X⋙17)⊕(X⋙19)⊕(X≫10):

$$W_j = m_j \quad (0 \le j < 16), \qquad W_j = \sigma_1(W_{j-2}) + W_{j-7} + \sigma_0(W_{j-15}) + W_{j-16} \quad (16 \le j < 64). \tag{3}$$

The data processing computes IHVi as follows, where Vj denotes a 256-bit value consisting of the eight words Aj, Bj, Cj, Dj, Ej, Fj, Gj, and Hj:

$$V_0 = IHV_{i-1}, \qquad V_{j+1} = R_j(V_j, W_j) \quad (j = 0, \dots, 63), \qquad IHV_i = IHV_{i-1} + V_{64}. \tag{4}$$
A step function Rj is defined as follows. Here, Kj is a constant for each step; Ch(X,Y,Z)=(X∧Y)⊕(¬X∧Z), Maj(X,Y,Z)=(X∧Y)⊕(X∧Z)⊕(Y∧Z), Σ0(X)=(X⋙2)⊕(X⋙13)⊕(X⋙22), and Σ1(X)=(X⋙6)⊕(X⋙11)⊕(X⋙25):

$$T1_j = H_j+\Sigma_1(E_j)+\mathrm{Ch}(E_j,F_j,G_j)+K_j+W_j, \qquad T2_j=\Sigma_0(A_j)+\mathrm{Maj}(A_j,B_j,C_j), \tag{5}$$
$$A_{j+1}=T1_j+T2_j,\quad B_{j+1}=A_j,\quad C_{j+1}=B_j,\quad D_{j+1}=C_j,\quad E_{j+1}=D_j+T1_j,\quad F_{j+1}=E_j,\quad G_{j+1}=F_j,\quad H_{j+1}=G_j.$$
SHA-224 outputs the leftmost 224 bits of IHVt+1 as the hash value, and SHA-256 outputs IHVt+1 itself. The structures of SHA-384/512 are similar to those of SHA-224/256. In SHA-384/512, the word size is double that of SHA-224/256; thus, a message block is 1024 bits, the chaining value is 512 bits, and the compression function consists of 80 steps.
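A minimal Python sketch of the message expansion (3) and step function (5) may help fix the notation; all arithmetic is word-wise modulo 2^32, and the round constants Kj are passed in rather than reproduced here.

```python
# A minimal sketch of the SHA-256 message expansion (3) and step
# function (5); all arithmetic is word-wise modulo 2^32.
MASK = 0xFFFFFFFF

def rotr(x, n):  # rotate a 32-bit word right by n positions
    return ((x >> n) | (x << (32 - n))) & MASK

def sigma0(x): return rotr(x, 7) ^ rotr(x, 18) ^ (x >> 3)
def sigma1(x): return rotr(x, 17) ^ rotr(x, 19) ^ (x >> 10)
def Sigma0(x): return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22)
def Sigma1(x): return rotr(x, 6) ^ rotr(x, 11) ^ rotr(x, 25)
def Ch(x, y, z):  return ((x & y) ^ (~x & z)) & MASK
def Maj(x, y, z): return (x & y) ^ (x & z) ^ (y & z)

def expand(m16):
    """Equation (3): expand 16 message words to 64."""
    w = list(m16)
    for j in range(16, 64):
        w.append((sigma1(w[j - 2]) + w[j - 7]
                  + sigma0(w[j - 15]) + w[j - 16]) & MASK)
    return w

def step(v, w_j, k_j):
    """Equation (5): one application of the step function R_j."""
    a, b, c, d, e, f, g, h = v
    t1 = (h + Sigma1(e) + Ch(e, f, g) + k_j + w_j) & MASK
    t2 = (Sigma0(a) + Maj(a, b, c)) & MASK
    return ((t1 + t2) & MASK, a, b, c, (d + t1) & MASK, e, f, g)
```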
## 3. Key Recovery Attack on HMAC
Our attack can be applied to HMAC based on the MD-family hash functions. As a concrete example, we present a key recovery attack on HMAC-SHA-2; the other cases can be handled similarly. For the detailed attack results, see Table 1.
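As a reference point for the construction in equation (1), the following minimal Python sketch instantiates HMAC with SHA-256 from hashlib and cross-checks it against the standard library. NMAC is omitted because its keyed IV cannot be expressed through hashlib's fixed-IV interface.

```python
# A minimal sketch of the HMAC construction in equation (1) built from
# hashlib's SHA-256; for illustration only (Python's built-in hmac
# module implements the same construction).
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    block = 64                        # SHA-256 block size in bytes
    if len(key) > block:              # long keys are hashed first
        key = hashlib.sha256(key).digest()
    k = key.ljust(block, b"\x00")     # K padded to a single block
    ipad = bytes(b ^ 0x36 for b in k)
    opad = bytes(b ^ 0x5C for b in k)
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest()

# cross-check against the standard library implementation
assert hmac_sha256(b"key", b"msg") == \
       hmac.new(b"key", b"msg", hashlib.sha256).digest()
```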
### 3.1. Fault Assumption
Recall that the authors of [13] reduced the number of rounds in AES to one. We apply this fault assumption to HMAC; that is, by using several fault injections, we can reduce the number of steps in the last two compression functions (see Figure 1). Similarly to AES, the MD-family hash functions compute the hash value by iteratively applying a step function. Moreover, there are results based on similar fault models [17, 18]. Thus, our fault assumption is reasonable.

Figure 1
Our fault model on HMAC.

For simplicity, we denote these two compression functions by f0* and f1*. We reduce the number of steps in (f0*, f1*) to some values by using fault injections and then recover the secret key K of HMAC-SHA-2. When the number of steps in fi* is reduced to j, we denote this event by fi,j*.
### 3.2. Key Recovery Attack on HMAC-SHA-256/512
As mentioned in the previous section, the structure of SHA-512 is similar to that of SHA-256 except for parameters such as the word size. Thus, we only present a key recovery attack on HMAC-SHA-256 in this subsection.

Since the word size of SHA-256 is 32 bits, we assume that the length of K (=K0∥K1∥⋯∥Kn−1) is 32·n bits. Our attack on HMAC-SHA-256 performs the procedure recovering the 96-bit (K3i, K3i+1, K3i+2) iteratively [n/3] times (i=0,…,[(n−1)/3]). We can recover (K0, K1, K2) as follows. From an event (f0,3*, f1,1*), we compute HMAC (=HMAC0∥HMAC1∥⋯∥HMAC7). Then, we can construct the following six equations (see Figure 2). Here, (A,…,H)=(IHV0,0,…,IHV0,7):

$$\begin{aligned}
(Y+Z)+(A+B)&=\mathrm{HMAC}_1, & (\beta+\gamma)+(E+F)&=\mathrm{HMAC}_5,\\
(X+Y)+(B+C)&=\mathrm{HMAC}_2, & (\alpha+\beta)+(F+G)&=\mathrm{HMAC}_6,\\
X+(A+B+D)&=\mathrm{HMAC}_3, & \alpha+(E+G+H)&=\mathrm{HMAC}_7.
\end{aligned} \tag{6}$$

Since (A,…,H) are known values in (6), we can obtain (X, Y, Z, α, β, γ). With these values, we can compute (K0, K1, K2) by using the following equations:

$$\begin{aligned}
K_0\oplus \mathrm{0x5c} &= X-\big(H+\Sigma_1(E)+\mathrm{Ch}(E,F,G)+C_0+\Sigma_0(A)+\mathrm{Maj}(A,B,C)\big),\\
K_1\oplus \mathrm{0x5c} &= Y-\big(G+\Sigma_1(\alpha)+\mathrm{Ch}(\alpha,E,F)+C_1+\Sigma_0(X)+\mathrm{Maj}(X,A,B)\big),\\
K_2\oplus \mathrm{0x5c} &= Z-\big(F+\Sigma_1(\beta)+\mathrm{Ch}(\beta,\alpha,E)+C_2+\Sigma_0(Y)+\mathrm{Maj}(Y,X,A)\big).
\end{aligned} \tag{7}$$

Figure 2
An event (f0,3*, f1,1*).

By repeating the previous procedure, we can recover K by using [n/3] fault injections. Since this consists only of solving simple equations, the computational complexity is negligible.
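Assuming the faulty MAC words and IHV0 are known, recovering (K0, K1, K2) from (6) and (7) is pure modular arithmetic. The following sketch spells it out; C0–C2 stand for the SHA-256 step constants of the first three steps, and the helper functions repeat those of the Section 2 sketch.

```python
# A sketch of solving (6)-(7) for (K0, K1, K2); all arithmetic mod 2^32.
# 'mac' holds HMAC_0..HMAC_7 from the faulty event (f0,3*, f1,1*), and
# 'ihv' holds (A..H) = IHV_0; c0..c2 are the SHA-256 step constants.
MASK = 0xFFFFFFFF

def rotr(x, n): return ((x >> n) | (x << (32 - n))) & MASK
def Sigma0(x):  return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22)
def Sigma1(x):  return rotr(x, 6) ^ rotr(x, 11) ^ rotr(x, 25)
def Ch(x, y, z):  return ((x & y) ^ (~x & z)) & MASK
def Maj(x, y, z): return (x & y) ^ (x & z) ^ (y & z)
def sub(x, y):    return (x - y) & MASK   # modular subtraction

def recover_k0_k1_k2(mac, ihv, c0, c1, c2):
    A, B, C, D, E, F, G, H = ihv
    # Equation (6): strip the feed-forward of IHV_0 word by word.
    X = sub(mac[3], A + B + D)
    Y = sub(mac[2], X + B + C)
    Z = sub(mac[1], Y + A + B)
    alpha = sub(mac[7], E + G + H)
    beta  = sub(mac[6], alpha + F + G)
    # gamma = sub(mac[5], beta + E + F)   # obtainable, unused in (7)
    # Equation (7): invert the first three steps of f0*, then xor the
    # 0x5c opad word to undo K ^ opad.
    k0 = sub(X, H + Sigma1(E) + Ch(E, F, G) + c0
                + Sigma0(A) + Maj(A, B, C)) ^ 0x5C5C5C5C
    k1 = sub(Y, G + Sigma1(alpha) + Ch(alpha, E, F) + c1
                + Sigma0(X) + Maj(X, A, B)) ^ 0x5C5C5C5C
    k2 = sub(Z, F + Sigma1(beta) + Ch(beta, alpha, E) + c2
                + Sigma0(Y) + Maj(Y, X, A)) ^ 0x5C5C5C5C
    return k0, k1, k2
```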
### 3.3. Key Recovery Attack on HMAC-SHA-224/384
Recall that SHA-224/384 outputs the leftmost 224/384 bits of the resulting 256/512-bit hash value, respectively. For example, HMAC-SHA-224 outputs only HMAC0∥⋯∥HMAC6. Thus, we cannot compute (α, β) in (6) (see Figure 2). However, we can obtain (X, Y, Z) in (6) and compute K0 in (7); we can then obtain α by using K0. By repeating this procedure, we can recover (K1, K2) sequentially. Hence, we can recover K of HMAC-SHA-224/384 with [n/3] fault injections and negligible computational complexity.
## 4. Key Recovery Attack on NMAC
A key recovery attack on NMAC is similar to that on HMAC and is also applicable to NMAC based on the MD-family hash functions. As a concrete example, we present a key recovery attack on NMAC-SHA-2; the other MD-family hash functions can be attacked in a similar fashion. Table 1 gives the detailed results.
### 4.1. Fault Assumption
Unlike HMAC, NMAC uses two n-word secret keys (K1, K2) (see Figure 3). Thus, our attack on NMAC consists of the following two steps. First, we recover K2 by using the key recovery attack on HMAC-SHA-2 (Fault1 in Figure 3). Second, to compute K1, we inject faults into (f2*, f3*, f1*) (Fault2 in Figure 3). Note that we assume that the message M is only a single block.

Figure 3
Our fault model on NMAC.
### 4.2. Key Recovery Attack on NMAC-SHA-256/512
Since the structure of SHA-512 is similar to that of SHA-256, we only present a key recovery attack on NMAC-SHA-256 in this subsection. We assume that the lengths of K1 (=K1,0∥⋯∥K1,n−1) and K2 (=K2,0∥⋯∥K2,n−1) are 32·n bits each. By using the key recovery attack on HMAC-SHA-256, we first recover K2 with [n/3] fault injections and negligible computational complexity. We then compute K1 from an event (f2,i*, f3,j*, f1,k*).

Table 3 shows the results of an event (f2,3*, f3,1*, f1,4*). From Table 3, we can compute (K1,0, K1,1, K1,2) as follows. Since we know HMAC and can compute IHV1 (=A∥B∥⋯∥H) by using K2, we can compute (α1, α2, α3, α4, β1, β2, β3, β4) by using the following equations:

$$\begin{aligned}
\alpha_1&=\mathrm{HMAC}_3-D, & \beta_1&=\mathrm{HMAC}_7-H,\\
\alpha_2&=\mathrm{HMAC}_2-C, & \beta_2&=\mathrm{HMAC}_6-G,\\
\alpha_3&=\mathrm{HMAC}_1-B, & \beta_3&=\mathrm{HMAC}_5-F,\\
\alpha_4&=\mathrm{HMAC}_0-A, & \beta_4&=\mathrm{HMAC}_4-E.
\end{aligned} \tag{8}$$

By using (α1, α2, α3, α4, β1, β2, β3, β4), we can easily compute the messages of f1* (W+Z+a, Y+Z+a+b, X+Y+b+c, X+a+c+d). Since we know IHV0 (=a∥b∥⋯∥h), we can obtain (X, Y, Z). Thus, we can recover (K1,0, K1,1, K1,2) similarly to the key recovery attack on HMAC-SHA-256.

Table 3
By using (α1,α2,α3,α4,β1,β2,β3,β4), we can compute the messages of f1*(W+Z+a,Y+Z+a+b,X+Y+b+c,X+a+c+d) easily. Since we know IHV0(=a∥b∥⋯∥h), we can obtain (X,Y,Z). Thus, we can recover (K1,0,K1,1,K1,2) similarly to a key recovery attack on HMAC-SHA-256.Table 3
Recovery of(K1,0,K1,1,K1,2).
Chaining value
Message
f
2,3
*
(
a
,
b
,
c
,
d
,
e
,
f
,
g
,
h
)
K
1,0
(
X
,
a
,
b
,
c
,
?
,
e
,
f
,
g
)
K
1,1
(
Y
,
X
,
a
,
b
,
?
,
?
,
e
,
f
)
K
1,2
(
Z
,
Y
,
X
,
a
,
?
,
?
,
?
,
e
)
—
(
Z
+
a
,
Y
+
b
,
X
+
c
,
a
+
d
,
?
,
?
,
?
,
e
+
h
)
feed-forward
f
3,1
*
(
Z
+
a
,
Y
+
b
,
X
+
c
,
a
+
d
,
?
,
?
,
?
,
e
+
h
)
M
(
W
,
Z
+
a
,
Y
+
b
,
X
+
c
,
?
,
?
,
?
,
?
)
—
(
W
+
Z
+
a
,
Y
+
Z
+
a
+
b
,
X
+
Y
+
b
+
c
,
X
+
a
+
c
+
d
,
?
,
?
,
?
,
?
)
feed-forward
f
1,4
*
(
A
,
B
,
C
,
D
,
E
,
F
,
G
,
H
)
W
+
Z
+
a
(
α
1
,
A
,
B
,
C
,
β
1
,
E
,
F
,
G
)
Y
+
Z
+
a
+
b
(
α
2
,
α
1
,
A
,
B
,
β
2
,
β
1
,
E
,
F
)
X
+
Y
+
b
+
c
(
α
3
,
α
2
,
α
1
,
A
,
β
3
,
β
2
,
β
1
,
E
)
X
+
a
+
c
+
d
(
α
4
,
α
3
,
α
2
,
α
1
,
β
4
,
β
3
,
β
2
,
β
1
)
—
(
α
4
+
A
,
α
3
+
B
,
α
2
+
C
,
α
1
+
D
,
β
4
+
E
,
β
3
+
F
,
β
2
+
G
,
β
1
+
H
)
feed-forwardBy repeating the previous procedure, we can recover(K1,K2) by using 2·[n/3] fault injections. Its computational complexity is also negligible.
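Equation (8) is again simple modular subtraction once IHV1 is known; a minimal sketch with assumed inputs follows.

```python
# A minimal sketch of equation (8); all word arithmetic is mod 2^32.
# 'mac' holds HMAC_0..HMAC_7 and 'ihv1' holds (A..H) = IHV_1, which is
# computable once K2 has been recovered.
MASK = 0xFFFFFFFF
def sub(x, y): return (x - y) & MASK

def alphas_betas(mac, ihv1):
    A, B, C, D, E, F, G, H = ihv1
    alphas = [sub(mac[3], D), sub(mac[2], C), sub(mac[1], B), sub(mac[0], A)]
    betas  = [sub(mac[7], H), sub(mac[6], G), sub(mac[5], F), sub(mac[4], E)]
    return alphas, betas  # (alpha1..alpha4), (beta1..beta4)
```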
### 4.3. Key Recovery Attack on NMAC-SHA-224/384
Since SHA-224/384 outputs the leftmost 224/384 bits of the resulting 256/512-bit hash value, respectively, we do not know (β1, β2) (see Table 3). In this case, we cannot compute K1, so we consider different events for these algorithms. To recover (K1,0, K1,1) of NMAC-SHA-224, we use an event (f2,2*, f3,1*, f1,3*); this attack needs [n/3]+[n/2] fault injections and negligible computational complexity. In the case of NMAC-SHA-384, we consider an event (f2,1*, f3,1*, f1,2*) to recover K1,0; this attack requires [n/3]+n fault injections with negligible computational complexity.
## 5. Conclusion
In this paper, we proposed key recovery attacks on HMAC/NMAC by using a fault injection attack. Our attack can be applied to HMAC/NMAC based on the MD-family hash functions and requires a small number of fault injections with a negligible computational complexity. As concrete examples, we applied our attack to HMAC/NMAC based on MD4, MD5, and SHA-2. The results on HMAC/NMAC-SHA-2 are the first known key recovery attacks on them.
---
*Source: 101907-2013-09-25.xml*
# Hierarchical Sarsa Learning Based Route Guidance Algorithm
**Authors:** Feng Wen; Xingqiao Wang; Xiaowei Xu
**Journal:** Journal of Advanced Transportation
(2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1019078
---
## Abstract
In modern society, route guidance problems can be found everywhere. Reinforcement learning models can normally be used to solve such problems; in particular, Sarsa learning is suitable for tackling the dynamic route guidance problem. However, the large state space of a digital road network, which is common given the scale of modern road networks, is a challenge for Sarsa learning. In this study, the hierarchical Sarsa learning based route guidance algorithm (HSLRG) is proposed to guide vehicles in large scale road networks; by decomposing the route guidance task, it reduces the state space of the route guidance system. In this method, the Multilevel Network method is introduced, and a Differential Evolution based clustering method is adopted to optimize the multilevel road network structure. The proposed algorithm was simulated with several road networks of different scales; the experimental results show that, in large scale road networks, the proposed method can greatly enhance the efficiency of the dynamic route guidance system.
---
## Body
## 1. Introduction
In recent decades, more and more people have come to own private vehicles, and traffic pressure in cities has increased rapidly. Citizens' quality of life is undermined by daily delay, one consequence of traffic congestion; congestion also aggravates pollution and increases travelling cost. The dynamic route guidance method, which can not only provide travel routes but also relieve traffic congestion, has attracted many scholars' attention [1–3].

Dynamic route guidance system (DRGS) is an important part of Intelligent Transportation System (ITS), in which centrally determined route guidance system (CDRGS) [4] is economically effective and efficient for drivers and can avoid Braess's paradox [5]. CDRGS guides all the vehicles for all possible origin destination (OD) pairs with real-time information and considers guidance in terms of the whole traffic system. However, traditional route guidance methods, like Dijkstra's Algorithm [6] and the A∗ Algorithm [7], are not suitable in a dynamic traffic environment [8], because these shortest path algorithms may cause traffic concentration and overreaction when they are used to guide many vehicles. Multiple paths routing algorithms [9] can relieve congestion by distributing traffic over different paths and do not depend too much on real-time data, but when new solutions must be computed, the response time may lengthen. Reinforcement learning has been widely used in dynamic environments [10–13], because it can reduce computational time and make full use of real-time information; with these characteristics, it has been applied to dynamic route guidance systems. Shanqing et al. [14] applied Sarsa learning to guide vehicles in dynamic environments with the aim of minimizing route computation time. In our earlier study [15], Sarsa learning was adopted to guide vehicles in CDRGS, with the Boltzmann distribution as the action selection method; the results show that, compared with traditional methods, the proposed Sarsa learning based route guidance algorithm (SLRGA) and Sarsa learning with Boltzmann distribution algorithm (SLWBD) strongly reduce travelling time and relieve traffic congestion.

However, the scale of real-world road networks is usually large, so the state set of a reinforcement learning based route guidance system for such networks is huge, and it is difficult for the system to converge in a large scale traffic environment. How to solve the route guidance problem in large scale road networks with a reinforcement learning method is therefore a challenge. Hierarchical reinforcement learning (HRL) can improve both the time and the searching space required for learning and executing the whole task by recursively decomposing large, complex tasks into sequentially executed smaller and simpler subtasks [13]. The decomposition strategy is a key point in the hierarchical context [16]; when HRL is used to solve route guidance in large scale road networks, an effective decomposition of the route guidance can avoid congestion and reduce vehicles' travelling time.

Heng Ding et al. [17] proposed a macroscopic fundamental diagram (MFD) based traffic guidance perimeter control coupled (TGPCC) method to improve the performance of macroscopic traffic networks.
They establish a programming function according to the network equilibrium rule of traffic flow amongst multiple MFD subregions, which reduces congestion by effectively assigning the traffic flow amongst different subregions. Partitioning the original network and assigning traffic flows in subnetworks can thus be considered the objectives of the decomposition strategy when HRL is adopted for solving route guidance problems.

The multilevel approach has been successfully employed in a variety of problems [18], and the Multilevel Network method [19] is introduced here to segment the original network into several subnetworks and generate the higher level network. S. Jung et al. [20] indicated that the optimal route between two nodes on the higher level network is equivalent to that on the original road network. Thus, the Multilevel Network method can be utilized to perform the route guidance task in large scale road networks, where route guidance on the higher level network can be seen as the decomposition of the route guidance task; as a result, this method does not affect the preciseness of route guidance.

Therefore, Multilevel Network structure based HRL is adopted in this study, and considering the on-line learning characteristic of the Sarsa learning method and its effective performance in solving route guidance problems [15], the hierarchical Sarsa learning based route guidance algorithm (HSLRG) is proposed to guide vehicles with proper routes in large scale road networks. The route guidance task is divided into several smaller route guidance tasks, which are then performed on the corresponding subnetworks. To generate the Multilevel Network structure, traditional clustering methods like K-means [21] and K-modes [22] have been considered. However, compared with conventional clustering methods, evolution based clustering can avoid being trapped in local optima [19], and evolutionary algorithms can deal with multiobjective problems effectively [23–26]. In this study, a Differential Evolution [27, 28] based clustering method, which can be adopted in complex environments [29], is introduced, and multiobjective functions are designed to optimize the Multilevel Network structure.

The contributions of this work are as follows. Firstly, we propose a novel Multilevel Network structure based dynamic route guidance method; by reducing the state action space with the Multilevel Network structure, it can greatly reduce congestion in the road network and notably improve the efficiency of the whole transportation system. Secondly, we provide a Differential Evolution based clustering method to construct the Multilevel Network with multiple objectives; these objectives optimize the structure from both the higher level network and the subnetwork aspects.

This paper includes seven sections. Section 2 introduces the Multilevel Network based route guidance model (MNRGM). Section 3 introduces the Differential Evolution based clustering method. Section 4 proposes HSLRG and describes its main procedure and details. Section 5 introduces the experimental conditions and discusses and analyzes the results. The last parts of this paper are the conclusion and acknowledgement sections.
## 2. Multilevel Network Based Route Guidance Model
In this section, MNRGM is introduced. HRL can reduce the searching space, and in this study it is used to decompose the vehicle guidance from the original network into subnetworks. Sarsa learning, which is suited to dynamic environment problems [30, 31], is adopted to guide vehicles in the Multilevel Network. The purposes of this model are as follows:

(i) reduce the average travelling time of vehicles in the large scale road network;
(ii) reduce the probability of congestion in the large scale road network;
(iii) reduce the searching space of reinforcement learning in the large scale road network.

We assume that the real-time travelling information in the Multilevel Network can be collected.
### 2.1. Multilevel Network Model
Multilevel Network is constructed by dividing the original network into several subnetworks; an example of a two-level network is shown in Figure 1. The boundary nodes of the subnetworks and the optimal routes between them become the nodes and links of the higher level network.

Figure 1
An example of Multilevel Network.

In this model, the topographical road map is seen as a directed network G(V,E), where V denotes the set of nodes of the road network and E denotes the set of links; sij corresponds to the link from node i to node j, and its cost is measured by travelling time. If G(V,E) is divided into m subnetworks G1(V1,E1), G2(V2,E2), …, Gm(Vm,Em), then

(1) V = V1 ∪ V2 ∪ ⋯ ∪ Vm, E = E1 ∪ E2 ∪ ⋯ ∪ Em.

In a subnetwork, the nodes can be divided into two categories: interior nodes and boundary nodes. A node is a boundary node if it belongs to more than one subnetwork; otherwise it is an interior node.

The Multilevel Network model is as follows.

Indices: i, j, r ∈ {1, 2, …, n} index the nodes; k ∈ {1, 2, …, Kmax} indexes the level in the Multilevel Network.

Parameters:
- n: the number of nodes;
- o: origin node; d: destination node;
- R(o,d): a route from o to d;
- sijk: link from node i to node j in level k;
- Kmax: the maximum level of the Multilevel Network;
- nk: the number of nodes in level k of the Multilevel Network;
- cijk: cost of link sij in level k of the Multilevel Network;
- F(r): set of nodes connected from node r;
- T(r): set of nodes connected to node r.

Decision variables:

(2) xijk = 1 if and only if link sij is included in R(o,d) in level k, and 0 otherwise.

The optimal path on the Multilevel Network is calculated as follows:

(3) min ∑k=1..Kmax ∑i=1..nk ∑j=1..nk cijk · xijk

(4) s.t. ∑j∈F(r) xrjk − ∑i∈T(r) xirk = 1 if r = o; 0 if r ∈ V∖{o,d}; −1 if r = d

(5) xijk ∈ {0,1}, ∀ i, j, k

where constraints (4) and (5) ensure that the flow conservation rule is observed for V∖{o,d}. Kmax is set to 2 in the simulations of this study.

We use Ghigh(V′,E′) to represent the higher level network, where V′ and E′ are the sets of nodes and links of the higher level network, respectively. The set of boundary nodes between any two subnetworks Gi(Vi,Ei) and Gj(Vj,Ej) is Vi ∩ Vj, where i ≠ j. We use B(Gi) to represent the set of boundary nodes of subnetwork Gi(Vi,Ei). Then,

(6) B(Gi) = ⋃j=1..m, j≠i (Vi ∩ Vj), i = 1, 2, …, m.

Let BT represent the set of all boundary nodes:

(7) BT = ⋃i=1..m B(Gi), where V′ = BT.

Links of the higher level network are calculated and generated based on BT. In Gi(Vi,Ei), we use l(u,v) to represent the optimal route between any node pair u and v in B(Gi); the cost function fc(u,v) of l(·) is:

(8) fc(u,v) = l(u,v) if there is a route from u to v on Gi(Vi,Ei) without any other boundary node on the route; ∅ otherwise.

For subnetwork Gi(Vi,Ei), let

(9) L(Gi) = {(u,v) | (u,v) ∈ B(Gi) × B(Gi)}.

Let LT represent the set of links of the higher level network:

(10) LT = ⋃i=1..m L(Gi), where E′ = LT.

In order to guide vehicles in this structure, once the OD pairs are determined, the higher level network is extended. The extension is denoted G′high(BT′,LT′), where BT′ = BT ∪ O ∪ D and LT′ = LT ∪ L(O) ∪ L(D); L(O) denotes the set of routes from origin nodes to boundary nodes in the corresponding subnetwork, and L(D) denotes the set of routes from boundary nodes to destination nodes in the corresponding subnetworks:

(11) L(O) = {(o,u) | (o,u) ∈ O × B(Gi)}

(12) L(D) = {(u,d) | (u,d) ∈ B(Gj) × D}

where O is the set of origin nodes, D is the set of destination nodes, and Gi and Gj are the corresponding subnetworks of O and D.
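As an illustration of (6) and (7), the short Python sketch below (toy data, hypothetical function name) extracts the boundary node sets B(Gi) and BT from a given partition; these sets become the node set V′ of the higher level network.

```python
from itertools import combinations

def boundary_nodes(subnetworks):
    """Equations (6)-(7): a node is a boundary node if it belongs to
    more than one subnetwork. `subnetworks` maps a subnetwork id to
    its node set V_i; returns (B, BT) with B[i] = B(G_i)."""
    B = {i: set() for i in subnetworks}
    for i, j in combinations(subnetworks, 2):
        shared = subnetworks[i] & subnetworks[j]
        B[i] |= shared
        B[j] |= shared
    BT = set().union(*B.values())  # V' of the higher level network
    return B, BT

# Toy partition of a 9-node network into three subnetworks.
subs = {1: {0, 1, 2, 3}, 2: {3, 4, 5, 6}, 3: {6, 7, 8}}
B, BT = boundary_nodes(subs)
print(B)   # {1: {3}, 2: {3, 6}, 3: {6}}
print(BT)  # {3, 6}
```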
### 2.2. Multilevel Network Based Hierarchical Reinforcement Learning
#### 2.2.1. Hierarchical Sarsa Learning
Hierarchical reinforcement learning (HRL) [32] decomposes a reinforcement learning task into a hierarchy of subtasks so that lower-level child tasks can be invoked by higher-level parent tasks to reduce computing time and searching space.

In this study, the route guidance tasks are decomposed according to the structure of the Multilevel Network. As shown in Figure 2, the guidance in the higher level network (the selected series of links in the higher level network) determines the subtasks in the subnetworks: it guides vehicles from a node in a subnetwork to a boundary node or a destination node of this subnetwork. For example, as shown in Figure 3, the vehicle guidance on the original network is decomposed into guidance on three subnetworks:

(i) the vehicle departs from origin node O and arrives at boundary node Bi in subnetwork G1;
(ii) the vehicle departs from boundary node Bi and arrives at boundary node Bj in subnetwork G2;
(iii) the vehicle departs from boundary node Bj and arrives at destination node D in subnetwork G3.

Figure 2
An example of vehicle guidance in the higher level network.

Figure 3
An example of decomposition of route guidance.

In the hierarchical Sarsa learning model, the agent is the CDRGS of each road network (both subnetworks and the higher level network), and its purpose is to guide all vehicles in the network while pursuing optimal travelling time. For each agent, the natural state is continuous: the positions and destinations of all vehicles in the corresponding subnetwork (or higher level network). The continuous state space of any graph Gi is

(13) Statec(Gi) = {(p(vel1), d(vel1)), …, (p(velj), d(velj)), …}

where Gi is the i-th subnetwork, velj ∈ VEL(Gi) are the vehicles in Gi, p(velj) is the position of vehicle velj, and d(velj) is its destination.

In order to reduce the state space, discrete states, given by the node and destination of each vehicle, are adopted. In the original network the state space is Stated(G); with the Multilevel Network structure, the state space is reduced: each subnetwork has state space Stated(Gi), and the higher level network has Stated(Ghigh), where

(14) Stated(Gi) = {(v(vel1), d(vel1)), …, (v(velj), d(velj)), …}

where velj ∈ VEL(Gi) are the vehicles in subnetwork Gi, v(velj) ∈ Vi is the nearest node in front of vehicle velj, Vi is the set of nodes in subnetwork Gi, and d(velj) is the destination node of vehicle velj.

The action of each agent is an array composed of the selected next guided link of each vehicle:

(15) Action(Gi) = {e(vel1), …, e(velj), …}

where e(·) ∈ Ei is the guided next link of a vehicle and Ei is the set of links in subnetwork Gi.

According to Action(Gi), as shown in Figure 4, vehicles in each network (both higher level network and subnetworks) receive their guidance information. The passing time, i.e., the time spent by each vehicle on the corresponding link, composes the penalty:

(16) P(Gi) = {t(vel1), t(vel2), …, t(velj), …}

where t(velj) is the passing time of vehicle velj on link e(velj).

Figure 4
Demonstration of vehicle guidance in the network.

A Q-value matrix is used to guide vehicles in each subnetwork and in the higher level network, in which each Q-value represents the estimated optimal travelling time from the corresponding link to the destination. The proposed vehicle guidance method on both levels is based on Sarsa learning, with Q-values updated as

(17) Qd(i,j) ← Qd(i,j) + α · (tij + γ · Qd(j,k) − Qd(i,j))

where Qd(i,j) is the estimated optimal travelling time to destination d for a vehicle at node i that selects moving to node j; tij is the latest passing time of link sij; k is the node in F(j) (the set of nodes connected from node j) through which vehicles travel towards destination d after passing link sij; α is the learning rate; and γ is the discount rate.

The Boltzmann distribution [33] is adopted as the probability distribution for action selection, which balances exploration and exploitation according to the Q-values:

(18) pd(i,j) ← exp(−(1/τ) · Qd(i,j)/EQd(i)) / ∑j∈A(i) exp(−(1/τ) · Qd(i,j)/EQd(i))

where EQd(i) is the average Q-value from node i to destination d and τ is the temperature:

(19) τ = τmax / (1 + exp(−α(NV − β)))

where τmax, α, and β are constants and NV is the total number of vehicles in the road network.
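The following Python sketch illustrates the learning core of the model: the Sarsa update (17), Boltzmann action selection (18), and the temperature schedule (19), read here as a logistic function of NV. The names, default constants, and the dictionary representation of the Q-matrix are illustrative assumptions.

```python
import math
import random

def sarsa_update(Q, d, i, j, k, t_ij, alpha=0.1, gamma=0.9):
    """Equation (17): Q_d(i,j) <- Q_d(i,j) + alpha*(t_ij + gamma*Q_d(j,k) - Q_d(i,j)).
    Q is a dict keyed by (destination, from_node, to_node); missing entries read 0."""
    q_ij = Q.get((d, i, j), 0.0)
    Q[(d, i, j)] = q_ij + alpha * (t_ij + gamma * Q.get((d, j, k), 0.0) - q_ij)

def boltzmann_choice(Q, d, i, neighbours, tau):
    """Equation (18): choose the next node j with probability proportional
    to exp(-(1/tau) * Q_d(i,j) / EQ_d(i)); larger Q (slower) means less likely."""
    q = [Q.get((d, i, j), 0.0) for j in neighbours]
    eq = (sum(q) / len(q)) or 1.0  # EQ_d(i), guarded against zero
    weights = [math.exp(-(qj / eq) / tau) for qj in q]
    return random.choices(neighbours, weights=weights)[0]

def temperature(nv, tau_max=1.0, a=0.01, b=100.0):
    """Equation (19), read as a logistic schedule in the vehicle count NV."""
    return tau_max / (1.0 + math.exp(-a * (nv - b)))
```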
#### 2.2.2. Optimizing Multilevel Network Structure
In this study, in order to accelerate the convergence of reinforcement learning in the Multilevel Network, the structure of the Multilevel Network must be considered. The state action spaces of both the subnetworks and the higher level network can be optimized with a clustering method. Two objective functions are considered:

(20) ∑i |B(Gi)| ∗ S(Gi)

(21) S(Ghigh)

where S(·) is the searching space of a road network, calculated as

(22) S(G) = ∏i=1..|V| E(vi)

where E(vi) is the number of links departing from node vi if that set is not empty; otherwise it is 1.
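A minimal Python rendering of (22), with a toy adjacency list; S(G) grows multiplicatively with out-degrees, which is why partitioning shrinks the searching space so sharply.

```python
def searching_space(out_links):
    """Equation (22): S(G) = product over nodes of E(v), where E(v) is the
    out-degree of v, counted as 1 when the node has no departing link."""
    s = 1
    for v, links in out_links.items():
        s *= max(len(links), 1)
    return s

# Toy directed network given as adjacency lists.
g = {0: [1, 2], 1: [2], 2: [0, 1, 3], 3: []}
print(searching_space(g))  # 2 * 1 * 3 * 1 = 6
```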
## 3. Differential Evolution Based Clustering Method
Ding et al. [17] divided heterogeneous networks into homogeneous subregions that have small variances in link densities, such that each subregion has a well-defined MFD shape. In the proposed method, multiple homogeneous subnetworks of similar scale are required, together with a virtual higher level network that can effectively assign traffic flows among them. In this section, a Differential Evolution based clustering method is used to generate this Multilevel Network structure offline.
### 3.1. DE Based Clustering Method
DE [27, 28] is a well-known direction based evolutionary method that can search for the optimal solution effectively in a large scale searching space. In order to construct a proper Multilevel Network structure, diverse individuals should be maintained in the population and an effective evolution direction is necessary; thus DE is selected as the clustering method.

In the proposed method, the decoding operator clusters the road network: after decoding, each gene in the chromosome becomes a subnetwork. In other words, subnetwork $G_i(V_i, E_i)$ is cluster $i$ of the clustering result of the corresponding chromosome.

In order to accelerate the convergence of reinforcement learning in the Multilevel Network, two factors are considered when the Multilevel Networks are constructed: the convergence efficiency of reinforcement learning on each subnetwork and the convergence efficiency of reinforcement learning on the higher level network. Therefore, there are two objective functions, minimizing the state action space of all subnetworks in (23) and minimizing the state action space of the higher level network in (24):

$$\sum_i B_i^T \ast S(G_i) \quad (23)$$

$$S(G_{high}) \quad (24)$$

In order to achieve these two objectives simultaneously, the following fitness function is used:

$$\text{Fitness} = \log\Big(\sum_i B_i^T \ast S(G_i)\Big) + \log\big(S(G_{high})\big) \quad (25)$$
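A sketch of the fitness evaluation of (23)-(25) follows, under the same log-space convention as the `logSearchSpace` fragment above; `boundaryCount`, `logS`, and `logSHigh` are assumed to be precomputed for each candidate Multilevel Network, and the log-sum-exp trick keeps the subnetwork term numerically stable.

```java
/** Numerically stable log(sum_i |B_i^T| * S(G_i)) via the log-sum-exp trick. */
static double logSubnetTerm(int[] boundaryCount, double[] logS) {
    double max = Double.NEGATIVE_INFINITY;
    double[] terms = new double[logS.length];
    for (int i = 0; i < logS.length; i++) {
        terms[i] = Math.log(boundaryCount[i]) + logS[i]; // log(|B_i^T| * S(G_i))
        max = Math.max(max, terms[i]);
    }
    double acc = 0.0;
    for (double t : terms) acc += Math.exp(t - max);
    return max + Math.log(acc);
}

/** Eq. (25): Fitness = log(sum_i |B_i^T| * S(G_i)) + log S(G_high). */
static double fitness(int[] boundaryCount, double[] logS, double logSHigh) {
    return logSubnetTerm(boundaryCount, logS) + logSHigh;
}
```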
### 3.2. Genetic Representation
When the Multilevel Network structure is constructed by the DE based clustering method, the number of clusters has a strong influence on the numbers of nodes and links of the higher level network [34], which in turn affects the two objective functions. So, an appropriate number of clusters should be found to optimize the structure of the Multilevel Network.

In this study, in order to obtain a proper number of clusters, two vectors, a coordinate value vector and an availability vector, are defined in the chromosome. Each element in the coordinate value vector corresponds to the element in the same position of the availability vector. The maximum length of these vectors is $M$; the coordinate value vector represents candidate cluster centroids, and each number in the availability vector represents the validity of the corresponding centroid: if the number is bigger than the threshold $valid$, the corresponding centroid is valid, and vice versa.

The decoding procedure is the clustering procedure, in which the Multilevel Network structure is generated from the valid genes; a sketch of one possible decoding follows.
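The sketch below illustrates this representation and its decoding. The nearest-centroid assignment rule and the concrete threshold value are our assumptions, since the paper does not spell them out.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative chromosome of the DE based clustering method: up to M
 * candidate centroids (coordinate value vector) paired with an availability
 * vector. A centroid is valid when its availability exceeds the threshold;
 * decoding assigns every node to its nearest valid centroid, so each valid
 * gene yields one subnetwork of the Multilevel Network.
 */
class Chromosome {
    double[][] centroids;      // coordinate value vector: centroids[g] = {x, y}
    double[] availability;     // availability vector, same length M
    static final double VALID_THRESHOLD = 0.5; // assumed value of "valid"

    /** Returns, for each node position, the index of its cluster (subnetwork). */
    int[] decode(double[][] nodePositions) {
        List<Integer> valid = new ArrayList<>();
        for (int g = 0; g < availability.length; g++) {
            if (availability[g] > VALID_THRESHOLD) valid.add(g);
        }
        int[] assignment = new int[nodePositions.length];
        for (int v = 0; v < nodePositions.length; v++) {
            double best = Double.MAX_VALUE;
            for (int c = 0; c < valid.size(); c++) {
                int g = valid.get(c);
                double dx = nodePositions[v][0] - centroids[g][0];
                double dy = nodePositions[v][1] - centroids[g][1];
                double dist = dx * dx + dy * dy; // squared Euclidean distance
                if (dist < best) { best = dist; assignment[v] = c; }
            }
        }
        return assignment;
    }
}
```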
### 3.3. Differential Evolution
The DE operator for any individual $x_i$ is

$$l_i = r_1 + F(r_2 - r_3), \qquad r_1 \neq r_2 \neq r_3 \neq x_i \quad (26)$$

where $r_1$, $r_2$, and $r_3$ are three distinct individuals randomly selected from the population, $l_i$ is the mutant of $x_i$, $(r_2 - r_3)$ forms a difference vector, and $F$, a positive real number, controls the length of that vector. The overall procedure of the DE based clustering method is given in Algorithm 1.

Algorithm 1: Procedure of DE based clustering.
```
input: road network data, DE parameters
output: optimal solutions E(P)
begin
  current generation t ← 0
  initialize population P(t)
  generate a Multilevel Network according to each chromosome
  evaluate each Multilevel Network
  while not termination condition do
    for each individual x_i^t do
      if random(0,1) < PC then
        select distinct individuals r_1^t, r_2^t, r_3^t from P(t) randomly
        l_i^t = r_1^t + F(r_2^t - r_3^t)
        generate a Multilevel Network according to the chromosome
        evaluate the Multilevel Network
      end if
    end for
    t ← t + 1
    for each individual x_i^t do
      if Fitness(l_i^{t-1}) < Fitness(x_i^{t-1}) then x_i^t = l_i^{t-1}
      else x_i^t = x_i^{t-1}
    end for
  end while
end
```
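For concreteness, a compact Java sketch of the mutation (26) and the greedy replacement step of Algorithm 1 follows; the flat gene array (coordinates and availability values concatenated), the mutation probability `pc`, and the fitness callback are illustrative assumptions.

```java
import java.util.Random;

/**
 * Sketch of the DE step of Eq. (26) and the greedy replacement of
 * Algorithm 1. F controls the step length, pc is the mutation probability,
 * and the fitness callback evaluates Eq. (25) (lower is better).
 */
class DifferentialEvolution {
    private final Random rng = new Random();

    /** Eq. (26): l_i = r1 + F * (r2 - r3), with r1, r2, r3 distinct individuals. */
    double[] mutate(double[] r1, double[] r2, double[] r3, double f) {
        double[] mutant = new double[r1.length];
        for (int g = 0; g < r1.length; g++) {
            mutant[g] = r1[g] + f * (r2[g] - r3[g]);
        }
        return mutant;
    }

    /** One generation over the population, replacing individuals that improve. */
    void step(double[][] population, double f, double pc,
              java.util.function.ToDoubleFunction<double[]> fitness) {
        int n = population.length;
        for (int i = 0; i < n; i++) {
            if (rng.nextDouble() >= pc) continue;
            // pick three distinct individuals different from i
            int a, b, c;
            do { a = rng.nextInt(n); } while (a == i);
            do { b = rng.nextInt(n); } while (b == i || b == a);
            do { c = rng.nextInt(n); } while (c == i || c == a || c == b);
            double[] mutant = mutate(population[a], population[b], population[c], f);
            if (fitness.applyAsDouble(mutant) < fitness.applyAsDouble(population[i])) {
                population[i] = mutant; // keep the better Multilevel Network
            }
        }
    }
}
```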
## 4. Hierarchical Sarsa Learning Based Route Guidance Algorithm
### 4.1. Overall Procedure
After generating the optimized Multilevel Network structure, the proposed hierarchical Sarsa learning based route guidance algorithm (HSLRG) can be divided into 3 stages:

(i) Initializing stage: initialize the Q-values of all the boundary nodes and destination nodes in the Multilevel Network.

(ii) Route guidance stage: guide vehicles in the higher level network and the subnetworks.

(iii) Updating stage: update the Q-values of all the boundary nodes and destination nodes in the Multilevel Network.

Before each updating stage, the CDRGS collects travelling information from the environment; during that period, the CDRGS guides vehicles with the Q-values updated in the last updating stage. The overall procedure of the proposed HSLRG is shown as Algorithm 2.

Algorithm 2: Overall procedure of the Hierarchical Sarsa Learning based route guidance algorithm.
```
begin
  // Initializing stage
  Initialization routine
  while not termination condition do
    while at updating interval do
      // Route guidance stage
      Route Guidance routine
    end while
    // Updating stage
    Updating routine
  end while
end
```
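A minimal Java driver corresponding to Algorithm 2 might look as follows; it is a sketch under our own naming, with the stage routines passed in as callbacks and the interval constant mirroring the subnetwork updating interval reported in Section 5.

```java
/**
 * Sketch of the overall HSLRG loop of Algorithm 2: initialization, then
 * alternating route guidance and Q-value updating every UPDATE_INTERVAL
 * time steps until the simulation ends.
 */
class HslrgDriver {
    static final int MAX_TIME = 15000;       // simulation length (time steps)
    static final int UPDATE_INTERVAL = 60;   // updating interval of the subnetworks

    void run(Runnable initialization, Runnable routeGuidance, Runnable updating) {
        initialization.run();                            // Initializing stage
        for (int t = 1; t <= MAX_TIME; t++) {
            routeGuidance.run();                         // Route guidance stage
            if (t % UPDATE_INTERVAL == 0) {
                updating.run();                          // Updating stage
            }
        }
    }
}
```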
### 4.2. Initializing Q-Values
Q-value based Dynamic Programming is adopted to initialize the Q-values of Sarsa in the Multilevel Network; the Q-values are iteratively calculated by

$$Q_d^n(i,j) = t_{ij}^c + \min_{k \in F(j)} Q_d^{n-1}(j,k), \qquad i \in I - \{d\} - B(d),\; j \in F(i) \quad (27)$$

where $i, j \in I$, the set of nodes; $d \in D$, the set of destinations; $s_{ij}$ is the link departing from node $i$ to node $j$; $t_{ij}^c$ is the historical traveling time of link $s_{ij}$; and $F(i)$ is the set of nodes reachable by a link departing from node $i$. The procedure of initialization can be seen as Algorithm 3.

Algorithm 3: Procedure of Initialization.
```
begin
  // Initialize the Q-values of B_T' in each subnetwork
  for each d ∈ B_T' do
    initialize Q_d according to Eq. (27) in the corresponding subnetwork
  end for
  // Initialize the Q-values of D in the higher level network
  for each d ∈ D do
    initialize Q_d according to Eq. (27) in G_high'
  end for
end
```
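An illustrative Java version of this initialization for a single destination `d` follows; it is a plain value iteration over links implementing (27), with adjacency given as successor lists. `histTime`, `succ`, and the sweep count are assumed inputs, not the authors' data structures.

```java
import java.util.List;

/**
 * Sketch of Eq. (27) / Algorithm 3 for one destination d. q[i][j] estimates
 * the travel time to d when moving from node i to node j; histTime[i][j] is
 * the historical time of link s_ij, and succ.get(j) is F(j). Sweeping until
 * the values settle (|V|-1 sweeps suffice on a finite graph) yields the
 * shortest historical travel times.
 */
static double[][] initQ(double[][] histTime, List<List<Integer>> succ, int d, int sweeps) {
    int n = histTime.length;
    double inf = Double.MAX_VALUE / 4;         // "unknown" marker that cannot overflow
    double[][] q = new double[n][n];
    for (double[] row : q) java.util.Arrays.fill(row, inf);
    for (int s = 0; s < sweeps; s++) {
        for (int i = 0; i < n; i++) {
            if (i == d) continue;               // no Q-values out of the destination
            for (int j : succ.get(i)) {
                double best;
                if (j == d) {
                    best = 0.0;                 // arriving at the destination
                } else {
                    best = inf;                 // min over k in F(j), Eq. (27)
                    for (int k : succ.get(j)) best = Math.min(best, q[j][k]);
                }
                q[i][j] = Math.min(q[i][j], histTime[i][j] + best);
            }
        }
    }
    return q;
}
```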
### 4.3. Route Guidance Procedure
In the HSLRG, the guidance is based on Sarsa learning in the Multilevel Network. The guidance in the higher level network determines the actual destinations of vehicles in each subnetwork. The route guidance procedure for each vehicle of the CDRGS can be divided into 3 steps:

Step 1. Guide the vehicle in the higher level network with Algorithm 4 and get the selected link (the subtask on the subnetwork).

Step 2. According to the result of Step 1, guide the vehicle in the subnetwork with Algorithm 4 until the vehicle reaches the boundary node or its destination.

Step 3. If the vehicle has not reached its destination, return to Step 1.

Algorithm 4: Procedure of Route Guidance.

```
input: vehicle v, destination d
output: next link s_jk
begin
  get the current link s_ij of vehicle v
  // Calculate the probabilities of the next links according to Eq. (18)
  p_d(j,k) ← exp(-(1/τ) Q_d(j,k)/EQ_d(j)) / Σ_{k ∈ A(j)} exp(-(1/τ) Q_d(j,k)/EQ_d(j))
  // Select the next link
  choose s_jk with probability p_d(j,k)
end
```
### 4.4. Updating Procedure
During the updating stage, the following steps are performed:

(i) Update the Q-values of $d \in B_T'$ in each subnetwork $G_i$.

(ii) Update the Q-values of $d \in D$ in the higher level network $G_{high}'$.

The procedure of updating is presented as Algorithm 5.

Algorithm 5: Procedure of Updating.
```
input: destination d, network G
output: Q_d^n
begin
  Q_d^{n-1} ← Q_d^n
  for each link s_ij ∈ L(G) do
    if t_ij^d ≠ null then
      Q_d^n(i,j) ← Q_d^{n-1}(i,j) + α(t_ij^d + γ Q_d^{n-1}(j,k) − Q_d^{n-1}(i,j))
    end if
  end for
end
```

The updates of the Q-values for each subnetwork and for the higher level network are independent of each other, so the updating stage of the proposed method is designed to run in parallel, and its time complexity is $O(|D_G| \cdot |L_G|)$, where $|D_G|$ and $|L_G|$ are the numbers of elements in the destination set and the link set of the road network $G$, respectively.
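Because the per-destination Q-value tables are independent, the updating stage parallelizes trivially; a sketch using Java parallel streams follows, with the network type and the per-destination update routine (which would apply Algorithm 5) left abstract as assumptions.

```java
import java.util.Set;
import java.util.function.BiConsumer;

/**
 * Sketch of the parallel updating stage. Since the Q-value tables of
 * different destinations (and of different subnetworks) do not interact,
 * Algorithm 5 can run once per destination concurrently; updateQd is any
 * routine applying Eq. (17) over all links of network g for destination d.
 */
class ParallelUpdater<G> {
    void updatingStage(Set<Integer> destinations, G g, BiConsumer<G, Integer> updateQd) {
        destinations.parallelStream().forEach(d -> updateQd.accept(g, d));
    }
}
```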
## 5. Simulation
In this study, the SUMO [35] simulator is used to implement the experiments with three different digital road networks, as shown in Table 1. All the algorithms were coded in Java, and a PC with an 8-core Xeon E5-2640 v3 2.60 GHz processor and 128 GB of RAM running Linux (CentOS 6.6) was used for all the experiments. Our experiments are conducted on real networks representing various roads of Japan (Experiment 1 and Experiment 2) and the US (Experiment 3). The Japan digital road maps are taken from the Japan Digital Road Map Association (JDRMA). The US digital network is provided by the Topologically Integrated Geographic Encoding and Referencing (TIGER)/Line collection, available at http://www.diag.uniroma1.it/challenge9/data/tiger/. In the simulation, a time step represents one second, and the length of the simulation is set as 15000 time steps.

Table 1
Data of experiments.
| Item | Experiment 1 | Experiment 2 | Experiment 3 |
| --- | --- | --- | --- |
| Number of nodes | 1500 | 1800 | 3500 |
| Number of links | 4620 | 5488 | 11310 |
| Number of OD | 33 | 33 | 100 |
| Number of OD pairs | 1089 | 1089 | 10000 |
| Vehicle departure rate of each origin node | 7 seconds per vehicle | 7 seconds per vehicle | 8 seconds per vehicle |
### 5.1. Multilevel Network
The DE based clustering method is used to generate the Multilevel Network of each experiment. The evolution process can be seen in Figure 5, where the x-axis is the generation and the y-axis is the average fitness of the individuals in the population. The results of the DE can be seen in Table 2.

Table 2
Results of DE based clustering method.
| Item | Experiment 1 | Experiment 2 | Experiment 3 |
| --- | --- | --- | --- |
| Number of clusters | 12 | 11 | 21 |
| Fitness | 228 | 229 | 607 |

Figure 5
The evolution process of the road network in Experiment 1, Experiment 2, and Experiment 3.

It can be seen that the DE based clustering method effectively reduces the fitness during the evolution process, and the Multilevel Network structure used in the proposed algorithm is thereby greatly optimized.
### 5.2. Comparison Methods
In the experiments, the Dijkstra algorithm (DA) and Sarsa learning based route guidance on the original road network are adopted for comparison with the proposed method.

(1) Dijkstra algorithm (DA): DA represents the static shortest route method; it recalculates the routes every 60 time steps based on real-time traffic information, which is assumed to be collected in this study.

(2) Sarsa learning based route guidance on the original road network: in order to evaluate the efficiency of the Multilevel Network based route guidance method, the Sarsa learning with Boltzmann distribution algorithm (SLWBD), which only considers route guidance on the original road network, is adopted as a comparison method in the simulations. The Boltzmann distribution is selected as the action selection method, and the Q-values are updated with (17) every 60 time steps.
### 5.3. Evaluation
Two kinds of criteria are adopted to evaluate the performance of the route guidance algorithm:

(1) the number of vehicles in the traffic system, $NumV$;

(2) the average traveling time of the vehicles arriving at their destinations within a period of time, calculated as

$$aveT(t) = \frac{\sum_{i=1}^{N(t)} T(v_i)}{N(t)} \quad (28)$$

where $t$ is the time step, $N(t)$ is the total number of vehicles arriving at their destinations in the period of time ending at $t$, $v_i$ is one of the vehicles that reached its destination in that period, and $T(v_i)$ is the traveling time of vehicle $v_i$.

These figures are estimated every 100 time steps, and the time period is set as 100 time steps. The two criteria reflect the traffic condition in the road network: a lower $NumV$ means that less congestion happened in the road network, and a lower $aveT$ reflects that vehicles were guided along better routes and the time they spent waiting in the road network was reduced. These two criteria are therefore also used to evaluate whether the HSLRG has converged. A direct transcription of (28) is sketched below.
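The following Java fragment computes (28) over one 100-time-step window; the list of travel times is an assumed input collected by the simulator for the vehicles that arrived during the window.

```java
import java.util.List;

/**
 * Sketch of the evaluation criterion of Eq. (28): the average traveling
 * time over the vehicles that reached their destinations in the last
 * evaluation window.
 */
static double aveT(List<Double> travelTimesInWindow) {
    int n = travelTimesInWindow.size();   // N(t): arrivals in the window
    if (n == 0) return 0.0;               // no arrivals: report 0 by convention
    double sum = 0.0;
    for (double t : travelTimesInWindow) sum += t; // sum of T(v_i)
    return sum / n;
}
```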
### 5.4. Experiment
In this part, simulations are conducted to evaluate the performance of the proposed HSLRG. In order to evaluate the performance of the proposed method, the drivers' acceptance of guidance is assumed to be 100%. The updating interval of the higher level network is set as 30 time steps, and the updating interval of the subnetworks is set as 60 time steps. The data shown in the following tables are averages over multiple independent simulations. In order to accelerate the convergence of reinforcement learning at the early stage of the simulation and keep the Q-values stable at the middle and final stages, the learning rate $\alpha$ of Sarsa learning is changed depending on the time step of the simulation. The concept of Simulated Annealing [36] is introduced:

$$\alpha = a \left(1 - \frac{t}{MAXTIME}\right)^b + minimum_\alpha \quad (29)$$

where $t$ is the current time of the simulation, $MAXTIME$ is the total simulation time, $a$ and $b$ are constants, and $minimum_\alpha$ is the lower limit of $\alpha$.
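The annealed learning rate of (29) is a one-liner; in this sketch, `a` and `b` are the paper's unnamed constants, whose concrete values are not reported here.

```java
/**
 * Sketch of the simulated-annealing-style learning rate of Eq. (29):
 * alpha decays from roughly a + minAlpha at t = 0 down to minAlpha
 * at t = maxTime.
 */
static double learningRate(double t, double maxTime, double a, double b, double minAlpha) {
    return a * Math.pow(1.0 - t / maxTime, b) + minAlpha;
}
```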
Table 3 presents the results of Experiment 1, Experiment 2, and Experiment 3. Figures 6(a)-6(f) show the NumV and aveT of these experiments, respectively, and Table 4 shows the mean and standard deviation (Std) of these experiments.

Table 3

Results of Experiments.
| Experiment | Algorithm | Number of arriving vehicles | Mean duration of each vehicle | Mean route length of vehicle |
| --- | --- | --- | --- | --- |
| Experiment 1 | HSLRG | 63,862.3 | 308.03 | 1,880.81 |
| | SLWBD | 63,542.67 | 616.88 | 6,876.22 |
| | DA | 49,604.25 | 757.67 | 3,931.18 |
| Experiment 2 | HSLRG | 70,572.33 | 194.81 | 1,639.80 |
| | SLWBD | 62,193.4 | 670.91 | 5,127.49 |
| | DA | 51,193.1 | 1,087.06 | 4,792.38 |
| Experiment 3 | HSLRG | 95,919.17 | 456.12 | 4,029.92 |
| | SLWBD | 50,690.11 | 2,738.79 | 23,735.61 |
| | DA | 78,696.5 | 1,384.96 | 9,160.27 |

Table 4
Mean and Std of Experiment results.
| Experiment | Algorithm | Mean NumV | Std NumV | Mean aveT | Std aveT |
| --- | --- | --- | --- | --- | --- |
| 1 | HSLRG | 1055 | 296.9 | 236.6 | 108.5 |
| | SLWBD | 2578 | 609.8 | 576 | 163.9 |
| | DA | 2969 | 859.6 | 789 | 282.2 |
| 2 | HSLRG | 975.4 | 174.1 | 190 | 73.43 |
| | SLWBD | 2606 | 523.5 | 560 | 125.3 |
| | DA | 4347 | 1502 | 1094 | 477.4 |
| 3 | HSLRG | 3644 | 711.7 | 498.5 | 137.1 |
| | SLWBD | 27130 | 13160 | 2606 | 1382 |
| | DA | 9943 | 4463 | 1370 | 628.5 |

Figure 6
The results of Experiments: (a) the NumV of Experiment 1; (b) the aveT of Experiment 1; (c) the NumV of Experiment 2; (d) the aveT of Experiment 2; (e) the NumV of Experiment 3; (f) the aveT of Experiment 3.

As shown in Figures 6(a)-6(f), HSLRG has lower evaluation values than SLWBD and DA during almost the entire simulations. These data indicate that HSLRG is fit for guiding vehicles in large scale road networks: it can alleviate congestion and reduce the traveling time and traveling distance of vehicles. In Figures 6(a)-6(d), the NumV and aveT of HSLRG and SLWBD begin to decrease after the early stage of the simulation (about 5000 time steps in Experiment 1 and about 2000 time steps in Experiment 2), while, as shown in Figures 6(e) and 6(f), the evaluation values of SLWBD increased dramatically throughout the total 15000 time steps. These data indicate that SLWBD performs reasonably in road networks of limited size, but its performance becomes poor in larger scale road networks. As Figures 6(a)-6(f) show, the measured values of DA increased continuously. This indicates that DA is not a proper method for route guidance in dynamic environments; the main reason is that DA only considers the static shortest routes, which may cause negative behavioral phenomena in a dynamic transportation system, including overreaction and concentration. As shown in Table 4, from the mean and Std of NumV and aveT, the performance of the proposed HSLRG dominates that of SLWBD and DA, which demonstrates the effectiveness of the proposed HSLRG.

As shown in Table 3, HSLRG has the best performance in all the experiments and outweighs the other two methods: the statistics indicate that vehicles guided by this algorithm achieve not only the largest number of arrivals at their destinations and the least mean traveling time, but also the least traveling distance. SLWBD performs better than DA in Experiment 1 and Experiment 2 but worse in Experiment 3. This result indicates that Sarsa learning based route guidance on the original road network is not suitable for guiding vehicles in large scale road networks, because the convergence speed of reinforcement learning depends on the scale of the searching space, which grows exponentially with the scale of the road network. The proposed HSLRG introduces an optimized Multilevel Network structure, by which route guidance on the subnetworks and route guidance on the higher level network are combined to compress the searching space of the traffic system. Therefore, the proposed HSLRG can greatly enhance the efficiency of the CDRGS.
## 6. Conclusion
In this paper, we have proposed the hierarchical Sarsa learning based route guidance algorithm (HSLRG) to solve the route guidance problem in large scale road networks. HSLRG applies the Multilevel Network method to reduce the state space of the traffic environment, which greatly accelerates the convergence of the route guidance algorithm. The effectiveness and efficiency of HSLRG were studied in three road networks of different scales. The simulation results show that, in large scale road networks, compared with SLWBD and DA, HSLRG can guide vehicles to their destinations more effectively. Guiding vehicles with multiple objectives and taking the individual preferences of drivers into account are worthwhile directions for future research.
---
*Source: 1019078-2019-06-27.xml*

**Title:** Hierarchical Sarsa Learning Based Route Guidance Algorithm
**Authors:** Feng Wen; Xingqiao Wang; Xiaowei Xu
**Journal:** Journal of Advanced Transportation (2019)
**Category:** Engineering & Technology
**Publisher:** Hindawi
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2019/1019078
## Abstract
In modern society, route guidance problems can be found everywhere. Reinforcement learning models can be normally used to solve such kind of problems; particularly, Sarsa Learning is suitable for tackling with dynamic route guidance problem. But how to solve the large state space of digital road network is a challenge for Sarsa Learning, which is very common due to the large scale of modern road network. In this study, the hierarchical Sarsa learning based route guidance algorithm (HSLRG) is proposed to guide vehicles in the large scale road network, in which, by decomposing the route guidance task, the state space of route guidance system can be reduced. In this method, Multilevel Network method is introduced, and Differential Evolution based clustering method is adopted to optimize the multilevel road network structure. The proposed algorithm was simulated with several different scale road networks; the experiment results show that, in the large scale road networks, the proposed method can greatly enhance the efficiency of the dynamic route guidance system.
---
## Body
## 1. Introduction
In the recent decades, more and more people own their private vehicles, and the traffic pressure in the city increased rapidly. Citizens’ life quality is always undermined by daily delay which is one of the consequences of traffic congestion. The congestion can also cause the aggravation of pollution and the increasing of travelling cost. The dynamic route guidance method, which can not only provide travel routes but also relieve the traffic congestion, attracted many scholars’ attention [1–3].Dynamic route guidance system (DRGS) is an important part of Intelligent Transportation System (ITS), in which centrally determined route guidance system (CDRGS) [4] is economically effective and efficient for drivers and can avoid Braess’s paradox [5]. CDRGS guides all the vehicles for all the possible origin destination (OD) pairs with the real-time information and considers guidance in terms of the whole traffic system. However, traditional route guidance methods, like Dijkstra Algorithm [6] and A∗ Algorithm[7], are not suitable in the dynamic traffic environment [8], because these shortest path algorithms may cause traffic concentration and overreaction phenomenon when they are adopted to guide plenty of vehicles. Multiple paths routing algorithm [9] could relief the traffic jam by distributing traffic into different paths and does not depend too much on the real-time data, but when it needs to compute new solutions, the response time may be lengthened. Reinforcement learning strategy has been widely used in the dynamic environment [10–13], because it can reduce the computational time and make full use of real-time information. With these characters, reinforcement learning strategy has been used in the dynamic route guidance system. Shanqing et al. [14] applied Sarsa learning to guide vehicles in the dynamic environments by considering minimizing the route computational time. In our earlier study [15], Sarsa learning is adopted to guide vehicles in CDRGS and the Boltzmann distribution is selected as the action selection method. The results show that, compared with traditional methods, the proposed Sarsa learning based route guidance algorithm (SLRGA) and Sarsa learning with Boltzmann distribution algorithm (SLWBD) can strongly reduce the travelling time and relieve traffic congestion.However, the scale of real-world road networks is usually large, and then the scale of state set of reinforcement learning based route guidance system responding to these road networks is huge. Thus it is really difficult for reinforcement learning based route guidance system to be convergent in the larger scale traffic environment. So, how to solve the route guidance problem in the large scale road network with reinforcement learning method is a challenge. Hierarchical reinforcement learning (HRL) can improve in both time and searching space for learning and execution of the whole task by recursively decomposing larger and complex tasks into sequentially executed smaller and simpler subtasks [13]. The decomposition strategy is a key point in the hierarchical context [16], and when HRL is used in solving the route guidance problem in the large scale road networks, avoiding congestion phenomenon and reducing vehicles’ traveling time can be achieved by an effective decomposition of the route guidance.Heng Ding et al. [17] proposed a macroscopic fundamental diagram (MFD) based traffic guidance perimeter control coupled (TGPCC) method to improve the performance of macroscopic traffic networks. 
They establish a programming function according to the network equilibrium rule of traffic flow amongst multiple MFD subregions, which reduce the congestion phenomenon by effectively assigning the traffic flow amongst different subregions. So, partitioning the original network and assigning traffic flows in subnetworks are effectively considered as the objective of the decomposition strategy when HRL is adopted for solving route guidance problems.Multilevel approach has been successfully employed in a variety of problems [18] and Multilevel Network method [19] is considered to be introduced to segment the original network into several subnetworks and generate higher level network. S. Jung et al. [20] indicated that the optimal route on higher level network between two nodes is equivalent to that on original road network. Thus, Multilevel Network method can be utilized to perform the route guidance task in the large scale road network, in which route guidance on the higher level network can be seen as the decomposition of the route guidance task, and as a result, this method would not affect the preciseness of route guidance.Therefore, Multilevel Network structure based HRL is adopted in this study, and considering the on-line learning characteristic of Sarsa learning method and its effective performance in solving route guidance problems[15], the hierarchical Sarsa learning based route guidance algorithm (HSLRG) is proposed to guide vehicles with proper routes in the large scale road network. The route guidance task can be divided into several smaller route guidance tasks, and then these smaller route guidance tasks perform on the corresponding subnetworks. To generate the Multilevel Network structure, traditional clustering methods like K-means [21] and K-modes [22] have been considered. However comparing with conventional clustering methods, evolution based clustering method can avoid tripping into local optimal problem [19]. In addition, evolutionary algorithm can always deal with multiobjective problems effectively [23–26]. In this study, Differential Evolution [27, 28] based clustering method, which can be adopted in complex environment [29], is introduced, and multiobjective functions are designed to optimize the Multilevel Network structure.The contribution of this work is shown as follows: Firstly, we proposed a novel Multilevel Network structure based dynamic route guidance method. By reducing the state action space with Multilevel Network structure, the route guidance method can greatly reduce the congestion phenomenon in the road network and improve the efficiency of the whole transportation system notably. Secondly, we provide a Differential Evolution based clustering method to construct the Multilevel Network with multiobjectives. These objectives consider optimizing the structure from both higher level network and subnetwork aspects and optimize the structure greatly.This paper includes seven sections. Section2 introduces the Multilevel Network based route guidance model (MNRGM). Section 3 introduces the Differential Evolution based clustering method. Section 4 proposes HSLRG and describes the main procedure and details of it. Section 5 introduces the experimental conditions and discusses and analyzes the results. The last parts of this paper are the conclusion and acknowledgement sections.
## 2. Multilevel Network Based Route Guidance Model
In this section, MNRGM is introduced. HRL can reduce the searching space, and in this study, it is used to decompose the vehicle guidance from the original network into subnetworks. Sarsa learning, which fits for solving dynamic environment problems [30, 31], is adopted to guide vehicles in the Multilevel Network. The purpose of this model can be seen as follows:(i)
Reduce the average travelling time of vehicles in the large scale road network.(ii)
Reduce the probability of congestion in the large scale road network.(iii)
Reduce the searching space of reinforcement learning in the large scale road.And we assumed that the real-time travelling information in the Multilevel Network can be collected.
### 2.1. Multilevel Network Model
Multilevel Network is constructed by dividing the original network into several subnetworks. The example of two-level network can be seen as Figure1. The boundary nodes of subnetworks and the optimal routes between them are nodes and links on higher level network.Figure 1
An example of Multilevel Network.In this model, the topographical road map is seen as the directed networkG(V,E), where V denotes the set of nodes of road network and E denotes the set of links of road network; i.e., sij corresponds to the link from node i to node j. The cost of it in this model is measured by the traveling time. IfG(V,E) can be divided intom subnetworks like G1(V1,E1),G2(V2,E2),…,Gm(Vm,Em) then(1)V=V1∪V2∪⋯∪Vm,E=E1∪E2∪⋯∪EmIn the subnetwork, the nodes can be divided into two categories: interior nodes and boundary nodes. A node is a boundary node if it belongs to more than one subnetwork, and vice versa.The Multilevel Network model is shown as follows.Indices. i , j , r ∈ { 1,2 , … , n }, index of node.Parameters n: the number of nodes. o: origin node. d: destination node. R ( o , d ): a route from o to d. s i j k: link from node i to node j in level k. k: index of the level in Multilevel Network, k∈{1,2,…,Kmax}. K m a x: the maximum level of Multilevel Network. n k: the number of nodes in level k of Multilevel Network. c i j k: cost of link sij in level k of Multilevel Network. F ( r ): set of nodes connected from node r. T ( r ): set of nodes connected to node r.Decision Variables (2) x i j k = 1 , i f a n d o n l y i f l i n k s i j i s i n c l u d e d i n R o , d i n l e v e l k 0 , o t h e r w i s eThe optimal path on Multilevel Network can be calculated as follows:(3)min∑k=1Kmax∑i=1nk∑j=1nkcijkxijk(4)s.t.∑j∈Frxrjk-∑i∈Trxirk=1r=o0r∈V∖o,d-1r=d(5)xijk∈0,1,∀i,j,kwhere constraints (4) and (5) can ensure the flow conservation rule to be observed for V∖{o,d}.TheKmax is set as 2 in the simulations of this study.We useGhigh(V′,E′) to represent the higher level network, where V′ and E′ are the set of nodes and links of higher level network, respectively.The set of boundary nodes between any subnetworksGi(Vi,Ei) and Gj(Vj,Ej) is Vi∩Vj, where i≠j. We use B(Gi) to represent the set of boundary nodes of subnetwork Gi(Vi,Ei). Then,(6)BGi=⋃j=1mVi∩Vj,wherei=1,2,…,m,j≠i.LetBT represent the set of the boundary nodes:(7)BT=⋃i=1mBGiwhereV′=BT.Links of the higher level network are calculated and generated based onBT. In Gi(Vi,Ei), we use l(u, v) to represent the optimal route between any node pair u and v in B(Gi); the cost function fc(u, v) of l(.) is shown as follows:(8)fcu,v=lu,vifthereisaroutefromutovonGiVi,Eiwithoutanyotherboundarynodeontheroute;∅otherwise.For subnetworkGi(Vi,Ei), let(9)LGi=u,v∣u,v∈BGi×BGiLet LT represent the set of links of the higher level network:(10)LT=⋃i=1mLGiwhereE′=LTIn order to guide vehicles in this structure, once the OD pairs are determined, the higher level network is extended, the extension of higher level network can be denoted asGhigh′(BT′,LT′), where BT′ is the extension of BT, which can be shown as BT′=BT∪O∪D, and LT′ is the extension of LT, which is shown as LT′=LT∪L(O)∪L(D), L(O) denotes the set of routes from original node to boundary nodes in the corresponding subnetwork, and L(D) denotes the set of routes from boundary nodes to destination node in the corresponding subnetworks, which can be shown as(11)LO=o,u∣o,u∈O×BGi(12)LD=u,d∣u,d∈BGj×Dwhere O is the set of original nodes, D is set of destination nodes, and Gi and Gj are the corresponding subnetworks of O and D.
### 2.2. Multilevel Network Based Hierarchical Reinforcement Learning
#### 2.2.1. Hierarchical Sarsa Learning
Hierarchical reinforcement learning (HRL)[32] decomposes a reinforcement learning task into a hierarchy of subtasks so that lower-level child tasks can be invoked by higher-level parent tasks to reduce computing time and searching space.In this study, the route guidance tasks are decomposed according to the structure of the Multilevel Network. As shown in Figure2, the guidance in the higher level network (the selected series of links in the higher level network) determines the subtasks in the subnetworks. It guides vehicles from a node in the subnetwork to a boundary node or a destination node in this subnetwork. For example, as shown in Figure 3, the vehicle guidance on the original network is decomposed into guidance on three subnetworks, which can be seen as follows:(i)
Vehicle departs from original nodeO and arrives at boundary node Bi in subnetwork G1;(ii)
Vehicle departs from boundary nodeBi and arrives at boundary node Bj in subnetwork G2;(iii)
Vehicle departs from boundary nodeBj and arrives at destination node D in subnetwork G3.Figure 2
An example of vehicle guidance in the higher level network.Figure 3
An example of decomposition of route guidance.In the hierarchical Sarsa learning model, the agent is the CDRGS in each road network (both subnetworks and higher level network), and the purpose of the CDRGS is to guide all the vehicles in the traffic road network and to pursue the optimal travelling time. For each agent, the state is continuous, which is the positions and destinations of all the vehicles in the corresponding subnetwork (or higher level network); the description of the continuous state space of any graphGi can be shown as follows:(13)StatecGi=pvel1,dvel1,…,pvelj,dvelj,…where Gi is the ith subnetwork, velj∈VEL(Gi) are the vehicles in Gi, p(velj) is the position of vehicle velj, and d(velj) is the destination of vehicle d(velj).In order to reduce the state space, the discrete states which are the nodes and destinations of each vehicle are adopted. In the original network, the state space isStated(G); with the Multilevel Network structure, the state space is reduced, each subnetwork has the state space Stated(Gi), the state space of higher level network is Stated(Ghigh), and the function can be seen as follows:(14)StatedGi=vvel1,dvel1,…,vvelj,dvelj,…where Gi is the ith subnetwork, velj∈VEL(Gi) are the vehicles in subnetwork Gi, v∈Vi, Vi is the set of nodes in subnetwork Gi, v(velj) is the nearest node in front of vehicle velj, and d(velj) is the destination node of vehicle d(velj).The action of each agent is an array which is composed of selections of next guided link of each vehicle, which is shown as follows:(15)ActionGi=evel1,…,evelj,…where e(.)∈Ei is the guided next link of vehicle, and Ei is the set of links in subnetwork Gi.According to theAction(Gi), as shown in Figure 4, in each network (both higher level network and subnetwork), vehicles would receive their guidance information. And the passing time which is the time spent by each vehicle in the corresponding link composes the penalty; the penalty can be seen as follows:(16)PGi=tvel1,tvel2,…,tvelj,…where t(velj) is passing time of vehicle velj for the link e(velj).Figure 4
Demonstration of vehicle guidance in the network.Q-value matrix is used to guide vehicles in each subnetwork and higher level network, in which each Q-value represents the estimate optimal traveling time from the corresponding link to the destination. The proposed vehicle guidance method on both level networks is based on Sarsa learning. The equation of updating Q-values in the matrix with Sarsa learning method is shown as follows:(17)Qdi,j←Qdi,j+α∗tij+γ∗Qdj,k-Qdi,jwhere Qd(i,j) is the estimated optimal traveling time to destination d for each vehicle which selects moving to node j in node i; tij is the travelling time of the latest passing time of link sij; k is the node belonging to F(j) (the set of nodes connected from node j), through which vehicles travel to destination d after they passed link sij; α is the learning rate. γ is the discount rate.Boltzmann distribution [33] is adopted as the probability distribution of action selection in this study which can balance the exploration and exploitation of action selection according to the Q-values. The probability model of action selection is shown as follows:(18)pdi,j←e-1/τQdi,j/EQdi∑j∈Aie-1/τQdi,j/EQdiwhere EQ(i) is the average Q-value from node i to destination d; τ is temperature.(19)τ=τmax1+e-αNV-βwhere τmax,α,β are constants; NV is the total number of vehicles in the road network.
#### 2.2.2. Optimizing Multilevel Network Structure
In this study, in order to accelerate the convergence of reinforcement learning in the Multilevel Network, the structure of the Multilevel Network should be considered. Both state action space of subnetworks and higher level network can be optimized with clustering method. Two objective functions have been considered, which are described as follows:(20)∑BiT∗SGi(21)SGhighwhere S(.) is the searching space of the road network, and it can be calculated as follows:(22)SG=∏i=1VEviwhere E(v) is the number of links departing from node v if the set is not null; otherwise it is 1.
## 2.1. Multilevel Network Model
Multilevel Network is constructed by dividing the original network into several subnetworks. The example of two-level network can be seen as Figure1. The boundary nodes of subnetworks and the optimal routes between them are nodes and links on higher level network.Figure 1
An example of Multilevel Network.In this model, the topographical road map is seen as the directed networkG(V,E), where V denotes the set of nodes of road network and E denotes the set of links of road network; i.e., sij corresponds to the link from node i to node j. The cost of it in this model is measured by the traveling time. IfG(V,E) can be divided intom subnetworks like G1(V1,E1),G2(V2,E2),…,Gm(Vm,Em) then(1)V=V1∪V2∪⋯∪Vm,E=E1∪E2∪⋯∪EmIn the subnetwork, the nodes can be divided into two categories: interior nodes and boundary nodes. A node is a boundary node if it belongs to more than one subnetwork, and vice versa.The Multilevel Network model is shown as follows.Indices. i , j , r ∈ { 1,2 , … , n }, index of node.Parameters n: the number of nodes. o: origin node. d: destination node. R ( o , d ): a route from o to d. s i j k: link from node i to node j in level k. k: index of the level in Multilevel Network, k∈{1,2,…,Kmax}. K m a x: the maximum level of Multilevel Network. n k: the number of nodes in level k of Multilevel Network. c i j k: cost of link sij in level k of Multilevel Network. F ( r ): set of nodes connected from node r. T ( r ): set of nodes connected to node r.Decision Variables (2) x i j k = 1 , i f a n d o n l y i f l i n k s i j i s i n c l u d e d i n R o , d i n l e v e l k 0 , o t h e r w i s eThe optimal path on Multilevel Network can be calculated as follows:(3)min∑k=1Kmax∑i=1nk∑j=1nkcijkxijk(4)s.t.∑j∈Frxrjk-∑i∈Trxirk=1r=o0r∈V∖o,d-1r=d(5)xijk∈0,1,∀i,j,kwhere constraints (4) and (5) can ensure the flow conservation rule to be observed for V∖{o,d}.TheKmax is set as 2 in the simulations of this study.We useGhigh(V′,E′) to represent the higher level network, where V′ and E′ are the set of nodes and links of higher level network, respectively.The set of boundary nodes between any subnetworksGi(Vi,Ei) and Gj(Vj,Ej) is Vi∩Vj, where i≠j. We use B(Gi) to represent the set of boundary nodes of subnetwork Gi(Vi,Ei). Then,(6)BGi=⋃j=1mVi∩Vj,wherei=1,2,…,m,j≠i.LetBT represent the set of the boundary nodes:(7)BT=⋃i=1mBGiwhereV′=BT.Links of the higher level network are calculated and generated based onBT. In Gi(Vi,Ei), we use l(u, v) to represent the optimal route between any node pair u and v in B(Gi); the cost function fc(u, v) of l(.) is shown as follows:(8)fcu,v=lu,vifthereisaroutefromutovonGiVi,Eiwithoutanyotherboundarynodeontheroute;∅otherwise.For subnetworkGi(Vi,Ei), let(9)LGi=u,v∣u,v∈BGi×BGiLet LT represent the set of links of the higher level network:(10)LT=⋃i=1mLGiwhereE′=LTIn order to guide vehicles in this structure, once the OD pairs are determined, the higher level network is extended, the extension of higher level network can be denoted asGhigh′(BT′,LT′), where BT′ is the extension of BT, which can be shown as BT′=BT∪O∪D, and LT′ is the extension of LT, which is shown as LT′=LT∪L(O)∪L(D), L(O) denotes the set of routes from original node to boundary nodes in the corresponding subnetwork, and L(D) denotes the set of routes from boundary nodes to destination node in the corresponding subnetworks, which can be shown as(11)LO=o,u∣o,u∈O×BGi(12)LD=u,d∣u,d∈BGj×Dwhere O is the set of original nodes, D is set of destination nodes, and Gi and Gj are the corresponding subnetworks of O and D.
## 2.2. Multilevel Network Based Hierarchical Reinforcement Learning
### 2.2.1. Hierarchical Sarsa Learning
Hierarchical reinforcement learning (HRL)[32] decomposes a reinforcement learning task into a hierarchy of subtasks so that lower-level child tasks can be invoked by higher-level parent tasks to reduce computing time and searching space.In this study, the route guidance tasks are decomposed according to the structure of the Multilevel Network. As shown in Figure2, the guidance in the higher level network (the selected series of links in the higher level network) determines the subtasks in the subnetworks. It guides vehicles from a node in the subnetwork to a boundary node or a destination node in this subnetwork. For example, as shown in Figure 3, the vehicle guidance on the original network is decomposed into guidance on three subnetworks, which can be seen as follows:(i)
Vehicle departs from original nodeO and arrives at boundary node Bi in subnetwork G1;(ii)
Vehicle departs from boundary nodeBi and arrives at boundary node Bj in subnetwork G2;(iii)
Vehicle departs from boundary nodeBj and arrives at destination node D in subnetwork G3.Figure 2
An example of vehicle guidance in the higher level network.Figure 3
An example of decomposition of route guidance.In the hierarchical Sarsa learning model, the agent is the CDRGS in each road network (both subnetworks and higher level network), and the purpose of the CDRGS is to guide all the vehicles in the traffic road network and to pursue the optimal travelling time. For each agent, the state is continuous, which is the positions and destinations of all the vehicles in the corresponding subnetwork (or higher level network); the description of the continuous state space of any graphGi can be shown as follows:(13)StatecGi=pvel1,dvel1,…,pvelj,dvelj,…where Gi is the ith subnetwork, velj∈VEL(Gi) are the vehicles in Gi, p(velj) is the position of vehicle velj, and d(velj) is the destination of vehicle d(velj).In order to reduce the state space, the discrete states which are the nodes and destinations of each vehicle are adopted. In the original network, the state space isStated(G); with the Multilevel Network structure, the state space is reduced, each subnetwork has the state space Stated(Gi), the state space of higher level network is Stated(Ghigh), and the function can be seen as follows:(14)StatedGi=vvel1,dvel1,…,vvelj,dvelj,…where Gi is the ith subnetwork, velj∈VEL(Gi) are the vehicles in subnetwork Gi, v∈Vi, Vi is the set of nodes in subnetwork Gi, v(velj) is the nearest node in front of vehicle velj, and d(velj) is the destination node of vehicle d(velj).The action of each agent is an array which is composed of selections of next guided link of each vehicle, which is shown as follows:(15)ActionGi=evel1,…,evelj,…where e(.)∈Ei is the guided next link of vehicle, and Ei is the set of links in subnetwork Gi.According to theAction(Gi), as shown in Figure 4, in each network (both higher level network and subnetwork), vehicles would receive their guidance information. And the passing time which is the time spent by each vehicle in the corresponding link composes the penalty; the penalty can be seen as follows:(16)PGi=tvel1,tvel2,…,tvelj,…where t(velj) is passing time of vehicle velj for the link e(velj).Figure 4
Demonstration of vehicle guidance in the network.Q-value matrix is used to guide vehicles in each subnetwork and higher level network, in which each Q-value represents the estimate optimal traveling time from the corresponding link to the destination. The proposed vehicle guidance method on both level networks is based on Sarsa learning. The equation of updating Q-values in the matrix with Sarsa learning method is shown as follows:(17)Qdi,j←Qdi,j+α∗tij+γ∗Qdj,k-Qdi,jwhere Qd(i,j) is the estimated optimal traveling time to destination d for each vehicle which selects moving to node j in node i; tij is the travelling time of the latest passing time of link sij; k is the node belonging to F(j) (the set of nodes connected from node j), through which vehicles travel to destination d after they passed link sij; α is the learning rate. γ is the discount rate.Boltzmann distribution [33] is adopted as the probability distribution of action selection in this study which can balance the exploration and exploitation of action selection according to the Q-values. The probability model of action selection is shown as follows:(18)pdi,j←e-1/τQdi,j/EQdi∑j∈Aie-1/τQdi,j/EQdiwhere EQ(i) is the average Q-value from node i to destination d; τ is temperature.(19)τ=τmax1+e-αNV-βwhere τmax,α,β are constants; NV is the total number of vehicles in the road network.
### 2.2.2. Optimizing Multilevel Network Structure
In this study, in order to accelerate the convergence of reinforcement learning in the Multilevel Network, the structure of the Multilevel Network should be considered. Both the state-action spaces of the subnetworks and the state-action space of the higher level network can be optimized with a clustering method. Two objective functions have been considered:

$$\sum_{i} B_i^T \ast S(G_i) \quad (20)$$

$$S(G_{high}) \quad (21)$$

where $S(\cdot)$ is the searching space of a road network, calculated as

$$S(G) = \prod_{i=1}^{|V|} E(v_i) \quad (22)$$

where $E(v)$ is the number of links departing from node $v$ if that set is not null; otherwise it is 1.
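To make (22) concrete, here is a minimal sketch of the searching-space computation; representing the network by an array of out-degrees is an assumption made for the example.

```java
public final class SearchSpace {
    // Eq. (22): S(G) = product over all nodes v of E(v),
    // where E(v) is the out-degree of v, or 1 if v has no outgoing links.
    public static double searchSpace(int[] outDegree) {
        double s = 1.0;
        for (int e : outDegree) {
            s *= (e > 0) ? e : 1;
        }
        return s;
    }

    public static void main(String[] args) {
        // Toy network: four nodes with out-degrees 2, 3, 1, 0.
        int[] outDegree = {2, 3, 1, 0};
        System.out.println(searchSpace(outDegree)); // 2 * 3 * 1 * 1 = 6.0
    }
}
```

Since this product grows astronomically with network size, the fitness function in Section 3.1 works with its logarithm.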
## 3. Differential Evolution Based Clustering Method
Ding et al. [17] divided heterogeneous networks into homogeneous subregions with small variances in link densities, such that each subregion has a well-defined MFD shape. The proposed method requires multiple homogeneous, similarly scaled subnetworks and a virtual higher level network that can effectively assign traffic flows among them. In this section, a Differential Evolution based clustering method is used to generate this Multilevel Network structure offline.
### 3.1. DE Based Clustering Method
DE [27, 28] is a well-known direction-based evolutionary method that can search for the optimal solution effectively in a large-scale searching space. In order to construct a proper Multilevel Network structure, diverse individuals should be maintained in the population, and an effective evolution direction is necessary; DE is therefore selected as the clustering method.

In the proposed method, the decoding operator clusters the road network; after decoding, each gene in the chromosome becomes a subnetwork. In other words, subnetwork $G_i(V_i, E_i)$ is cluster $i$ of the clustering result of the corresponding chromosome.

In order to accelerate the convergence of reinforcement learning in the Multilevel Network, two factors are considered when the Multilevel Networks are constructed: the convergence efficiency of reinforcement learning on each subnetwork, and the convergence efficiency of reinforcement learning on the higher level network. Therefore, there are two objective functions: minimizing the state-action space of all subnetworks in (23) and minimizing the state-action space of the higher level network in (24):

$$\sum_{i} B_i^T \ast S(G_i) \quad (23)$$

$$S(G_{high}) \quad (24)$$

In order to pursue these two objectives simultaneously, the following fitness function is used:

$$\text{Fitness} = \log\Big(\sum_{i} B_i^T \ast S(G_i)\Big) + \log S(G_{high}) \quad (25)$$
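A minimal sketch of the fitness evaluation in (25) follows, reusing `searchSpace` from the earlier example. Interpreting $B_i^T$ as the number of boundary/destination nodes of subnetwork $G_i$ is an assumption here; the paper does not define the symbol explicitly.

```java
public final class ClusteringFitness {
    // Eq. (25): Fitness = log(sum_i B_i^T * S(G_i)) + log(S(G_high)).
    // numTargets[i]  : assumed |B_i^T|, boundary/destination node count of G_i.
    // subSpaces[i]   : S(G_i), from Eq. (22).
    // highLevelSpace : S(G_high).
    public static double fitness(int[] numTargets, double[] subSpaces,
                                 double highLevelSpace) {
        double sum = 0.0;
        for (int i = 0; i < subSpaces.length; i++) {
            sum += numTargets[i] * subSpaces[i];
        }
        return Math.log(sum) + Math.log(highLevelSpace);
    }
}
```

Working in log-space keeps the two product-based objectives on comparable scales and avoids numerical overflow for large networks.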
### 3.2. Genetic Representation
When the Multilevel Network structure is constructed by the DE based clustering method, the number of clusters has a strong influence on the number of nodes and links of the higher level network [34], which in turn affects the two objective functions. An appropriate number of clusters should therefore be found to optimize the structure of the Multilevel Network.

In this study, in order to obtain a proper number of clusters, two vectors, a coordinate value vector and an available vector, are defined in the chromosome. Each element in the coordinate value vector corresponds to the element in the same position of the available vector. The maximum length of these vectors is $M$; the coordinate value vector represents cluster centroids, and each number in the available vector represents the validity of the corresponding centroid: if the number is bigger than the threshold value, the corresponding centroid is valid, and vice versa.

The decoding procedure is the clustering procedure, in which the Multilevel Network structure is generated from each valid gene.
### 3.3. Differential Evolution
The DE operator for any individual $x_i$ is

$$l_i = r_1 + F(r_2 - r_3), \quad r_1 \neq r_2 \neq r_3 \neq x_i \quad (26)$$

where $r_1$, $r_2$, and $r_3$ are three distinct individuals randomly selected from the population, $l_i$ is the mutant of $x_i$, $(r_2 - r_3)$ forms a difference vector, and $F$, a positive real number, controls the length of that vector. The overall procedure of the DE based clustering method is given in Algorithm 1, and a sketch of the mutation step follows it.

Algorithm 1: Procedure of DE based clustering.
    input : road network data, DE parameters
    output: optimal solutions E(P)
    begin
        current generation t <- 0
        initialize population P(t)
        generate a Multilevel Network according to each chromosome
        evaluate each Multilevel Network
        while not termination condition do
            for each individual x_i^t do
                if random(0,1) < PC then
                    select distinct individuals r_1^t, r_2^t, r_3^t from P(t) at random
                    l_i^t = r_1^t + F * (r_2^t - r_3^t)
                    generate a Multilevel Network according to the mutant chromosome
                    evaluate the Multilevel Network
                end if
            end for
            t <- t + 1
            for each individual x_i^t do
                if Fitness(l_i^{t-1}) < Fitness(x_i^{t-1}) then x_i^t = l_i^{t-1}
                else x_i^t = x_i^{t-1}
            end for
        end while
    end
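A minimal sketch of the mutation step (26) inside the loop of Algorithm 1; the flat `double[]` chromosome encoding (centroid coordinates plus availability entries, following Section 3.2) is an assumption for illustration, and a population of at least four individuals is required.

```java
import java.util.Random;

public final class DEMutation {
    private static final Random RNG = new Random();

    // Eq. (26): l_i = r1 + F * (r2 - r3), with r1, r2, r3 distinct and != i.
    // population[k] is the flat chromosome of individual k.
    public static double[] mutate(double[][] population, int i, double f) {
        int n = population.length; // assumed >= 4
        int r1, r2, r3;
        do { r1 = RNG.nextInt(n); } while (r1 == i);
        do { r2 = RNG.nextInt(n); } while (r2 == i || r2 == r1);
        do { r3 = RNG.nextInt(n); } while (r3 == i || r3 == r1 || r3 == r2);

        double[] mutant = new double[population[i].length];
        for (int g = 0; g < mutant.length; g++) {
            mutant[g] = population[r1][g] + f * (population[r2][g] - population[r3][g]);
        }
        return mutant;
    }
}
```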
## 4. Hierarchical Sarsa Learning Based Route Guidance Algorithm
### 4.1. Overall Procedure
After generating the optimized Multilevel Network structure, the proposed hierarchical Sarsa learning based route guidance algorithm (HSLRG) can be divided into 3 stages:

(i) Initializing stage: initialize the Q-values of all the boundary nodes and destination nodes in the Multilevel Network.

(ii) Route guidance stage: guide vehicles in the higher level network and the subnetworks.

(iii) Updating stage: update the Q-values of all the boundary nodes and destination nodes in the Multilevel Network.

Before each updating stage, the CDRGS collects travelling information from the environment; during that period, the CDRGS guides vehicles with the Q-values updated in the last updating stage. The overall procedure of the proposed HSLRG is shown as Algorithm 2.

Algorithm 2: Overall procedure of the Hierarchical Sarsa Learning based route guidance algorithm.
    begin
        // Initializing stage
        Initialization routine
        while not termination condition do
            while at updating interval do
                // Route guidance stage
                Route guidance routine
            end while
            // Updating stage
            Updating routine
        end while
    end
### 4.2. Initializing Q-Values
Q-value based dynamic programming is adopted to initialize the Q-values of Sarsa in the Multilevel Network, and the Q-values are iteratively calculated by the following equation:

$$Q_d^n(i,j) = t_{ij}^c + \min_{k \in F(j)} Q_d^{n-1}(j,k), \quad i \in I - \{d\} - B_d,\; j \in F(i) \quad (27)$$

where $I$ is the set of nodes; $d \in D$ is a destination in the set of destinations; $s_{ij}$ is the link departing from node $i$ to node $j$; $t_{ij}^c$ is the historical traveling time of link $s_{ij}$; and $F(i)$ is the set of nodes reachable by links departing from node $i$. The procedure of initialization is given as Algorithm 3.

Algorithm 3: Procedure of Initialization.
    begin
        // Initializing Q-values of B_T' in each subnetwork
        for each d in B_T' do
            initialize Q_d according to Eq. (27) in the corresponding subnetwork
        end for
        // Initializing Q-values of D in the higher level network
        for each d in D do
            initialize Q_d according to Eq. (27) in G_high'
        end for
    end
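Under the assumption that the network is stored as successor lists with historical link times, a minimal value-iteration sketch of (27) might look as follows; names such as `initQ` are illustrative, not the paper's.

```java
import java.util.Arrays;

public final class QInitializer {
    /**
     * Eq. (27): Q_d^n(i,j) = t^c_ij + min_{k in F(j)} Q_d^{n-1}(j,k), iterated.
     * succ[i] lists the successor nodes of i; histTime[i][m] is t^c for the
     * link from i to succ[i][m]; sweeps bounds the number of iterations.
     */
    public static double[][] initQ(int[][] succ, double[][] histTime,
                                   int dest, int sweeps) {
        int n = succ.length;
        double[][] q = new double[n][];
        for (int i = 0; i < n; i++) {
            q[i] = new double[succ[i].length];
            Arrays.fill(q[i], Double.POSITIVE_INFINITY);
        }
        for (int s = 0; s < sweeps; s++) {
            for (int i = 0; i < n; i++) {
                for (int m = 0; m < succ[i].length; m++) {
                    int j = succ[i][m];
                    // At the destination the remaining cost is zero.
                    double best = (j == dest) ? 0.0 : minOver(q[j]);
                    if (best < Double.POSITIVE_INFINITY) {
                        q[i][m] = histTime[i][m] + best;
                    }
                }
            }
        }
        return q; // q[i][m] estimates the travel time to dest via succ[i][m]
    }

    private static double minOver(double[] row) {
        double best = Double.POSITIVE_INFINITY;
        for (double v : row) best = Math.min(best, v);
        return best;
    }
}
```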
### 4.3. Route Guidance Procedure
In the HSLRG, the guidance is based on Sarsa learning in the Multilevel Network. The guidance in the higher level network determines the actual destinations of vehicles in each subnetwork. The route guidance procedure for each vehicle of the CDRGS can be divided into 3 steps, as follows.

Step 1. Guide the vehicle in the higher level network with Algorithm 4 and get the selected link (the subtask on the subnetwork).

Step 2. According to the result of Step 1, guide the vehicle in the subnetwork with Algorithm 4 until the vehicle reaches a boundary node or its destination.

Step 3. If the vehicle has not reached its destination, return to Step 1. A sketch of this three-step loop follows Algorithm 4 below.

Algorithm 4: Procedure of Route Guidance.

    input : vehicle v, destination d
    output: next link s_jk
    begin
        get the current link s_ij of vehicle v
        // Calculate the probabilities of the next links according to Eq. (18):
        p_d(j,k) <- exp(-(1/tau) * Q_d(j,k) / EQ_d(j))
                    / sum_{k in A(j)} exp(-(1/tau) * Q_d(j,k) / EQ_d(j))
        // Select the next link:
        choose s_jk according to p_d(j,k)
    end
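A minimal sketch of the three-step hierarchical loop, assuming a `selectNextLink` routine implementing Algorithm 4 (e.g., via `boltzmannSelect` from the earlier sketch); all type and method names are illustrative assumptions.

```java
public final class HierarchicalGuidance {
    public interface Link { int endpoint(); }

    public interface Vehicle {
        boolean atDestination();
        boolean atNode(int node);
        int destination();
        int currentSubnetwork();
        void traverse(Link l);
    }

    public interface Network {
        Link selectNextLink(Vehicle v, int goal); // Algorithm 4
    }

    // Steps 1-3 of Section 4.3: alternate between higher level guidance
    // (which fixes the subtask) and subnetwork guidance (which executes it).
    public static void guide(Vehicle v, Network high, Network[] subs) {
        while (!v.atDestination()) {
            // Step 1: the higher level link's endpoint is the boundary
            // (or destination) node the subnetwork must reach.
            Link highLink = high.selectNextLink(v, v.destination());
            int subGoal = highLink.endpoint();
            Network sub = subs[v.currentSubnetwork()];
            // Step 2: guide inside the subnetwork until the subgoal is met.
            while (!v.atNode(subGoal) && !v.atDestination()) {
                v.traverse(sub.selectNextLink(v, subGoal));
            }
            // Step 3: loop back to Step 1 if the destination is not reached.
        }
    }
}
```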
### 4.4. Updating Procedure
During the updating stage, the following steps are performed:

(i) Update the Q-values of $d \in B_T'$ in each subnetwork $G_i$.

(ii) Update the Q-values of $d \in D$ in the higher level network $G_{high}'$.

The procedure of updating is presented as Algorithm 5.

Algorithm 5: Procedure of Updating.
    input : destination d, network G
    output: Q_d^n
    begin
        Q_d^{n-1} <- Q_d^n
        for each link s_ij in L(G) do
            if t_ij^d != null then
                Q_d^n(i,j) <- Q_d^{n-1}(i,j)
                              + alpha * (t_ij^d + gamma * Q_d^{n-1}(j,k) - Q_d^{n-1}(i,j))
            end if
        end for
    end

The Q-value updates for each subnetwork and for the higher level network are independent of each other, so the updating stage of the proposed method is designed to run in parallel; its time complexity is $O(|D_G| \ast |L_G|)$, where $|D_G|$ and $|L_G|$ are the numbers of elements in the destination set and the link set of road network $G$, respectively.
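Because the per-destination, per-network updates touch disjoint Q-tables, they map naturally onto a parallel stream. A minimal sketch follows; the `DestinationUpdate` callback standing in for one Algorithm 5 pass is an assumption for illustration.

```java
import java.util.List;

public final class ParallelUpdater {
    @FunctionalInterface
    public interface DestinationUpdate {
        void run(int destination); // one Algorithm 5 pass for one destination
    }

    // Updates for different destinations (and different networks) are
    // independent, so they can be executed concurrently. Overall work is
    // O(|D_G| * |L_G|): one pass over the links per destination.
    public static void updateAll(List<Integer> destinations,
                                 DestinationUpdate update) {
        destinations.parallelStream().forEach(update::run);
    }
}
```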
## 5. Simulation
In this study, the SUMO [35] simulator is used to implement the experiments with three different digital road networks, as shown in Table 1. All the algorithms were coded in Java, and a PC with an 8-core Xeon E5-2640 v3 2.60 GHz processor and 128 GB of RAM running Linux (CentOS 6.6) was used for all the experiments. Our experiments are conducted on real networks representing various roads of Japan (Experiment 1 and Experiment 2) and the US (Experiment 3). The Japan digital road maps are taken from the Japan Digital Road Map Association (JDRMA). The US digital network is provided by the Topologically Integrated Geographic Encoding and Referencing (TIGER)/Line collection, available at http://www.diag.uniroma1.it/challenge9/data/tiger/. In the simulation, one time step corresponds to one second, and the length of each simulation is set as 15000 time steps.

Table 1: Data of experiments.
| Item | Experiment 1 | Experiment 2 | Experiment 3 |
| --- | --- | --- | --- |
| Number of nodes | 1500 | 1800 | 3500 |
| Number of links | 4620 | 5488 | 11310 |
| Number of OD | 33 | 33 | 100 |
| Number of OD pairs | 1089 | 1089 | 10000 |
| Vehicle departure rate of each origin node | 7 seconds per vehicle | 7 seconds per vehicle | 8 seconds per vehicle |
### 5.1. Multilevel Network
The DE based clustering method is used to generate the Multilevel Network of each experiment. The evolution process is shown in Figure 5, where the x-axis is the generation and the y-axis is the average fitness of the individuals in the population. The results of the DE are given in Table 2.

Table 2: Results of DE based clustering method.

| Item | Experiment 1 | Experiment 2 | Experiment 3 |
| --- | --- | --- | --- |
| Number of clusters | 12 | 11 | 21 |
| Fitness | 228 | 229 | 607 |

Figure 5: The evolution process of the road network in Experiment 1, Experiment 2, and Experiment 3.

It can be seen that the DE based clustering method effectively reduced the fitness during the evolution process, and the Multilevel Network structure used in the proposed algorithm has thus been greatly optimized.
### 5.2. Comparing Method
In the experiments, the Dijkstra algorithm (DA) and Sarsa learning based route guidance on the original road network are adopted for comparison with the proposed method.

(1) Dijkstra algorithm (DA): DA is adopted to represent the static shortest-route method; it recalculates the routes every 60 time steps based on real-time traffic information, which is assumed to be collected in this study.

(2) Sarsa learning based route guidance on the original road network: in order to evaluate the efficiency of the Multilevel Network based route guidance method, Sarsa learning with the Boltzmann distribution algorithm (SLWBD), which only considers route guidance on the original road network, is adopted as a comparison method in the simulations. The Boltzmann distribution is selected as the action-selection method, and the Q-values are updated with (17) every 60 time steps.
### 5.3. Evaluation
Two kinds of criteria are adopted to evaluate the performance of the route guidance algorithms:

(1) The number of vehicles in the traffic system, $NumV$.

(2) The average traveling time of the vehicles arriving at their destinations within a period of time, calculated as

$$aveT(t) = \frac{\sum_{i=1}^{N(t)} T(v_i)}{N(t)} \quad (28)$$

where $t$ is the time step, $N(t)$ is the total number of vehicles arriving at their destinations in the period ending at $t$, $v_i$ is one of the vehicles that reached its destination in the period, and $T(v_i)$ is the traveling time of vehicle $v_i$.

These figures are estimated every 100 time steps, and the period length is set as 100 time steps. The two criteria reflect the traffic condition in the road network: a lower $NumV$ means less congestion in the road network, while a lower $aveT$ reflects that vehicles were guided along better routes and spent less time waiting in the road network. The two criteria are therefore also used to evaluate whether the HSLRG has converged. A sketch of the $aveT$ computation follows.
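A minimal sketch of the $aveT$ computation in (28); collecting the per-window travel times of arrived vehicles into an array is an assumed input format.

```java
public final class Metrics {
    // Eq. (28): aveT(t) = sum of travel times of vehicles that arrived
    // in the current 100-step window, divided by their count N(t).
    public static double aveT(double[] travelTimesInWindow) {
        if (travelTimesInWindow.length == 0) return 0.0; // no arrivals
        double sum = 0.0;
        for (double t : travelTimesInWindow) sum += t;
        return sum / travelTimesInWindow.length;
    }
}
```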
### 5.4. Experiment
In this part, simulations are conducted to evaluate the performance of the proposed HSLRG. In order to evaluate the performance of the proposed method, the drivers' acceptance of guidance is assumed to be 100%. The updating interval of the higher level network is set as 30 time steps, and the updating interval of the subnetworks is 60 time steps. The data shown in the following tables are the averages of multiple independent simulations. In order to accelerate the convergence of reinforcement learning at the early stage of the simulation and to keep the Q-values stable at the middle and final stages, the learning rate $\alpha$ of Sarsa learning is varied with the simulation time step. The concept of Simulated Annealing [36] is introduced, giving the schedule (a code sketch of this schedule appears below, before Table 3)

$$\alpha = a \ast \left(1 - \frac{t}{MAXTIME}\right)^b + minimum_\alpha \quad (29)$$

where $t$ is the current time of the simulation, $MAXTIME$ is the total simulation time, $a$ and $b$ are constants, and $minimum_\alpha$ is the lower limit of $\alpha$.

Table 3 presents the results of Experiment 1, Experiment 2, and Experiment 3. Figures 6(a)-6(f) show the $NumV$ and $aveT$ of these experiments, respectively. Table 4 shows the mean and standard deviation (Std) of these experiments.
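A minimal sketch of the annealed learning-rate schedule in (29); the constant values in `main` are illustrative assumptions, not the paper's settings.

```java
public final class LearningRateSchedule {
    // Eq. (29): alpha = a * (1 - t/MAXTIME)^b + minimum_alpha.
    // Decays from about a + minAlpha at t = 0 down to minAlpha at t = MAXTIME.
    public static double alpha(double t, double maxTime,
                               double a, double b, double minAlpha) {
        return a * Math.pow(1.0 - t / maxTime, b) + minAlpha;
    }

    public static void main(String[] args) {
        // Illustrative constants only: a = 0.5, b = 2, minimum_alpha = 0.05.
        for (int t = 0; t <= 15000; t += 5000) {
            System.out.printf("t=%d alpha=%.4f%n",
                              t, alpha(t, 15000, 0.5, 2, 0.05));
        }
    }
}
```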
Table 3: Results of Experiments.
| Experiment | Algorithm | Number of arriving vehicles | Mean duration of each vehicle | Mean route length of vehicle |
| --- | --- | --- | --- | --- |
| Experiment 1 | HSLRG | 63,862.3 | 308.03 | 1,880.81 |
| | SLWBD | 63,542.67 | 616.88 | 6,876.22 |
| | DA | 49,604.25 | 757.67 | 3,931.18 |
| Experiment 2 | HSLRG | 70,572.33 | 194.81 | 1,639.80 |
| | SLWBD | 62,193.4 | 670.91 | 5,127.49 |
| | DA | 51,193.1 | 1,087.06 | 4,792.38 |
| Experiment 3 | HSLRG | 95,919.17 | 456.12 | 4,029.92 |
| | SLWBD | 50,690.11 | 2,738.79 | 23,735.61 |
| | DA | 78,696.5 | 1,384.96 | 9,160.27 |

Table 4: Mean and Std of Experiment results.
| Experiment | Algorithm | Mean NumV | Std NumV | Mean aveT | Std aveT |
| --- | --- | --- | --- | --- | --- |
| 1 | HSLRG | 1055 | 296.9 | 236.6 | 108.5 |
| | SLWBD | 2578 | 609.8 | 576 | 163.9 |
| | DA | 2969 | 859.6 | 789 | 282.2 |
| 2 | HSLRG | 975.4 | 174.1 | 190 | 73.43 |
| | SLWBD | 2606 | 523.5 | 560 | 125.3 |
| | DA | 4347 | 1502 | 1094 | 477.4 |
| 3 | HSLRG | 3644 | 711.7 | 498.5 | 137.1 |
| | SLWBD | 27130 | 13160 | 2606 | 1382 |
| | DA | 9943 | 4463 | 1370 | 628.5 |

Figure 6: The results of Experiments. Panels: (a) the $NumV$ of Experiment 1; (b) the $aveT$ of Experiment 1; (c) the $NumV$ of Experiment 2; (d) the $aveT$ of Experiment 2; (e) the $NumV$ of Experiment 3; (f) the $aveT$ of Experiment 3.

As shown in Figures 6(a)-6(f), HSLRG has lower evaluation values than SLWBD and DA during almost the entire simulations. These data indicate that HSLRG is suitable for guiding vehicles in large-scale road networks: it can alleviate congestion and reduce the traveling time and traveling distance of vehicles. In Figures 6(a)-6(d), the $NumV$ and $aveT$ of HSLRG and SLWBD begin decreasing after the early stage of the simulation (about 5000 time steps in Experiment 1 and about 2000 time steps in Experiment 2), while, as shown in Figures 6(e) and 6(f), the evaluation values of SLWBD increased dramatically throughout the whole 15000 time steps. The data indicate that SLWBD has reasonable performance in road networks of limited size but performs poorly in larger-scale road networks. As Figures 6(a)-6(f) show, the measured values of DA increased continuously. This indicates that DA is not a proper method for route guidance in dynamic environments; the main reason is that DA only considers static shortest routes, which may cause negative behavioral phenomena in a dynamic transportation system, including overreaction and concentration phenomena. As shown in Table 4, from the mean and Std of $NumV$ and $aveT$, we can see that the performance of the proposed HSLRG dominates that of SLWBD and DA, which demonstrates the effectiveness of the proposed HSLRG.

As shown in Table 3, HSLRG has the best performance in all the experiments and outweighs the other two methods: vehicles guided by this algorithm have not only the largest number of arrivals at their destinations and the least mean traveling time, but also the least traveling distance. SLWBD performs better than DA in Experiment 1 and Experiment 2 but worse in Experiment 3. This indicates that Sarsa learning based route guidance on the original road network is not suitable for guiding vehicles in large-scale road networks, because the convergence speed of reinforcement learning depends on the scale of the searching space, which grows exponentially with the scale of the road network. The proposed HSLRG introduces an optimized Multilevel Network structure, in which route guidance on the subnetworks and route guidance on the higher level network are combined to compress the searching space of the traffic system; thus, the proposed HSLRG can greatly enhance the efficiency of the CDRGS.
## 6. Conclusion
In this paper, we have proposed the hierarchical Sarsa learning based route guidance algorithm (HSLRG) to solve the route guidance problem in large-scale road networks. HSLRG applies the Multilevel Network method to reduce the state space of the traffic environment, which greatly accelerates the convergence of the route guidance algorithm. The effectiveness and efficiency of HSLRG were studied in three road networks of different scales. The simulation results show that, in large-scale road networks, HSLRG can guide vehicles to their destinations more effectively than SLWBD and DA. Guiding vehicles with multiple objectives and taking the individual preferences of drivers into account are worthwhile directions for future research.
---
*Source: 1019078-2019-06-27.xml* | 2019 |
# Possible Factors Influencing the Seroprevalence of Dengue among Residents of the Forest Fringe Areas of Peninsular Malaysia
**Authors:** Juraina Abd-Jamil; Romano Ngui; Syahrul Nellis; Rosmadi Fauzi; Ai Lian Yvonne Lim; Karuthan Chinna; Chee-Sieng Khor; Sazaly AbuBakar
**Journal:** Journal of Tropical Medicine
(2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1019238
---
## Abstract
Dengue is an endemic mosquito-borne viral disease prevalent in many urban areas of the tropics, especially Southeast Asia. Its presence among the indigenous population of Peninsular Malaysia (Orang Asli), however, has not been well described. The present study was performed to investigate the seroprevalence of dengue among the Orang Asli (OA) residing in the forest fringe areas of Peninsular Malaysia and to determine the factors that could affect the transmission of dengue among the OA. Eight OA communities consisting of 491 individuals were recruited. At least 17% of the recruited study participants were positive for dengue IgG, indicating past exposure to dengue. Analysis of the demographic and socioeconomic variables suggested that high dengue seroprevalence was significantly associated with age above 13 years and a low household income of less than MYR500 (USD150). It was also associated with the extensive presence of residential areas and the presence of a lake. Remote sensing analysis showed that higher land surface temperatures and lower land elevations also contributed to higher dengue seroprevalence. The present study suggests that both demographic and geographical factors contributed to the increasing risk of contracting dengue among the OA living in the forest fringe areas of Peninsular Malaysia. The OA, hence, remain vulnerable to dengue.
---
## Body
## 1. Introduction
Dengue is a mosquito-borne viral disease that causes an estimated 390 million infections annually, of which 96 million result in clinical manifestations [1]. The disease is caused by dengue virus (DENV), which is transmitted by Aedes sp. mosquitoes. There are four dengue virus serotypes: dengue type 1 virus (DENV-1), dengue type 2 virus (DENV-2), dengue type 3 virus (DENV-3), and dengue type 4 virus (DENV-4). All four DENV serotypes circulate in most of the dengue-endemic regions, such as Indonesia, Vietnam, Thailand, and Malaysia. Once a person is infected with the virus, dengue may manifest as a clinically unapparent or asymptomatic infection, as an undifferentiated fever, or as severe dengue.

Dengue is thought to have originated from the sylvatic cycle, in which the virus circulated among nonhuman primates and treetop-dwelling Aedes sp. mosquitoes such as Aedes niveus and A. luteocephalus [2]. An estimated 1,000 years ago, dengue spilled over into human populations [3] and became endemic following rapid, unplanned urbanization and massive population migration from rural to urban areas [4]. In the endemic human cycle, dengue is transmitted mainly by the vectors A. aegypti and A. albopictus [4, 5], which are widely found in the subtropical and tropical regions of the world. A. albopictus has been suggested to bridge the sylvatic and urban cycles of dengue owing to its abundance in rural and forested areas in comparison to A. aegypti [6, 7].

Malaysia is among the earliest countries to have reported dengue hyperendemicity and dengue hemorrhagic fever [8]. The dengue surveillance system implemented in Malaysia operates by receiving notifications of febrile dengue cases from both government and private hospitals and clinics. The system, however, does not wholly cover underserved and economically marginalized communities such as the indigenous people of Peninsular Malaysia, locally known as the Orang Asli (OA), as most still seek medical advice from village shamans and use traditional medicines for treatment [9]. Earlier reports on dengue prevalence among the forest fringe populations were published in 1956 and 1958 [10, 11]; the first reported that virtually all adults from the rural communities of ethnic Malays had been exposed to dengue [10]. The study conducted two years later, in 1958, showed that about 90% of the rural ethnic Malays and the OA in Bukit Lanong and Cameron Highlands, Pahang, had neutralizing antibodies against DENV [11]. These two studies predated the development of more accurate dengue serological assays; their results, hence, could be reflective of imperfect laboratory tools, where the assays used could have cross-reacted strongly with other arboviruses. Another study conducted 30 years later, however, showed a similar dengue seroprevalence (80%) among the forest fringe populations in Malaysia [12]. Nevertheless, more recent studies demonstrated a wide difference in dengue seroprevalence between the rural populations in East Malaysia (24%) [13] and Peninsular Malaysia (91%) [14]. These studies suggested that dengue transmission and prevalence varied over time for populations residing in the rural and forest fringe areas of Malaysia, and many factors could have contributed to the differing dengue prevalence in these populations.
The present study attempted to determine these factors by investigating the potential influence that demographic and socioeconomic variables, as well as land cover and physical environmental factors, might have on dengue IgG seroprevalence. The serosurvey, land cover analysis, and remote sensing analysis were performed in eight different OA villages distributed across the states of Peninsular Malaysia. This was a cross-sectional study using a convenience-sampling method among voluntary members of different OA villages.
## 2. Methods
### 2.1. Ethics Approval and Consent to Participate
This study was approved by the Ethics Committee of the University Malaya Medical Centre (UMMC; MEC Ref. 824.11) and the Department of Orang Asli Development, locally known as the Jabatan Kemajuan Orang Asli (JAKOA), Ministry of Rural and Regional Development Malaysia.

Prior to obtaining informed consent, members of the community were given a briefing on the study. Participants who agreed to participate provided oral consent to the trained field assistants, followed by written consent. In instances where oral consent was received but written consent could not be obtained due to illiteracy, the participants would provide either a thumbprint (for participants older than 13 years) or written and oral consent from a legal guardian. The UMMC MEC 824.11 approval permitted both the use of thumbprints and a legal guardian's signature as indicators of written consent. Study participation was voluntary, and participants could withdraw at any time during the study by informing the study coordinator.
### 2.2. Study Population and Area
The serosurvey conducted was a cross-sectional study performed among OA populations residing in eight different OA villages in the forest or forest fringe areas of Peninsular Malaysia (Figure 1). The Orang Asli (OA) constitute about 0.6% of the Malaysian population and comprise mainly 18 indigenous tribes (https://www.coac.org.my/). The sampling was performed between November 2007 and October 2010. No specific age was targeted, as participation was on a voluntary basis.

Figure 1: Map of Peninsular Malaysia showing the locations of the Orang Asli villages surveyed in the study. The red line shows the state division, while the gray line shows the division of districts in each state. The Orang Asli villages are indicated with green bubbles.

The villages in the present study were selected from the list of sites made available to the authors by JAKOA. The selection was based on the village's ease of access, its size (more than 100 individuals in the population), the villagers' receptivity to outsiders, and their nonnomadic lifestyle. As participation was on a voluntary basis, there were no inclusion or exclusion criteria for recruitment. A total of 716 participants were recruited; however, only 491 (68.6%) consented to blood withdrawal. The minimum sample size required was 246, calculated with the EpiTools epidemiological calculator (epitools.ausvet.com.au) based on a 0.2 apparent prevalence, a 0.05 estimated precision, a 0.95 confidence level, and an estimated population size of 1,600 (a sketch of this calculation is given after Table 1).

The selected villages were the Sungai Perah village, the Sungai Bumbun village, the Gurney village, the Pos Iskandar village, the Hulu Langat village, the Kuala Betis village, the Pos Betau village, and the Sungai Layau village, located in different parts of Peninsular Malaysia (Table 1, Figure 1). The villages were located mostly in the forest fringe areas, surrounded by rubber and oil palm plantations. In general, the villages had basic utility infrastructures such as water, electricity, and concrete houses. However, these were not fully utilized or evenly distributed, as many villagers could not afford the monthly utility bills; the villagers therefore depended highly on nearby rivers as a daily water source. Villages such as Pos Iskandar, Kuala Betis, Pos Betau, and Sungai Layau underwent a resettlement program, which included improvement of nearby access roads. Although concrete houses were built, there were still many structures made from bamboo, wood, bricks, and Nipah palm trees. Each village had a population of more than 100 inhabitants. Most of the villagers were unskilled laborers employed at nearby construction sites, factories, vegetable farms, oil palm plantations, and rubber plantations. The villagers also reared animals such as pigs, chickens, and ducks for food and kept monkeys, dogs, and cats as pets. These animals were mostly left to roam freely in the villages.
Table 1: Location of the surveyed villages, their accessibility, and information on dengue prevalence.

| Surveyed villages | Longitude (°E) | Latitude (°N) | N* | Dengue serology positive, N | Dengue serology positive, % | Nearby multilane roads |
| --- | --- | --- | --- | --- | --- | --- |
| Sungai Perah | 100° 54′ 72″ | 4° 24′ 288″ | 65 (43%) | 32 | 50.0 | 4 |
| Gurney | 101° 24′ 144″ | 3° 24′ 108″ | 16 (11%) | 4 | 25.0 | 4 |
| Sungai Bumbun | 101° 24′ 72″ | 2° 48′ 180″ | 16 (11%) | 4 | 25.0 | 3 |
| Pos Iskandar | 102° 36′ 180″ | 3° 0′ 216″ | 109 (73%) | 26 | 23.9 | 1 |
| Hulu Langat | 101° 54′ 36″ | 2° 54′ 144″ | 29 (19%) | 4 | 13.8 | 2 |
| Kuala Betis | 101° 42′ 324″ | 4° 54′ 0″ | 77 (51%) | 7 | 9.1 | 2 |
| Pos Betau | 101° 46′ 48″ | 4° 6′ 0″ | 91 (61%) | 4 | 4.4 | 1 |
| Sungai Layau | 104° 6′ 0″ | 1° 30′ 108″ | 88 (59%) | 2 | 2.3 | 2 |

*The percentage of participation was estimated based on an average population of 150 per village.
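A minimal sketch of the stated sample-size calculation, assuming the standard formula for estimating a proportion with a specified absolute precision, $n_0 = z^2 p (1-p) / d^2$. Whether EpiTools applied the finite-population correction for this figure is not stated, so the correction is shown as a separate step.

```java
public final class SampleSize {
    // n0 = z^2 * p * (1 - p) / d^2  (infinite-population estimate).
    public static double n0(double z, double p, double d) {
        return z * z * p * (1.0 - p) / (d * d);
    }

    // Finite-population correction: n = n0 / (1 + n0 / N).
    public static double fpc(double n0, double populationSize) {
        return n0 / (1.0 + n0 / populationSize);
    }

    public static void main(String[] args) {
        // z = 1.96 (95% confidence), p = 0.2, d = 0.05.
        double n0 = n0(1.96, 0.2, 0.05);
        System.out.printf("n0  = %.1f%n", n0);            // ~245.9, i.e., 246
        System.out.printf("fpc = %.1f%n", fpc(n0, 1600)); // ~213 with N = 1600
    }
}
```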
### 2.3. Structured Questionnaire Survey
The pretested questionnaire covered participant demographics (i.e., age, gender, and level of education attained) and socioeconomic status (i.e., occupation and household income; Table 1). The questionnaire was designed in the national language, Bahasa Malaysia, which was well understood by all of the participants; for those who were not fluent in the language, interpreters were provided by JAKOA. The questionnaire survey was performed by trained field investigators supervised by team supervisors and was conducted prior to blood withdrawal. Each answered questionnaire was given a unique identifier, and the same identifier was used for the blood samples. Completed forms were checked for accuracy, legibility, and completeness at the end of each sampling day and verified by the team supervisors. The presence of a JAKOA official was required for all the visits.
### 2.4. Blood Collection
Approximately 3 ml of venous blood was drawn from each participant by trained medical assistants and nurses. The blood samples, in vacutainer blood tubes, were kept chilled and transported to the Department of Parasitology, Faculty of Medicine, University of Malaya, after each study visit. The blood samples were immediately centrifuged at 500×g for 10 min to obtain the serum, which was then stored at −20°C until further testing.
### 2.5. Dengue IgG Capture ELISA
The IgG capture enzyme-linked immunosorbent assay (ELISA) was performed using the Standard Diagnostics Dengue IgG Capture ELISA (SD, Korea; 11EK10) according to the recommended protocol. The absorbance was read at 450/620 nm using a Tecan Sunrise spectrophotometer (Mannedorf, Switzerland). The cutoff (CO) value was determined by adding 0.3 to the negative control’s average absorbance value. An absorbance reading ≥CO value was considered positive for the presence of dengue-specific IgG.
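A minimal sketch of the cutoff rule described above; the 0.3 offset and the ≥ comparison follow the text, while the variable names are illustrative.

```java
public final class ElisaCutoff {
    // CO = mean absorbance of the negative controls + 0.3;
    // a sample is IgG-positive when its absorbance >= CO.
    public static boolean isPositive(double sampleAbs, double[] negControlAbs) {
        double mean = 0.0;
        for (double a : negControlAbs) mean += a;
        mean /= negControlAbs.length;
        double cutoff = mean + 0.3;
        return sampleAbs >= cutoff;
    }
}
```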
### 2.6. Land Cover Analysis
The villages' locations were determined using Google Earth 5.2.1 (https://www.google.com/Earth) as previously described [14]. Land cover assessment was made within a 2 km radius of the center of each village. Land cover features were divided into three categories: (1) water body, (2) built-up, and (3) vegetation. The water body category was represented by rivers, streams, and ponds. Built-up consisted of residential, commercial, and industrial areas, identified based on building design and location. For instance, industrial buildings would normally have a wider and bigger rooftop and be located in the middle of a large clearing; a commercial area is usually found in a city center, while a residential area is located at the city outskirts and consists of fairly homogeneous structures. Categorization of the built-up areas was also assisted by Google Earth's denomination and by physical visits to the villages. Vegetation was represented mostly by forests and plantations; a plantation site was observed as patches of a distinct homogeneous pattern of greenery consisting of oil palm and rubber plantations. These observations were also supported by the physical visits to the villages and the surrounding areas. The estimation of land cover area was performed using GE-Path 1.4.4 (http://www.sgrillo.net/googleearth/gepath.htm) by creating a grid map overlaid on the land cover map, enabling a quantitative assessment. One grid cell was equivalent to 1 km², and the study surveyed an area within a 2 km radius of the center of each village.
### 2.7. Remote Sensing Environmental-Derived Data
The Geographical Information System (GIS) was used to integrate the survey data with remotely sensed satellite environmental data. The data were typically provided as raster files, or arrays of cells, in which each grid cell, or pixel, had a certain value depending on how the image was captured and what it represented. Three environmental data sets were used in the present study: (1) the monthly average land surface temperature (LST); (2) the normalized difference vegetation index (NDVI); and (3) the digital elevation model (DEM). The land surface temperature data were obtained at 30 arcsec (∼1 km) resolution and downloaded from the WorldClim website (http://www.worldclim.org). Temperature records were produced from global weather stations for the period of 1950 to 2000 and were interpolated using a thin-plate smoothing spline algorithm [15].

The normalized difference vegetation index measures vegetation density. It used a time series at a nominal 1 km spatial resolution from Moderate Resolution Imaging Spectroradiometer (MODIS) data downloaded from NASA's Earth Observing System (EOS) data gateway (http://modis.gsfc.nasa.gov/data/dataprod/index.php). The normalized difference vegetation index was generated using a novel spline-based algorithm following the methods described by Scharlemann et al. [16]. The algorithm was tested on artificial data generated with randomly selected amplitudes and phases, and it provided an accurate estimate of the input variables under all conditions. The algorithm was then applied to produce layers that captured the seasonality of the MODIS data. The digital elevation model information was generated from Radarsat data obtained from the Department of Survey and Mapping Malaysia.

A point estimate for each village was extracted from each environmental layer following a temporal Fourier analysis. The estimates were transformed and analyzed using ESRI ArcGIS V9.3 software. Univariate (the Wald test) and multivariate (likelihood ratio test) logistic regression analyses with a stepwise procedure were performed to examine the relationship between the remote sensing derived environmental variables and dengue seropositivity using STATA/IC 10.0 (StataCorp LP, College Station, Texas, USA).
### 2.8. Statistical Analyses
Statistical analyses were conducted using IBM SPSS 13.0 for Windows (Chicago, IL, USA). The initial data entry was cross-checked regularly to ensure that data were correctly and consistently entered. Percentages were used to describe descriptive data, such as the seroprevalence of dengue in the studied population according to village, age, and gender. Univariate analysis was used to assess the potential associations between dengue seropositivity (the outcome of interest) and the sociodemographic characteristics. Only variables that were significantly associated in the univariate model were included in a logistic regression analysis using a backward elimination model. A significance level of p < 0.05, together with odds ratios (OR) and 95% confidence intervals (95% CI), was used for all tests to indicate the strength of the association between dengue seropositivity and the respective variables.
## 2.1. Ethics Approval and Consent to Participate
This study was approved by the Ethics Committee of the University Malaya Medical Centre (UMMC; MEC Ref. 824.11) and the Department of Orang Asli Development or locally known as the Jabatan Kemajuan Orang Asli (JAKOA), Ministry of Rural and Regional Development Malaysia.Prior to obtaining informed consent, members of the community were given a briefing on the study. Participants who agreed to participate provided an oral consent to the trained field assistants, followed by a written consent. In instances where oral consent was received, but written consent could not be obtained due to illiteracy, the participants would provide either a thumbprint (for participants older than 13 years old) or written and oral consent from the legal guardian. The UMMC MEC 824.11 approval permitted both the use of thumbprints and a legal guardian’s signature as indicators of written consent. Study participation was voluntary, and participants could withdraw at any time during study duration by informing the study coordinator.
## 2.2. Study Population and Area
The serosurvey conducted was a cross-sectional study performed among OA populations residing in eight different OA villages in the forest or forest fringe areas of Peninsular Malaysia (Figure1). The Orang Asli (OA) constituted about 0.6% of the Malaysian population and comprised of mainly 18 indigenous tribes (https://www.coac.org.my/). The sampling was performed between November 2007 and October 2010. No specific age was targeted as participation was on voluntary basis.Figure 1
Map of Peninsular Malaysia showing the locations of theOrang Asli villages surveyed in the study. The red line shows the state division, while the gray line shows the division of districts in each state. The Orang Asli villages are indicated with green bubbles.The villages in the present study were selected from the list of sites made available to the authors by JAKOA. The selection was based on the village’s ease of access, its size (more than 100 individuals in the population), the villagers’ receptivity to outsiders, and their nonnomadic lifestyle. As participation was on a voluntary basis, there were no inclusion or exclusion criteria for recruitment. A total of 716 participants were recruited; however, only 491 (68.6%) consented to blood withdrawal. The minimum number of sample size required was 246, calculated by EpiTools epidemiological calculator (epitools.ausvet.com.au) based on a 0.2 apparent prevalence, 0.5 estimated precision, 0.95 confidence level, and an estimated population size of 1,600.The selected villages were the Sungai Perah village, the Sungai Bumbun village, the Gurney village, the Pos Iskandar village, the Hulu Langat village, the Kuala Betis village, the Pos Betau village, and the Sungai Layau village, located in different parts of Peninsular Malaysia (Table1, Figure 1). The villages were located mostly in the forest fringe areas surrounded by rubber and oil palm plantations. In general, the villages had basic utility infrastructures such as water, electricity, and concrete houses. However, they were not fully utilized or evenly distributed as many could not afford the monthly utility bills. As such, the villagers depended highly on nearby rivers for daily water source. Villages such as Pos Iskandar, Kuala Betis, Pos Betau, and Sungai Layau underwent a resettlement program, which included improvement of nearby access roads. Although concrete houses were built, there were still many structures made from bamboo, wood, bricks, and Nipah palm trees. Each village had a population of more than 100 inhabitants. Most of the villagers were unskilled laborers employed at nearby construction sites, factories, vegetable farms, oil palm, and rubber plantations. The villagers also reared animals such as pigs, chickens, and ducks for food and kept monkeys, dogs, and cats as pets. These animals were mostly left to roam freely in the villages.Table 1
Table 1: Location of the surveyed villages, their accessibility, and information on dengue prevalence.
| Surveyed village | Longitude (°E) | Latitude (°N) | N∗ | Dengue positive (No) | Dengue positive (%) | Nearby multilane roads |
|---|---|---|---|---|---|---|
| Sungai Perah | 100° 54.72′ | 4° 24.288′ | 65 (43%) | 32 | 50.0 | 4 |
| Gurney | 101° 24.144′ | 3° 24.108′ | 16 (11%) | 4 | 25.0 | 4 |
| Sungai Bumbun | 101° 24.72′ | 2° 48.180′ | 16 (11%) | 4 | 25.0 | 3 |
| Pos Iskandar | 102° 36.180′ | 3° 0.216′ | 109 (73%) | 26 | 23.9 | 1 |
| Hulu Langat | 101° 54.36′ | 2° 54.144′ | 29 (19%) | 4 | 13.8 | 2 |
| Kuala Betis | 101° 42.324′ | 4° 54.0′ | 77 (51%) | 7 | 9.1 | 2 |
| Pos Betau | 101° 46.48′ | 4° 6.0′ | 91 (61%) | 4 | 4.4 | 1 |
| Sungai Layau | 104° 6.0′ | 1° 30.108′ | 88 (59%) | 2 | 2.3 | 2 |

∗The percentage of participation was estimated based on an average population of 150 per village. No: number positive.
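The reported minimum sample size can be cross-checked against the standard formula for estimating a proportion, n = Z²p(1 − p)/d². The following is a minimal sketch, assuming EpiTools applies this formula with Z = 1.96 for the 0.95 confidence level, p = 0.2 apparent prevalence, and d = 0.05 precision; it is an illustration, not the calculator's actual implementation.

```python
from math import ceil

def min_sample_size(p, d, z=1.96):
    """Minimum n to estimate a prevalence p with absolute precision d
    at ~95% confidence, ignoring the finite population correction."""
    return ceil(z**2 * p * (1 - p) / d**2)

print(min_sample_size(p=0.2, d=0.05))  # 246, matching the reported minimum
```

Applying the finite population correction for N = 1,600 would lower this requirement, so 246 is the conservative figure.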
## 2.3. Structured Questionnaire Survey
The pretested questionnaire collected information on participant demographics (i.e., age, gender, and level of education attained) and socioeconomic status (i.e., occupation and household income; Table 1). The questionnaire was designed in the national language, Bahasa Malaysia, which was well understood by all of the participants. For those who were not fluent in the language, interpreters were provided by JAKOA. The questionnaire survey was performed by trained field investigators supervised by team supervisors. It was performed prior to blood withdrawal. Each answered questionnaire was given a unique identifier, and the same identifier was used for the blood samples. Completed forms were checked for accuracy, legibility, and completeness at the end of each sampling day and verified by the team supervisors. The presence of a JAKOA official was required for all of the visits.
## 2.4. Blood Collection
Approximately 3 ml of venous blood was drawn from each participant by trained medical assistants and nurses. The blood samples, collected in vacutainer blood tubes, were kept chilled and transported to the Department of Parasitology, Faculty of Medicine, University of Malaya, after each study visit. The blood samples were immediately centrifuged at 500×g for 10 min to obtain the serum. The serum was then stored at −20°C until further testing.
## 2.5. Dengue IgG Capture ELISA
The IgG capture enzyme-linked immunosorbent assay (ELISA) was performed using the Standard Diagnostics Dengue IgG Capture ELISA (SD, Korea; 11EK10) according to the recommended protocol. The absorbance was read at 450/620 nm using a Tecan Sunrise spectrophotometer (Mannedorf, Switzerland). The cutoff (CO) value was determined by adding 0.3 to the negative control’s average absorbance value. An absorbance reading ≥CO value was considered positive for the presence of dengue-specific IgG.
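The cutoff rule is simple arithmetic and can be expressed directly in code. The following is a minimal sketch of the positivity call described above; the absorbance values shown are hypothetical, and the 0.3 offset follows the kit protocol cited in the text.

```python
def classify_dengue_igg(sample_od, negative_control_ods):
    """Call a sample positive if its absorbance (450/620 nm) meets the
    assay cutoff: mean negative-control absorbance + 0.3."""
    cutoff = sum(negative_control_ods) / len(negative_control_ods) + 0.3
    return sample_od >= cutoff

# Hypothetical readings: cutoff = 0.10 + 0.3 = 0.40
print(classify_dengue_igg(0.82, [0.11, 0.09, 0.10]))  # True
print(classify_dengue_igg(0.25, [0.11, 0.09, 0.10]))  # False
```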
## 2.6. Land Cover Analysis
The villages' locations were determined using Google Earth 5.2.1 (https://www.google.com/Earth) as previously described [14]. Land cover assessment was made within a 2 km radius of the center of each village. Land cover features were divided into three categories: (1) water body, (2) built-up, and (3) vegetation. The water body category was represented by rivers, streams, and ponds. Built-up consisted of residential, commercial, and industrial areas. These were identified based on building design and location. For instance, industrial buildings would normally have a wider and bigger rooftop and be located in the middle of a large clearing. A commercial area is usually found in a city center, while a residential area is located at the city outskirts and consists of fairly homogeneous structures. Categorization of the built-up areas was also assisted by Google Earth's denominations and by physical visits to the villages. Vegetation was represented mostly by forests and plantations. A plantation site was observed as patches with a distinct homogeneous pattern of greenery, consisting of oil palm and rubber plantations. These observations were also supported by the physical visits to the villages and the surrounding areas. The estimation of land cover area was performed using GE-Path 1.4.4 (http://www.sgrillo.net/googleearth/gepath.htm) by creating a grid map overlaid on the land cover map, enabling a quantitative assessment. One grid cell was equivalent to 1 km², and the study surveyed an area of 2 km radius from the center of each village.
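As a minimal sketch of this grid-overlay estimate, assuming each ~1 km² cell is assigned the single land cover class that dominates it (the labels and cell counts below are hypothetical illustrations, not the study's data):

```python
from collections import Counter

def coverage_percentages(cell_labels):
    """Estimate land cover composition from a grid overlay in which
    each ~1 km2 cell carries one dominant land cover label."""
    counts = Counter(cell_labels)
    total = sum(counts.values())
    return {label: round(100 * n / total, 1) for label, n in counts.items()}

# A 2 km radius covers ~12.6 km2, i.e., roughly 13 one-km2 grid cells
cells = ["forest"] * 5 + ["agriculture"] * 4 + ["residential"] * 2 + ["river"] * 2
print(coverage_percentages(cells))
# {'forest': 38.5, 'agriculture': 30.8, 'residential': 15.4, 'river': 15.4}
```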
## 2.7. Remote Sensing Environmental-Derived Data
The Geographical Information System (GIS) was used to integrate survey data with remotely sensed satellite environmental data. The data were typically provided as raster files, i.e., arrays of cells in which each grid cell, or pixel, holds a value depending on how the image was captured and what it represents. Three environmental datasets were used in the present study: (1) the monthly average land surface temperature (LST), (2) the normalized difference vegetation index (NDVI), and (3) the digital elevation model (DEM). The land surface temperature data were obtained at 30 arcsec (∼1 km) resolution and downloaded from the WorldClim website (http://www.worldclim.org). Temperature records were produced from global weather stations for the period of 1950 to 2000 and were interpolated using a thin-plate smoothing spline algorithm [15]. The normalized difference vegetation index measures vegetation density. It used a time series at a nominal 1 km spatial resolution from Moderate Resolution Imaging Spectroradiometer (MODIS) data that were downloaded from NASA's Earth Observing System (EOS) data gateway (http://modis.gsfc.nasa.gov/data/dataprod/index.php). The normalized difference vegetation index was generated using a novel spline-based algorithm following the methods described by Scharlemann et al. [16]. The algorithm was tested on artificial data generated using randomly selected values of both amplitudes and phases, and it provided an accurate estimate of the input variables under all conditions. The algorithm was then applied to produce layers that captured the seasonality of the MODIS data. The digital elevation model was generated from Radarsat data obtained from the Department of Survey and Mapping Malaysia. A point estimate for each village was extracted from each environmental layer following a temporal Fourier analysis. The estimates were transformed and analyzed using ESRI ArcGIS V9.3 software. Univariate (the Wald test) and multivariate (likelihood ratio test) logistic regression analyses with a stepwise procedure were performed to examine the relationship between the remote sensing derived environmental variables and dengue seropositivity using STATA/IC 10.0 (StataCorp LP, College Station, Texas, USA).
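For reference, NDVI is conventionally computed per pixel from near-infrared and red surface reflectance as (NIR − Red)/(NIR + Red). The minimal sketch below implements this standard definition, not the spline-based seasonal layers of [16]; the pixel values are hypothetical.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index per pixel; values fall in
    [-1, 1], with denser green vegetation pushing toward +1."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Hypothetical reflectances: a densely vegetated pixel vs. a built-up pixel
print(ndvi([0.45, 0.20], [0.05, 0.18]))  # ~[0.8, 0.053]
```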
## 2.8. Statistical Analyses
Statistical analyses were conducted using IBM SPSS 13.0 for Windows (Chicago, IL, USA). The initial data entry was crosschecked regularly to ensure that data were correctly and consistently entered. Percentages were used to describe descriptive data, such as the seroprevalence of dengue in the studied population according to village, age, and gender. Univariate analysis was used to assess potential associations between dengue seropositivity (the outcome of interest) and the sociodemographic characteristics. Only variables that were significantly associated in the univariate model were included in a logistic regression analysis using a backward elimination model. A significance level of p<0.05 was used for all tests, with odds ratios (OR) and 95% confidence intervals (95% CI) reported to indicate the strength of the association between dengue seropositivity and the respective variables.
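The backward elimination step can be illustrated with a short sketch. This is a minimal illustration, assuming a p-value-driven elimination rule with a 0.05 threshold and using statsmodels rather than SPSS; the synthetic predictors (`age_ge13`, `low_income`, `noise`) are hypothetical stand-ins, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_eliminate(X, y, threshold=0.05):
    """Refit a logistic model, dropping the least significant predictor
    until every remaining p-value is below the threshold."""
    cols = list(X.columns)
    while cols:
        model = sm.Logit(y, sm.add_constant(X[cols])).fit(disp=0)
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < threshold:
            return model
        cols.remove(worst)
    return None

# Synthetic example: two real effects plus one pure-noise predictor
rng = np.random.default_rng(0)
X = pd.DataFrame({"age_ge13": rng.integers(0, 2, 500),
                  "low_income": rng.integers(0, 2, 500),
                  "noise": rng.normal(size=500)})
logit_p = -1.5 + 1.5 * X["age_ge13"] + 0.8 * X["low_income"]
y = (rng.random(500) < 1 / (1 + np.exp(-logit_p))).astype(int)

final = backward_eliminate(X, y)
if final is not None:
    print(np.exp(final.params))  # exp(B), i.e., odds ratios, of retained terms
```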
## 3. Results
The present study recruited 491 individuals from eight different OA villages in Peninsular Malaysia (Figure 1). Their ages ranged from 1 to 82 years, with a median age of 11 years; 1.2% were ≤4 years old, 2.2% were 5-6 years old, 70.1% were aged 7–12 years, 3.5% were 13–17 years old, and 23.0% were aged ≥18 years old. Study participation was on a voluntary basis. The highest participation was obtained from the Pos Iskandar village (n = 109), followed by the Pos Betau village (n = 91), the Sungai Layau village (n = 88), and the Kuala Betis village (n = 77). There were 65 participants from the Sungai Perah village and 30 from the Hulu Langat village. The lowest participation was obtained from the Gurney (n = 16) and Sungai Bumbun (n = 16) villages (Table 1).
### 3.1. Dengue IgG Seroprevalence
Results from the dengue IgG serological assays indicated that at least 17% (n = 83) of the studied population was positive for dengue IgG. The highest seropositivity was observed in the Sungai Perah population, with 50% (n = 32) seropositivity (Table 1). This was followed by the Gurney and Sungai Bumbun villages, each with about 25% (n = 4) seropositivity. About 23.9% (n = 26) of the Pos Iskandar villagers and 13.8% (n = 4) of the Hulu Langat villagers were also dengue IgG positive. Less than 10% dengue IgG seropositivity was observed among volunteers from the Kuala Betis (9.1%), Pos Betau (4.4%), and Sungai Layau (2.3%) villages (Table 1).
### 3.2. Dengue IgG Seropositivity and Demographic and Socioeconomic Risk Factors
The demographic and socioeconomic variables analyzed were gender, age, level of education, occupational status, and monthly household income (Table 2). Univariate analysis of these risk factors identified females, those aged 13 years and above, those with no formal education, and working participants as more likely to be seropositive for dengue (Table 2). About 20.2% of female participants and 12.4% of male participants were dengue IgG seropositive (OR = 1.78; 95% CI = 1.08–2.95; p=0.023; Table 2). Dengue IgG seropositivity also differed significantly between those aged ≤12 years and those aged ≥13 years: about 10.6% of participants aged ≤12 years old and 34.6% of those aged ≥13 years old were positive for dengue IgG (OR = 4.45; 95% CI = 2.71–7.26; p<0.001). Further analysis of the age groups showed that dengue IgG was found only in those aged more than 4 years old, with those aged 5-6, 7–12, 13–17, and ≥18 showing 9.1%, 10.8%, 11.8%, and 38.1% seropositivity to dengue, respectively. There was a significant age-dependent increase of dengue seropositivity among the volunteers (χ² = 47.26; p<0.001).
Table 2: Analysis of potential risk factors associated with dengue seroprevalence among the Orang Asli communities in Peninsular Malaysia (N = 491).
| Variable | N | Dengue positive (No) | Dengue positive (%) | OR (95% CI) | p value |
|---|---|---|---|---|---|
| **Gender** | | | | | |
| Female | 282 | 57 | 20.2 | 1.78 (1.08–2.95) | 0.023 |
| Male | 209 | 26 | 12.4 | 1 | |
| **Age (years)∗** | | | | | |
| ≥13 years | 130 | 45 | 34.6 | 4.43 (2.71–7.26) | <0.001 |
| ≤12 years | 361 | 38 | 10.6 | 1 | |
| **Level of education** | | | | | |
| No formal education | 123 | 41 | 33.3 | 2.92 (2.00–4.27) | <0.001 |
| Formal education | 368 | 42 | 11.4 | 1 | |
| **Occupational status** | | | | | |
| Working | 62 | 21 | 33.9 | 2.34 (1.54–3.56) | <0.001 |
| Not working | 429 | 62 | 14.5 | 1 | |
| **Household income (RM/month)∗** | | | | | |
| <RM 500 | 329 | 68 | 20.7 | 2.23 (1.32–3.78) | <0.001 |
| >RM 500 | 162 | 15 | 9.3 | 1 | |

∗Variables that remained significantly associated with dengue prevalence following the multivariate analysis. A significant association is indicated by p<0.05. N: number examined; No: number positive; %: percentage positive. The reference group is marked by OR = 1.

Results also showed that participants with no formal education, i.e., those who did not complete their six years of primary school, had significantly higher seropositivity for dengue in comparison to those with a formal education, i.e., those who had completed their six-year primary education (OR = 2.92; 95% CI = 2.00–4.27; p<0.001; Table 2). In addition, those who held jobs such as farming, hunting, and unskilled labor, or more professional occupations such as teachers, nurses, and business entrepreneurs, had significantly higher dengue seropositivity in comparison to those who did not hold a job (OR = 2.34; 95% CI = 1.54–3.56; p<0.001). Analysis of the monthly income showed that those who earned a cumulative family income of less than MYR500 (∼USD150) were also more likely to have had dengue (OR = 2.23; 95% CI = 1.32–3.78; p=0.002). Following a logistic regression analysis, only two groups of participants remained significantly associated with dengue IgG seropositivity: those who earned a cumulative monthly income of less than MYR500 (∼USD150) and those aged 13 years and above. These groups were 2.2 and 4.4 times more likely, respectively, to have had dengue in the past (95% CI = 1.86–6.60; p<0.001; Table 2).
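The crude odds ratios in Table 2 can be reproduced from the 2×2 counts with the standard formula OR = ad/bc and the Wald interval exp(ln OR ± 1.96·SE), where SE = √(1/a + 1/b + 1/c + 1/d). A minimal sketch, using the gender counts from Table 2 (the helper function is ours, not the study's code):

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table, where
    a/b = exposed positive/negative and c/d = unexposed positive/negative."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Gender, from Table 2: 57 of 282 females and 26 of 209 males seropositive
print(odds_ratio_ci(57, 282 - 57, 26, 209 - 26))
# -> (1.78..., 1.08..., 2.95...), matching the reported 1.78 (1.08-2.95)
```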
### 3.3. Seroprevalence of Dengue and Land Cover Analysis
In addition to investigating the potential association between dengue seroprevalence and the different demographic and socioeconomic variables, the study also explored the potential influence of land use or land cover on the prevalence of dengue. The land cover variables analyzed were built-up areas, consisting of residential, industrial, and commercial areas; vegetation, consisting of forest and agriculture; and water bodies, consisting of lakes, rivers, and abandoned mine pools and ponds. The land cover analysis covered about a 2 km radial distance from the center of each village. Analysis of the surveyed areas showed that all of the villages were located near a river. Rivers remained an important source of water for the OA despite the piped water facilities available at the villages, as many could not afford the utility bills. The Sungai Perah village, which had the highest dengue seropositivity (50%; Table 1), was the only village located near an industrial area (Table 3), and both the Sungai Perah and Sungai Bumbun villages were located near a commercial site. The highest built-up content was found around the Gurney village, followed by the Sungai Perah village. In addition to the presence of a river, two other villages had a unique water body nearby: the Sungai Bumbun village was located near an abandoned tin mine pool, and a freshwater lake lies near the Pos Iskandar village. The freshwater lake, Tasik Bera, is the largest freshwater swamp in Peninsular Malaysia, spanning about 35 km long and 20 km wide. The lake supports an array of animal and plant life and is an important source of livelihood for the Pos Iskandar villagers. The lake is often frequented by visitors and contributes significantly to the Pos Iskandar village's economy.
Table 3: Percentage of coverage for the different types of land cover surrounding a 2 km radial distance from the center of each surveyed village.
| Surveyed village | Residential | Industrial | Commercial | Lake | River | Mine pool | Pond | Forest | Agriculture |
|---|---|---|---|---|---|---|---|---|---|
| Sungai Perah | 14.6 | 2.3 | 1.5 | — | 13.8 | — | — | 29.3 | 38.5 |
| Gurney | 29.6 | —∗ | — | — | 12.6 | — | 5.9 | 14.9 | 37.0 |
| Sungai Bumbun | 11.5 | — | 3.1 | — | 4.6 | 50.0 | 15.4 | 11.6 | 3.8 |
| Pos Iskandar | 3.8 | — | — | 30.8 | 5.4 | — | — | 46.2 | 13.8 |
| Hulu Langat | 14.2 | — | — | — | 2.5 | — | — | 29.2 | 54.1 |
| Kuala Betis | 7.7 | — | — | — | 2.3 | — | — | 66.9 | 23.1 |
| Pos Betau | 5.4 | — | — | — | 2.3 | — | — | 38.5 | 53.8 |
| Sungai Layau | 8.0 | — | — | — | 24.8 | — | 12.0 | 23.2 | 32.0 |

∗Not available.

In addition to built-up areas and water bodies, the extent of vegetation was also estimated. Vegetation was vast throughout the surveyed areas, typical of equatorial rainforest, consisting mainly of forested and agricultural areas. The highest forest content was observed around the Kuala Betis village, followed by the Pos Iskandar village. The least vegetation and forest was observed around the Sungai Bumbun village, followed by the Gurney village, where the latter also showed the highest content of built-up areas (Table 3). The lowest dengue seroprevalence was observed among the Sungai Layau village residents. The village is located about 345 km southeast of Kuala Lumpur (Figure 1) and was mostly surrounded by an oil palm plantation. There was no significant difference in the land cover content between the Sungai Layau village and the Sungai Perah village (which exhibited the highest dengue prevalence) except for the built-up areas (Table 3). The Sungai Perah village had a much higher built-up content (18.4%) than the Sungai Layau village (8.0%), with both industrial and commercial areas nearby, which the latter lacked. Although there was much less built-up presence, the Sungai Layau village was more modern and developed than Sungai Perah and the other villages. Unlike the other villages, it has a health clinic and a primary and secondary school. Many of the villagers completed their tertiary education and hold professional jobs. Univariate analysis of the investigated variables showed that the different types of built-up areas and the presence of a lake or pond were significantly associated with dengue seroprevalence. However, multivariate analyses using logistic regression of these variables showed that only residential area (OR = 1.106; 95% CI = 1.041–1.175; p<0.001) and lake (OR = 0.152; 95% CI = 0.067–0.348; p<0.001) were significantly associated with dengue seroprevalence (Table 4). Vegetation did not appear to correlate with dengue in the present study. The number of multilane roads, on the other hand, was associated with dengue prevalence (OR = 1.821; 95% CI = 1.471–2.252; p<0.001), possibly indicating the role of movement or mobility in the spreading of the disease.
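As a quick arithmetic check on Table 3, each village's land cover shares should sum to roughly 100%, and the 18.4% built-up figure cited above for Sungai Perah is the sum of its residential, industrial, and commercial shares. A minimal sketch over two of the rows, with dashes treated as zero:

```python
# Rows from Table 3, ordered: residential, industrial, commercial, lake,
# river, mine pool, pond, forest, agriculture ("-" entries taken as 0)
table3 = {
    "Sungai Perah": [14.6, 2.3, 1.5, 0, 13.8, 0, 0, 29.3, 38.5],
    "Sungai Layau": [8.0, 0, 0, 0, 24.8, 0, 12.0, 23.2, 32.0],
}
for village, row in table3.items():
    built_up = sum(row[:3])  # residential + industrial + commercial
    print(village, round(sum(row), 1), round(built_up, 1))
# Sungai Perah 100.0 18.4
# Sungai Layau 100.0 8.0
```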
Table 4: Multivariate analysis of potential land cover risk factors associated with dengue seroprevalence among the Orang Asli communities living in the forest fringe areas of Peninsular Malaysia.
| Land cover variable | p value | Exp(B) | 95% CI for Exp(B), lower | 95% CI for Exp(B), upper |
|---|---|---|---|---|
| Residential | <0.001 | 1.106 | 1.041 | 1.175 |
| Industrial | 0.171 | 0.354 | 0.080 | 1.564 |
| Commercial | 0.061 | 1.884 | 0.971 | 3.657 |
| Lake | <0.001 | 0.152 | 0.067 | 0.348 |
| Pond | 0.146 | 0.906 | 0.792 | 1.035 |
| Multilane roads | <0.001 | 1.821 | 1.471 | 2.252 |
| Land surface temperature (LST) | 0.050 | 1.107 | 0.860 | 1.400 |
| Land elevation (DEM) | 0.040 | 2.210 | 1.510 | 2.630 |
| Normalized difference vegetation index (NDVI) | 0.130 | 0.976 | 0.940 | 1.010 |

A significant association is indicated by p<0.05.
### 3.4. Remote Sensing Environmental Data
Three environmental datasets were assessed in the present study: land surface temperature (LST), land elevation using the digital elevation model (DEM), and the level of vegetation represented by the normalized difference vegetation index (NDVI). A univariate analysis was performed, followed by a multivariable logistic regression model, to determine the potential influence of physical environmental factors on disease prevalence and possibly spread. Results suggested that an LST exceeding 40°C (OR = 1.107, 95% CI = 0.86–1.40, p=0.05) and an elevation less than 50 meters above sea level (OR = 2.210, 95% CI = 1.51–2.63, p=0.04) were significantly associated with >20% dengue IgG seropositivity, such as in the Sungai Perah, Gurney, Sungai Bumbun, and Pos Iskandar villages. Similar to the previous analyses, NDVI showed no significant association with dengue prevalence (OR = 0.976, 95% CI = 0.94–1.01, p=0.13), suggesting its minimal role in influencing dengue transmission (Table 4).
## 4. Discussion
Activities such as the opening of oil palm and rubber plantations, timber extraction, and eco-tourism have resulted in substantial land surface changes in many tropical and subtropical regions of the world. To ease travel and the transport of forest resources, workers, and tourists, highways and multilane roads were built, contributing to an increase in population movement to and from the forest fringe areas. Although the increase in mobility helped to boost economic activities, it may also have inadvertently increased the chances for transmission of infectious diseases. Dengue, a mosquito-borne disease that is hyperendemic in Malaysia, remains a serious public health threat. Although notification of the disease is mandated within 24 hours of detection, it is most likely still underreported, especially in populations where minimal health services are available, such as among the OA or those living in the forest fringe areas. Only a few studies have been undertaken to investigate the prevalence of dengue in these populations [10–13], and none of them attempted to determine potential factors that could influence disease transmission and prevalence. The earlier studies conducted in 1956 [10] and 1958 [11] reported a high prevalence of dengue (>90%) among rural ethnic Malays and OA in Pahang and an 80% prevalence among the forest fringe populations [12]. The prevalence increased to 91.6% in 2011 in rural areas [14] but varied significantly (24% prevalence) among the forest fringe populations of East Malaysia in the year 2006 [13]. Following the report in 1986 [12], the present study attempted to determine the prevalence of dengue among the forest fringe populations in Peninsular Malaysia and associate it with demographic and socioeconomic factors, in addition to land cover and aspects of the environment such as LST, land elevation, and vegetation. The present study observed a low prevalence of dengue (17%) among the forest fringe populations in comparison to those reported earlier. An even lower prevalence of dengue (4.9%) was reported more recently, where the study also showed a significantly higher presence of antibodies against Japanese encephalitis (48.4%) among the OA and some presence of IgG antibodies against Zika (13.2%) [17]. This serosurvey suggested that dengue prevalence had decreased over time among those living in forest fringe areas and that it differs significantly from that of the rural areas. The increasing dengue trend in the rural areas was estimated to reach levels as high as those in the urban areas, if not higher [18, 19]. Although the study by Schmidt et al. showed that the lack of a piped water supply contributed to higher dengue prevalence in the rural areas of Vietnam, the same was not observed among the villages surveyed in the present study [19]. Despite the overall low dengue seroprevalence in the present study, there was a significant difference in dengue exposure across the surveyed villages, ranging from 2% in the Sungai Layau village to 50% in the Sungai Perah village. Upon comparison of these two villages, no particular differences were observed, except for the presence of an industrial area near Sungai Perah. The industrial area, however, was not shown to be a significant contributor to dengue seroprevalence, in contrast to the presence and size of residential areas.
Based on the assumption that residential area reflected the number of families or individuals present in each village, this could indicate the importance of density or crowding in the transmission and prevalence of dengue in the forest fringe areas [20]. Our study also identified mobility as a cofactor that contributed to high dengue seropositivity. This was reflected by the higher dengue seropositivity detected among the Pos Iskandar participants in comparison to the Kuala Lipis participants, where the Pos Iskandar village was located near Tasik Bera, the largest natural freshwater lake in Malaysia. The lake area is inhabited by the OA of the Semelai tribe, who are also known as the lake people, and is frequently visited by eco-tourists. Due to the traffic and population movement to Tasik Bera, the risk of introducing DENV from either asymptomatic or viremic individuals increased, especially with the ample presence of vector mosquitoes in the surrounding areas. The role of mobility in the dispersion and prevalence of dengue was made more evident when the Pos Iskandar village was compared to the Kuala Lipis village, where both were located near only one multilane road, suggesting similar accessibility; however, only 5.3% of the Kuala Lipis participants were exposed to dengue in comparison to 23.9% of the Pos Iskandar participants. The suggestion that mobility plays an important role in the spread of diseases is consistent with earlier studies [21, 22]. In addition to population crowding and mobility, previous studies have shown that demographic and socioeconomic attributes contributed significantly to the prevalence of dengue. Lower socioeconomic status and age have been consistently associated with dengue in a number of studies [20, 23–25]. In impoverished areas such as Recife, Brazil, dengue prevalence was as high as 59% among children ≤5 years old [20]. Similarly, the present study showed that dengue was inversely associated with wealth and socioeconomic status even among the forest fringe populations of OA and that increasing age was a significant variable associated with dengue. There was an age-dependent increase of prevalence in the present study, with those above 18 years old displaying the highest seropositivity to dengue (38.1%), and those earning the bare minimum of ∼USD150 were more likely to be exposed to the disease. In our study, better accessibility to healthcare and education could be important factors in reducing dengue transmission. Empowering the community with disease knowledge and prevention practices, hence, would likely assist in curbing the spread of dengue [26]. Environmental variables such as LST and elevation were also found to contribute to the prevalence of dengue. Land surface temperature and elevation played a role by possibly influencing the vectorial capacity of the dengue vectors, the Aedes sp. mosquitoes [27–30], where the abundance of A. aegypti has been shown to decrease significantly at elevations higher than 1,700 m [28, 29]. Higher LST was also associated with a high occurrence of severe dengue in four provinces in Thailand [30]. These two environmental factors could be included in a simple predictive algorithm to determine dengue expansion, just as they have been used in the development of a K-map model to visualize dengue hot spot areas [31].
Vegetation, however, was not shown to be a significant contributor to the prevalence of dengue in the present study, despite the perception of a higher abundance of mosquitoes in highly vegetated areas and previously reported associations between greenery and the number of dengue cases [32]. Despite the presented results, the present study would have benefited from more sophisticated land survey methods, such as unmanned aerial vehicles, with which higher-resolution images could be obtained. The inclusion of climatic variables such as air temperature and rainfall should be considered in future studies, as should mosquito population and density. In addition, future studies should also address issues of dengue cross-reactivity with other arboviruses, such as Zika and Japanese encephalitis, in serological assays [17]. Since the study was performed approximately 10 years ago, it is possible that many societal and climate changes have occurred over the years, which could have affected the present dengue serological status among the OA. This, however, remains to be assessed. Further studies, hence, are needed to ascertain the degree of influence that the examined variables have on dengue transmission and prevalence in these forest fringe OA populations.
## 5. Conclusion
The present study highlighted the prevalence of dengue among the underserved and economically marginalized OA population of Malaysia. Variables such as population mobility, household density, age, and lower socioeconomic status are among the risk factors for dengue identified in the study. In addition, environmental factors consisting of LST and elevation appeared to also influence the prevalence of dengue. These factors, however, are not exclusive to populations living in forest fringe areas but could also hold in other underserved or economically marginalized populations. Better access to healthcare and empowerment with disease knowledge are recommended to ensure the better success of preventive measures against dengue in these populations.
---
*Source: 1019238-2020-05-25.xml* | 1019238-2020-05-25_1019238-2020-05-25.md | 59,904 | Possible Factors Influencing the Seroprevalence of Dengue among Residents of the Forest Fringe Areas of Peninsular Malaysia | Juraina Abd-Jamil; Romano Ngui; Syahrul Nellis; Rosmadi Fauzi; Ai Lian Yvonne Lim; Karuthan Chinna; Chee-Sieng Khor; Sazaly AbuBakar | Journal of Tropical Medicine
(2020) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2020/1019238 | 1019238-2020-05-25.xml | ---
## Abstract
Dengue is an endemic mosquito-borne viral disease prevalent in many urban areas of the tropic, especially the Southeast Asia. Its presence among the indigenous population of Peninsular Malaysia (Orang Asli), however, has not been well described. The present study was performed to investigate the seroprevalence of dengue among the Orang Asli (OA) residing at the forest fringe areas of Peninsular Malaysia and determine the factors that could affect the transmission of dengue among the OA. Eight OA communities consisting of 491 individuals were recruited. From the study, at least 17% of the recruited study participants were positive for dengue IgG, indicating past exposure to dengue. Analysis on the demographic and socioeconomic variables suggested that high seroprevalence of dengue was significantly associated with those above 13 years old and a low household income of less than MYR500 (USD150). It was also associated with the vast presence of residential areas and the presence of a lake. Remote sensing analysis showed that higher land surface temperatures and lower land elevations also contributed to higher dengue seroprevalence. The present study suggested that both demographic and geographical factors contributed to the increasing risk of contracting dengue among the OA living at the forest fringe areas of Peninsular Malaysia. The OA, hence, remained vulnerable to dengue.
---
## Body
## 1. Introduction
Dengue is a mosquito-borne viral disease that causes an estimated 390 million infections annually of which 96 million resulted in clinical manifestations [1]. The disease is caused by dengue virus (DENV), which is transmitted by the Aedes sp. mosquitoes. There are four dengue virus serotypes: dengue type 1 virus (DENV-1), dengue type 2 virus (DENV-2), dengue type 3 virus (DENV-3), and dengue type 4 virus (DENV-4). All four DENV serotypes circulate in most of the dengue-endemic regions such as in Indonesia, Vietnam, Thailand, and Malaysia. Once infected with the virus, dengue may manifest as clinically unapparent or asymptomatic infection, undifferentiated fever, or as severe dengue.Dengue was thought to have originated from the sylvatic cycle where the virus circulated among nonhuman primates and the tree top-dwellingAedes sp. mosquitoes such as Aedes niveus and A. luteocephalus [2]. At an estimated 1,000 years ago, dengue spilled into the human populations [3] and became endemic following rapid, unplanned urbanization and massive population migration from the rural to the urban areas [4]. In the endemic human cycle, dengue is transmitted mainly by the vectors A. aegypti and A. albopictus [4, 5]. The vectors are widely found in the subtropical and tropical regions of the world. A. albopictus has been suggested to bridge the sylvatic and urban cycle of dengue due to their abundance in the rural and forested areas in comparison to A. aegypti [6, 7].Malaysia is among the earlier countries that reported dengue hyperendemicity and dengue hemorrhagic fever [8]. The dengue surveillance system implemented in Malaysia operates by receiving notifications of febrile dengue cases from both the government and private hospitals and clinics. The system, however, did not wholly include the underserved and economically marginalized communities such as the indigenous people of Peninsular Malaysia locally known as the Orang Asli (OA), as most still seek medical advice from the village shamans and use traditional medicines for treatments [9]. Earlier reports on dengue prevalence among the forest fringe populations were published in 1956 and 1958 [10, 11], which reported that virtually all adults from the rural communities of ethnic Malays had been exposed to dengue [10]. The study conducted two years later in 1958 showed that about 90% of the rural ethnic Malays and the OA in Bukit Lanong and Cameron Highlands, Pahang, had neutralizing antibodies against DENV [11]. These two studies predated the development of more accurate dengue serological assays. Results obtained from these earlier studies, hence, could be reflective of an imperfect laboratory tool where the ELISA used could highly cross-reacted with other arboviruses. Another study conducted 30 years later, however, showed a similar dengue seroprevalence (80%) among the forest fringe populations in Malaysia [12]. Nevertheless, more recent studies demonstrated that a wide difference of dengue seroprevalence existed between the rural populations in East Malaysia (24%) [13] and Peninsular Malaysia (91%) [14]. These studies suggested that dengue transmission and prevalence varied over time for populations residing in the rural and forest fringe areas of Malaysia. Many factors could have contributed to the differing dengue prevalence in these populations. 
The present study attempted to determine these factors by investigating the potential influence that demographic and socioeconomic variables as well as land cover and physical environmental factors might have on dengue IgG seroprevalence. The serosurvey, land cover, and remote sensing analysis were performed in eight different OA villages distributed across the states in Peninsular Malaysia. This represents a cross-sectional study using convenience-sampling method among voluntary members of different OA villages.
## 2. Methods
### 2.1. Ethics Approval and Consent to Participate
This study was approved by the Ethics Committee of the University Malaya Medical Centre (UMMC; MEC Ref. 824.11) and the Department of Orang Asli Development or locally known as the Jabatan Kemajuan Orang Asli (JAKOA), Ministry of Rural and Regional Development Malaysia.Prior to obtaining informed consent, members of the community were given a briefing on the study. Participants who agreed to participate provided an oral consent to the trained field assistants, followed by a written consent. In instances where oral consent was received, but written consent could not be obtained due to illiteracy, the participants would provide either a thumbprint (for participants older than 13 years old) or written and oral consent from the legal guardian. The UMMC MEC 824.11 approval permitted both the use of thumbprints and a legal guardian’s signature as indicators of written consent. Study participation was voluntary, and participants could withdraw at any time during study duration by informing the study coordinator.
### 2.2. Study Population and Area
The serosurvey conducted was a cross-sectional study performed among OA populations residing in eight different OA villages in the forest or forest fringe areas of Peninsular Malaysia (Figure1). The Orang Asli (OA) constituted about 0.6% of the Malaysian population and comprised of mainly 18 indigenous tribes (https://www.coac.org.my/). The sampling was performed between November 2007 and October 2010. No specific age was targeted as participation was on voluntary basis.Figure 1
Map of Peninsular Malaysia showing the locations of theOrang Asli villages surveyed in the study. The red line shows the state division, while the gray line shows the division of districts in each state. The Orang Asli villages are indicated with green bubbles.The villages in the present study were selected from the list of sites made available to the authors by JAKOA. The selection was based on the village’s ease of access, its size (more than 100 individuals in the population), the villagers’ receptivity to outsiders, and their nonnomadic lifestyle. As participation was on a voluntary basis, there were no inclusion or exclusion criteria for recruitment. A total of 716 participants were recruited; however, only 491 (68.6%) consented to blood withdrawal. The minimum number of sample size required was 246, calculated by EpiTools epidemiological calculator (epitools.ausvet.com.au) based on a 0.2 apparent prevalence, 0.5 estimated precision, 0.95 confidence level, and an estimated population size of 1,600.The selected villages were the Sungai Perah village, the Sungai Bumbun village, the Gurney village, the Pos Iskandar village, the Hulu Langat village, the Kuala Betis village, the Pos Betau village, and the Sungai Layau village, located in different parts of Peninsular Malaysia (Table1, Figure 1). The villages were located mostly in the forest fringe areas surrounded by rubber and oil palm plantations. In general, the villages had basic utility infrastructures such as water, electricity, and concrete houses. However, they were not fully utilized or evenly distributed as many could not afford the monthly utility bills. As such, the villagers depended highly on nearby rivers for daily water source. Villages such as Pos Iskandar, Kuala Betis, Pos Betau, and Sungai Layau underwent a resettlement program, which included improvement of nearby access roads. Although concrete houses were built, there were still many structures made from bamboo, wood, bricks, and Nipah palm trees. Each village had a population of more than 100 inhabitants. Most of the villagers were unskilled laborers employed at nearby construction sites, factories, vegetable farms, oil palm, and rubber plantations. The villagers also reared animals such as pigs, chickens, and ducks for food and kept monkeys, dogs, and cats as pets. These animals were mostly left to roam freely in the villages.Table 1
Location of the surveyed villages, their accessibility, and information on dengue prevalence.
Surveyed villagesLocationN∗Dengue serology positiveNearby multilane roadsLongitude (°E)Latitude (°N)N%Sungai Perah100° 54 72″4° 24″ 288″65 (43%)3250.04Gurney101° 24″ 144″3° 24″ 108″16 (11%)425.04Sungai Bumbun101° 24″ 72″2° 48″ 180″16 (11%)425.03Pos Iskandar102° 36″ 180″3° 0″ 216″109 (73%)2623.91Hulu Langat101° 54″ 36″2° 54″ 144″29 (19%)413.82Kuala Betis101° 42″ 324″4° 54″ 0″77 (51%)79.12Pos Betau101° 46″ 48″4° 6″ 0″91 (61%)44.41Sungai Layau104° 6″ 0″1° 30″ 108″88 (59%)22.32∗The percentage of participation was estimated based on an average population of 150 per village.
### 2.3. Structured Questionnaire Survey
The pretested questionnaire contained information on participant demographics (i.e., age, gender, and level of education attained) and socioeconomic status (i.e., occupation and household income; Table1). The questionnaire was designed in the national language, Bahasa Malaysia, which was well understood by all of the participants. For those who were not fluent in the language, interpreters were provided by JAKOA. The questionnaire survey was performed by trained field investigators supervised by team supervisors. It was performed prior to blood withdrawal. Each answered questionnaire was given a unique identifier, and the same identifier was used for the blood samples. Completed forms were checked for accuracy, legibility, and completeness at the end of sampling day and verified by the team supervisors. The presence of a JAKOA official was required for all the visits.
### 2.4. Blood Collection
Approximately 3 ml of venous blood was drawn from each participant by trained medical assistants and nurses. The blood samples, in vacutainer blood tubes, were kept in chilled condition and transported to the Department of Parasitology, Faculty of Medicine, University of Malaya, after each study visit. The blood samples were immediately centrifuged at 500×g for 10 min to obtain the serum. The serum was then stored at −20°C until further tests.
### 2.5. Dengue IgG Capture ELISA
The IgG capture enzyme-linked immunosorbent assay (ELISA) was performed using the Standard Diagnostics Dengue IgG Capture ELISA (SD, Korea; 11EK10) according to the recommended protocol. The absorbance was read at 450/620 nm using a Tecan Sunrise spectrophotometer (Mannedorf, Switzerland). The cutoff (CO) value was determined by adding 0.3 to the negative control’s average absorbance value. An absorbance reading ≥CO value was considered positive for the presence of dengue-specific IgG.
### 2.6. Land Cover Analysis
The villages’ location was determined using Google Earth 5.2.1 (https://www.google.com/Earth) as previously described [14]. Land cover assessment was made within a 2 km radius of the center of each village. Land cover features were divided into three categories: (1) water body, (2) built-up, and (3) vegetation. The water body was represented by rivers, streams, and ponds. Built-up consisted of residential, commercial, and industrial areas. They were identified based on the building design and location. For instance, industrial buildings would normally have a wider and bigger rooftop and located in the middle of a large clearing. A commercial area is usually found in a city center while a residential area is located at the city outskirts consisting of fairly homogenous structures. Categorization of the built-up was also assisted by Google Earth’s denomination and by physical visits to the villages. Vegetation was represented mostly by forests and plantations. A plantation site was observed as patches of distinct homogenous pattern of greenery that consisted of oil palm and rubber plantations. These observations were also supported by the physical visits to the villages and the surrounding area. The estimation of land cover area was performed using GE-Path 1.4.4 (http://www.sgrillo.net/googleearth/gepath.htm) by creating a grid map overlay with the land cover map, enabling a quantitative assessment. One grid area was equivalent to 1 km2, and the study surveyed an area of 2 km radius from the center of the villages.
### 2.7. Remote Sensing Environmental-Derived Data
The Geographical Information System (GIS) was used to integrate survey data with remotely sensed satellite sensor environmental data. The data were typically provided as a raster file or in arrays of cells, in which each grid-cell, or pixel, had a certain value depending on how the image was captured and what it represented. There were three environmental data used in the present study: 1) the monthly average land surface temperature (LST); 2) the normalized difference vegetation index (NDVI); and 3) the digital elevation model (DEM). The land surface temperature data were obtained at 30 arcsec (∼1 km) resolution and downloaded from the WorldClim website (http://www.worldclim.org). Temperature records were produced from the global weather station for the period of 1950 to 2000 and were interpolated using a thin-plate smoothing spline algorithm [15].The normalized difference vegetation index measures the vegetation density. It used a time series of a nominal 1 km spatial resolution from Moderate Resolution Imaging Spectroradiometer (MODIS) data that were downloaded from the NASA’s Earth Observing System (EOS) data gateway (http://modis.gsfc.nasa.gov/data/dataprod/index.php). The normalized difference vegetation index was generated using a novel spline-based algorithm following the methods described by Scharlemann et al. [16]. The algorithm was tested on generated artificial data using randomly selected values of both amplitudes and phases, and it provided an accurate estimate of the input variables under all conditions. The algorithm was then applied to produce layers that captured the seasonality of the MODIS data. The digital elevation model information was generated from Radarsat data obtained from the Department of Survey and Mapping Malaysia.A point estimate for each village was extracted for each environmental layer following a temporal Fourier analysis. They were transformed and analyzed using ESRI ArcGIS V9.3 software. A univariate (the Wald test) and multivariate (likelihood ratio test) logistic regression analysis with a stepwise procedure was performed to examine the relationship between remote sensing derived environmental variables and dengue seropositivity using STATA/IC 10.0 (StataCorp LP, College Station, Texas, USA).
### 2.8. Statistical Analyses
Statistical analyses were conducted using IBM SPSS 13.0 for Windows (Chicago, IL, USA). The initial data entry was crosschecked regularly to ensure that data were correctly and consistently entered. A percentage was used to describe descriptive data, such as the seroprevalence of dengue in the studied population according to the village, age, and gender. Univariate analysis was used to assess the potential associations between dengue seropositivity (the outcome of interest) and the sociodemographic characteristics. Only variables that were significantly associated in the univariate model were included in a logistic regression analysis using a backward elimination model. A significance level ofp<0.05 according to the odds ratios (OR) and a 95% confidence interval (95% CI) were used for all tests to indicate the strength of the association between dengue seropositivity and the respective variables.
## 2.1. Ethics Approval and Consent to Participate
This study was approved by the Ethics Committee of the University Malaya Medical Centre (UMMC; MEC Ref. 824.11) and the Department of Orang Asli Development or locally known as the Jabatan Kemajuan Orang Asli (JAKOA), Ministry of Rural and Regional Development Malaysia.Prior to obtaining informed consent, members of the community were given a briefing on the study. Participants who agreed to participate provided an oral consent to the trained field assistants, followed by a written consent. In instances where oral consent was received, but written consent could not be obtained due to illiteracy, the participants would provide either a thumbprint (for participants older than 13 years old) or written and oral consent from the legal guardian. The UMMC MEC 824.11 approval permitted both the use of thumbprints and a legal guardian’s signature as indicators of written consent. Study participation was voluntary, and participants could withdraw at any time during study duration by informing the study coordinator.
## 2.2. Study Population and Area
The serosurvey conducted was a cross-sectional study performed among OA populations residing in eight different OA villages in the forest or forest fringe areas of Peninsular Malaysia (Figure1). The Orang Asli (OA) constituted about 0.6% of the Malaysian population and comprised of mainly 18 indigenous tribes (https://www.coac.org.my/). The sampling was performed between November 2007 and October 2010. No specific age was targeted as participation was on voluntary basis.Figure 1
Map of Peninsular Malaysia showing the locations of theOrang Asli villages surveyed in the study. The red line shows the state division, while the gray line shows the division of districts in each state. The Orang Asli villages are indicated with green bubbles.The villages in the present study were selected from the list of sites made available to the authors by JAKOA. The selection was based on the village’s ease of access, its size (more than 100 individuals in the population), the villagers’ receptivity to outsiders, and their nonnomadic lifestyle. As participation was on a voluntary basis, there were no inclusion or exclusion criteria for recruitment. A total of 716 participants were recruited; however, only 491 (68.6%) consented to blood withdrawal. The minimum number of sample size required was 246, calculated by EpiTools epidemiological calculator (epitools.ausvet.com.au) based on a 0.2 apparent prevalence, 0.5 estimated precision, 0.95 confidence level, and an estimated population size of 1,600.The selected villages were the Sungai Perah village, the Sungai Bumbun village, the Gurney village, the Pos Iskandar village, the Hulu Langat village, the Kuala Betis village, the Pos Betau village, and the Sungai Layau village, located in different parts of Peninsular Malaysia (Table1, Figure 1). The villages were located mostly in the forest fringe areas surrounded by rubber and oil palm plantations. In general, the villages had basic utility infrastructures such as water, electricity, and concrete houses. However, they were not fully utilized or evenly distributed as many could not afford the monthly utility bills. As such, the villagers depended highly on nearby rivers for daily water source. Villages such as Pos Iskandar, Kuala Betis, Pos Betau, and Sungai Layau underwent a resettlement program, which included improvement of nearby access roads. Although concrete houses were built, there were still many structures made from bamboo, wood, bricks, and Nipah palm trees. Each village had a population of more than 100 inhabitants. Most of the villagers were unskilled laborers employed at nearby construction sites, factories, vegetable farms, oil palm, and rubber plantations. The villagers also reared animals such as pigs, chickens, and ducks for food and kept monkeys, dogs, and cats as pets. These animals were mostly left to roam freely in the villages.Table 1
Location of the surveyed villages, their accessibility, and information on dengue prevalence.
| Surveyed villages | Longitude (°E) | Latitude (°N) | N∗ | Dengue serology positive, N | Dengue serology positive, % | Nearby multilane roads |
|---|---|---|---|---|---|---|
| Sungai Perah | 100° 54′ 72″ | 4° 24′ 288″ | 65 (43%) | 32 | 50.0 | 4 |
| Gurney | 101° 24′ 144″ | 3° 24′ 108″ | 16 (11%) | 4 | 25.0 | 4 |
| Sungai Bumbun | 101° 24′ 72″ | 2° 48′ 180″ | 16 (11%) | 4 | 25.0 | 3 |
| Pos Iskandar | 102° 36′ 180″ | 3° 0′ 216″ | 109 (73%) | 26 | 23.9 | 1 |
| Hulu Langat | 101° 54′ 36″ | 2° 54′ 144″ | 29 (19%) | 4 | 13.8 | 2 |
| Kuala Betis | 101° 42′ 324″ | 4° 54′ 0″ | 77 (51%) | 7 | 9.1 | 2 |
| Pos Betau | 101° 46′ 48″ | 4° 6′ 0″ | 91 (61%) | 4 | 4.4 | 1 |
| Sungai Layau | 104° 6′ 0″ | 1° 30′ 108″ | 88 (59%) | 2 | 2.3 | 2 |

∗The percentage of participation was estimated based on an average population of 150 per village.
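As a check on the sample-size calculation described above, the following minimal sketch applies the standard formula for estimating an apparent prevalence, n = Z²p(1−p)/d². It reproduces the reported minimum of 246 when the desired precision d is taken as 0.05 (which suggests the stated "0.5 estimated precision" refers to ±5%); the function name and the omission of a finite-population correction are our own assumptions, not details stated by the authors.

```python
import math

def prevalence_sample_size(p, d, z=1.96):
    """n = Z^2 * p * (1 - p) / d^2: sample size needed to estimate an
    apparent prevalence p to within +/-d at ~95% confidence (z = 1.96)."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

# With apparent prevalence 0.2 and precision 0.05, this yields 246,
# matching the minimum sample size reported above.
print(prevalence_sample_size(0.2, 0.05))  # 246
```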
## 2.3. Structured Questionnaire Survey
The pretested questionnaire contained information on participant demographics (i.e., age, gender, and level of education attained) and socioeconomic status (i.e., occupation and household income; Table 1). The questionnaire was designed in the national language, Bahasa Malaysia, which was well understood by all of the participants. For those who were not fluent in the language, interpreters were provided by JAKOA. The questionnaire survey was performed by trained field investigators supervised by team supervisors. It was performed prior to blood withdrawal. Each answered questionnaire was given a unique identifier, and the same identifier was used for the blood samples. Completed forms were checked for accuracy, legibility, and completeness at the end of each sampling day and verified by the team supervisors. The presence of a JAKOA official was required for all the visits.
## 2.4. Blood Collection
Approximately 3 ml of venous blood was drawn from each participant by trained medical assistants and nurses. The blood samples, in vacutainer blood tubes, were kept in chilled condition and transported to the Department of Parasitology, Faculty of Medicine, University of Malaya, after each study visit. The blood samples were immediately centrifuged at 500×g for 10 min to obtain the serum. The serum was then stored at −20°C until further tests.
## 2.5. Dengue IgG Capture ELISA
The IgG capture enzyme-linked immunosorbent assay (ELISA) was performed using the Standard Diagnostics Dengue IgG Capture ELISA (SD, Korea; 11EK10) according to the recommended protocol. The absorbance was read at 450/620 nm using a Tecan Sunrise spectrophotometer (Mannedorf, Switzerland). The cutoff (CO) value was determined by adding 0.3 to the negative control’s average absorbance value. An absorbance reading ≥CO value was considered positive for the presence of dengue-specific IgG.
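As an illustration of the cutoff rule just described, here is a minimal sketch; the function name and the absorbance values in the example are hypothetical, not study data.

```python
def dengue_igg_positive(sample_od, negative_control_ods, offset=0.3):
    """Apply the kit's cutoff rule: CO = mean negative-control
    absorbance + 0.3; a 450/620 nm reading >= CO is positive."""
    cutoff = sum(negative_control_ods) / len(negative_control_ods) + offset
    return sample_od >= cutoff

# Hypothetical run: negative controls averaging 0.08 give a cutoff of 0.38.
print(dengue_igg_positive(0.52, [0.07, 0.09]))  # True
```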
## 2.6. Land Cover Analysis
The villages’ locations were determined using Google Earth 5.2.1 (https://www.google.com/Earth) as previously described [14]. Land cover assessment was made within a 2 km radius of the center of each village. Land cover features were divided into three categories: (1) water body, (2) built-up, and (3) vegetation. The water body category was represented by rivers, streams, and ponds. Built-up consisted of residential, commercial, and industrial areas. These were identified based on the building design and location. For instance, industrial buildings would normally have a wider and bigger rooftop and be located in the middle of a large clearing. A commercial area is usually found in a city center, while a residential area is located at the city outskirts and consists of fairly homogeneous structures. Categorization of the built-up areas was also assisted by Google Earth’s denomination and by physical visits to the villages. Vegetation was represented mostly by forests and plantations. A plantation site was observed as patches of a distinct homogeneous pattern of greenery that consisted of oil palm and rubber plantations. These observations were also supported by the physical visits to the villages and the surrounding area. The estimation of land cover area was performed using GE-Path 1.4.4 (http://www.sgrillo.net/googleearth/gepath.htm) by creating a grid map overlaid on the land cover map, enabling a quantitative assessment. One grid cell was equivalent to 1 km², and the study surveyed an area of 2 km radius from the center of each village.
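The grid tally described above reduces to a simple percentage computation once cells are assigned to categories. The sketch below assumes hypothetical per-category cell counts for a single village; it is not derived from the study's data.

```python
def land_cover_percentages(cell_counts):
    """Convert per-category grid-cell counts (each cell ~1 km^2) into
    percentages of the surveyed area around a village."""
    total = sum(cell_counts.values())
    return {category: round(100 * count / total, 1)
            for category, count in cell_counts.items()}

# Hypothetical counts for one village.
print(land_cover_percentages({"water body": 2, "built-up": 3, "vegetation": 8}))
# {'water body': 15.4, 'built-up': 23.1, 'vegetation': 61.5}
```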
## 2.7. Remote Sensing Environmental-Derived Data
The Geographical Information System (GIS) was used to integrate survey data with remotely sensed satellite sensor environmental data. The data were typically provided as a raster file, or arrays of cells, in which each grid cell, or pixel, had a certain value depending on how the image was captured and what it represented. Three environmental datasets were used in the present study: (1) the monthly average land surface temperature (LST); (2) the normalized difference vegetation index (NDVI); and (3) the digital elevation model (DEM). The land surface temperature data were obtained at 30 arcsec (∼1 km) resolution and downloaded from the WorldClim website (http://www.worldclim.org). Temperature records were produced from the global weather station data for the period of 1950 to 2000 and were interpolated using a thin-plate smoothing spline algorithm [15].

The normalized difference vegetation index measures vegetation density. It used a time series of a nominal 1 km spatial resolution from Moderate Resolution Imaging Spectroradiometer (MODIS) data that were downloaded from NASA’s Earth Observing System (EOS) data gateway (http://modis.gsfc.nasa.gov/data/dataprod/index.php). The normalized difference vegetation index was generated using a novel spline-based algorithm following the methods described by Scharlemann et al. [16]. The algorithm was tested on artificial data generated using randomly selected values of both amplitudes and phases, and it provided an accurate estimate of the input variables under all conditions. The algorithm was then applied to produce layers that captured the seasonality of the MODIS data. The digital elevation model information was generated from Radarsat data obtained from the Department of Survey and Mapping Malaysia.

A point estimate for each village was extracted for each environmental layer following a temporal Fourier analysis. The estimates were transformed and analyzed using ESRI ArcGIS V9.3 software. Univariate (the Wald test) and multivariate (likelihood ratio test) logistic regression analyses with a stepwise procedure were performed to examine the relationship between remote sensing derived environmental variables and dengue seropositivity using STATA/IC 10.0 (StataCorp LP, College Station, Texas, USA).
## 2.8. Statistical Analyses
Statistical analyses were conducted using IBM SPSS 13.0 for Windows (Chicago, IL, USA). The initial data entry was crosschecked regularly to ensure that data were correctly and consistently entered. Percentages were used to describe descriptive data, such as the seroprevalence of dengue in the studied population according to village, age, and gender. Univariate analysis was used to assess the potential associations between dengue seropositivity (the outcome of interest) and the sociodemographic characteristics. Only variables that were significantly associated in the univariate model were included in a logistic regression analysis using a backward elimination model. A significance level of p<0.05 according to the odds ratios (OR) and a 95% confidence interval (95% CI) were used for all tests to indicate the strength of the association between dengue seropositivity and the respective variables.
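The univariate-screen-then-backward-elimination workflow described above can be sketched as follows. This is a minimal illustration using statsmodels rather than SPSS, and the data frame, column names, and synthetic data are hypothetical assumptions, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_elimination(df, outcome, candidates, alpha=0.05):
    """Refit a logistic model, dropping the least significant predictor
    until every remaining predictor has p < alpha; returns the final fit
    (or None if nothing survives)."""
    kept = list(candidates)
    while kept:
        X = sm.add_constant(df[kept].astype(float))
        fit = sm.Logit(df[outcome], X).fit(disp=0)
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < alpha:
            return fit  # ORs: np.exp(fit.params); 95% CI: np.exp(fit.conf_int())
        kept.remove(worst)
    return None

# Hypothetical data frame: one row per participant, 0/1 codings.
rng = np.random.default_rng(0)
demo = pd.DataFrame({
    "seropositive": rng.integers(0, 2, 200),
    "female": rng.integers(0, 2, 200),
    "age_ge13": rng.integers(0, 2, 200),
})
# With random data this will usually retain nothing and return None.
print(backward_elimination(demo, "seropositive", ["female", "age_ge13"]))
```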
## 3. Results
The present study recruited 491 individuals from eight different OA villages in Peninsular Malaysia (Figure 1). Their ages ranged from 1 to 82 years, with a median age of 11 years; 1.2% were ≤4 years old, 2.2% were 5-6 years old, 70.1% were aged 7–12 years, 3.5% were 13–17 years old, and 23.0% were aged ≥18 years old.

Study participation was on a voluntary basis. The highest participation was obtained from the Pos Iskandar village (n = 109), followed by the Pos Betau village (n = 91), the Sungai Layau village (n = 88), and the Kuala Betis village (n = 77). There were 65 participants from the Sungai Perah village and 30 participants from the Hulu Langat village. The lowest participation was obtained from the Gurney (n = 16) and Sungai Bumbun (n = 16) villages (Table 1).
### 3.1. Dengue IgG Seroprevalence
Results from dengue IgG serological assays suggested that at least 17% (n = 83) of the studied population was positive for dengue IgG. The highest seropositivity was observed in the Sungai Perah population with 50% (n = 32) seropositivity (Table 1). This was followed by the Gurney and Sungai Bumbun villages with about 25% (n = 4) seropositivity. About 23.9% (n = 26) and 13.8% (n = 4) of the Pos Iskandar and Hulu Langat villagers were also dengue IgG positive. Less than 10% dengue IgG seropositivity was observed among volunteers from the Kuala Betis (9.1%), Pos Betau (4.4%), and Sungai Layau (2.3%) villages (Table 1).
### 3.2. Dengue IgG Seropositivity and Demographic and Socioeconomic Risk Factors
The demographic and socioeconomic variables analyzed consisted of gender, age, level of education, occupational status, and monthly household income (Table 2). Univariate analysis of these risk factors identified being female, being above 13 years old, having no formal education, and being employed as factors associated with a higher likelihood of dengue seropositivity (Table 2). Results showed that about 20.2% of female participants and 12.4% of male participants were dengue IgG seropositive (OR = 1.78; 95% CI = 1.08–2.95; p=0.023; Table 2). Dengue IgG seropositivity was also significantly different between those aged ≤12 years and ≥13 years old. About 10.6% of participants aged ≤12 years old and 34.6% of those aged ≥13 years old were positive for dengue IgG (OR = 4.45; 95% CI = 2.71–7.26; p<0.001). Further analysis of the age groups showed that dengue IgG was found only in those aged more than 4 years old, where those aged 5-6, 7–12, 13–17, and ≥18 showed 9.1%, 10.8%, 11.8%, and 38.1% seropositivity to dengue, respectively. There was a significant age-dependent increase of dengue seropositivity among the volunteers (χ² = 47.26; p<0.001).

Table 2
Analysis of potential risk factors associated with dengue seroprevalence among the Orang Asli communities in Peninsular Malaysia (N = 491).
| Variables | N | Dengue serology positive, N | Dengue serology positive, % | OR (95% CI) | p value |
|---|---|---|---|---|---|
| **Gender** | | | | | |
| Female | 282 | 57 | 20.2 | 1.78 (1.08–2.95) | 0.023 |
| Male | 209 | 26 | 12.4 | 1 | |
| **Age (years)∗** | | | | | |
| ≥13 years | 130 | 45 | 34.6 | 4.43 (2.71–7.26) | <0.001 |
| ≤12 years | 361 | 38 | 10.6 | 1 | |
| **Level of education** | | | | | |
| No formal education | 123 | 41 | 33.3 | 2.92 (2.00–4.27) | <0.001 |
| Formal education | 368 | 42 | 11.4 | 1 | |
| **Occupational status** | | | | | |
| Working | 62 | 21 | 33.9 | 2.34 (1.54–3.56) | <0.001 |
| Not working | 429 | 62 | 14.5 | 1 | |
| **Household income (RM/month)∗** | | | | | |
| <RM 500 | 329 | 68 | 20.7 | 2.23 (1.32–3.78) | <0.001 |
| >RM 500 | 162 | 15 | 9.3 | 1 | |

∗Variables that were significantly associated with dengue prevalence following a multivariate analysis. A significant association is indicated by p<0.05. N: number examined; %: percentage; the reference group is marked as OR = 1.

Results also showed that participants with no formal education, i.e., those who did not complete their six years of primary school, had significantly higher seropositivity for dengue in comparison to those with a formal education, i.e., those who had completed their six-year primary education (OR = 2.92; 95% CI = 2.00–4.27; p<0.001; Table 2). In addition, those who had acquired jobs such as farming, hunting, or unskilled labor, or more professional occupations such as teachers, nurses, and business entrepreneurs, had significantly higher dengue seropositivity in comparison to those who did not possess a job (OR = 2.34; 95% CI = 1.54–3.56; p<0.023). Analysis of the monthly income showed that those who earned a cumulative family income of less than MYR500 (∼USD150) were also more likely to have had dengue (OR = 2.23; 95% CI = 1.32–3.78; p=0.002).

Following a logistic regression analysis, only two groups of participants were significantly associated with dengue IgG seropositivity: those who earned a cumulative monthly income of less than MYR500 (∼USD150) and those aged more than 13 years old. These groups were, respectively, 2.2 and 4.4 times more likely to have had dengue in the past (95% CI = 1.86–6.60; p<0.001; Table 2).
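As a worked check on Table 2, the sketch below recomputes the gender odds ratio and its 95% confidence interval from the table's 2×2 counts (57/282 seropositive females versus 26/209 seropositive males). The function name is ours, and the Wald construction is the standard textbook one rather than a method explicitly stated by the authors.

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 table: a/b = exposed positive/negative,
    c/d = unexposed positive/negative."""
    or_ = (a / b) / (c / d)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Females: 57 of 282 positive; males: 26 of 209 positive.
print(odds_ratio_wald_ci(57, 282 - 57, 26, 209 - 26))
# ~(1.78, 1.08, 2.95), matching the OR and 95% CI in Table 2.
```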
### 3.3. Seroprevalence of Dengue and Land Cover Analysis
In addition to investigating the potential association between dengue seroprevalence and the different demographic and socioeconomic variables, the study also explored the potential influence of land use or land cover on the prevalence of dengue. The land cover variables analyzed were built-up areas, consisting of residential, industrial, and commercial areas; vegetation, consisting of forest and agriculture; and water bodies, consisting of lakes, rivers, abandoned mine pools, and ponds. The land cover analysis covered about a 2 km radial distance from the center of the villages. Analysis of the surveyed areas showed that all of the villages were located near a river stream. Rivers remained an important source of water for the OA despite the piped water facility available at the villages, as many could not afford the utility bills.

The Sungai Perah village, which had the highest dengue seropositivity (50%; Table 1), was the only village located near an industrial area (Table 3), and both the Sungai Perah and Sungai Bumbun villages were located near a commercial site. The highest built-up content was found around the Gurney village, followed by the Sungai Perah village. In addition to the presence of a river stream, two other villages had a unique water body presence: the Sungai Bumbun village was located near an abandoned tin mine pool, and a freshwater lake was located near the Pos Iskandar village. The freshwater lake, Tasik Bera, is the largest freshwater swamp in Peninsular Malaysia, spanning about 35 km long and 20 km wide. The lake supports an array of animal and plant life and is an important source of livelihood for the Pos Iskandar villagers. The lake is often frequented by visitors and contributes significantly to the Pos Iskandar village’s economy.

Table 3
Percentage of coverage for the different types of land cover surrounding 2 km radial distance from the center of each surveyed village.
| Surveyed villages | Residential | Industrial | Commercial | Lake | River | Mine pool | Pond | Forest | Agriculture |
|---|---|---|---|---|---|---|---|---|---|
| Sungai Perah | 14.6 | 2.3 | 1.5 | — | 13.8 | — | — | 29.3 | 38.5 |
| Gurney | 29.6 | —∗ | — | — | 12.6 | — | 5.9 | 14.9 | 37.0 |
| Sungai Bumbun | 11.5 | — | 3.1 | — | 4.6 | 50.0 | 15.4 | 11.6 | 3.8 |
| Pos Iskandar | 3.8 | — | — | 30.8 | 5.4 | — | — | 46.2 | 13.8 |
| Hulu Langat | 14.2 | — | — | — | 2.5 | — | — | 29.2 | 54.1 |
| Kuala Betis | 7.7 | — | — | — | 2.3 | — | — | 66.9 | 23.1 |
| Pos Betau | 5.4 | — | — | — | 2.3 | — | — | 38.5 | 53.8 |
| Sungai Layau | 8.0 | — | — | — | 24.8 | — | 12.0 | 23.2 | 32.0 |

∗Not available.

In addition to built-up areas and water bodies, the extent of vegetation was also estimated. Vegetation was vast throughout the surveyed areas, typical of equatorial rainforest, consisting mainly of forested and agricultural areas. The highest forest content was observed around the Kuala Betis village, followed by the Pos Iskandar village. The least vegetation and forest was observed around the Sungai Bumbun village, followed by the Gurney village, where the latter also showed the highest content of built-up areas (Table 3).

The lowest dengue seroprevalence was observed among the Sungai Layau village residents. The village is located about 345 km southeast of Kuala Lumpur (Figure 1) and was mostly surrounded by an oil palm plantation. There was no significant difference in the land cover content between the Sungai Layau village and the Sungai Perah village (which exhibited the highest dengue prevalence) except for the built-up areas (Table 3). The Sungai Perah village had a much higher built-up content (18.4%) in comparison to the Sungai Layau village (8.0%), with the presence of both industrial and commercial areas nearby, which the latter lacked. Although there was much less built-up presence, the Sungai Layau village was more modern and developed in comparison to Sungai Perah and the other villages. The village has a health clinic and a primary and secondary school, unlike the other villages. Many of the villagers completed their tertiary education and hold professional jobs.

Univariate analysis of the investigated variables showed that the different types of built-up areas and the presence of a lake or pond were significantly associated with dengue seroprevalence. However, multivariate analyses using logistic regression of these variables showed that only residential area (OR = 1.106; 95% CI = 1.041–1.175; p<0.001) and lake (OR = 0.152; 95% CI = 0.067–0.348; p<0.001) were significantly associated with the seroprevalence of dengue (Table 4). Vegetation did not seem to correlate with dengue in the present study. The number of multilane roads, on the other hand, was associated with dengue prevalence (OR = 1.821; 95% CI = 1.471–2.252; p<0.001), possibly indicating the role of movement or mobility in the spreading of the disease.

Table 4
Multivariate analysis of potential land cover risk factors associated with dengue seroprevalence among the Orang Asli communities living in the forest fringe areas of Peninsular Malaysia.
| Land cover variables | p value | Exp(B) | 95% CI lower | 95% CI upper |
|---|---|---|---|---|
| Residential | <0.001 | 1.106 | 1.041 | 1.175 |
| Industrial | 0.171 | 0.354 | 0.080 | 1.564 |
| Commercial | 0.061 | 1.884 | 0.971 | 3.657 |
| Lake | <0.001 | 0.152 | 0.067 | 0.348 |
| Pond | 0.146 | 0.906 | 0.792 | 1.035 |
| Multilane roads | <0.001 | 1.821 | 1.471 | 2.252 |
| Land surface temperature (LST) | 0.050 | 1.107 | 0.860 | 1.400 |
| Land elevation (DEM) | 0.040 | 2.210 | 1.510 | 2.630 |
| Normalized difference vegetation index (NDVI) | 0.130 | 0.976 | 0.940 | 1.010 |

A significant association is indicated by p<0.05.
### 3.4. Remote Sensing Environmental Data
Three environmental datasets were assessed in the present study: land surface temperature (LST), land elevation using the digital elevation model (DEM), and the level of vegetation represented by the normalized difference vegetation index (NDVI). A univariate analysis was performed, followed by a multivariable logistic regression model, in order to determine the potential influence that physical environmental factors have on disease prevalence and, possibly, spread. Results suggested that LST exceeding 40°C (OR = 1.107, 95% CI = 0.86–1.40, p=0.05) and elevation less than 50 meters above sea level (OR = 2.210, 95% CI = 1.51–2.63, p=0.04) were significantly associated with >20% dengue IgG seropositivity, such as in the Sungai Perah, Gurney, Sungai Bumbun, and Pos Iskandar villages. Similar to the previous analyses, NDVI showed no significant association with dengue prevalence (OR = 0.976, 95% CI = 0.94–1.01, p=0.13), suggesting its minimal role in possibly influencing dengue transmission (Table 4).
## 4. Discussion
Activities such as the opening of oil palm and rubber plantations, timber extraction, and ecotourism have resulted in substantial land surface changes in many tropical and subtropical regions of the world. To ease travelling and the transport of forest resources, workers, and tourists, highways and multilane roads were built, contributing to an increase in population movement to and from the forest fringe areas. Although the increase in mobility helped to boost economic activities, it may also have inadvertently increased the chances for transmission of infectious diseases. Dengue, a mosquito-borne disease that is hyperendemic in Malaysia, remains a serious public health threat. Although dengue is mandated as a notifiable disease within 24 hours of detection, it is most likely still underreported, especially in populations where minimal health services are available, such as among the OA or those living in the forest fringe areas. Only a few studies have been undertaken to investigate the prevalence of dengue in these populations [10–13], and none of them attempted to determine potential factors that could influence disease transmission and prevalence. The earlier studies conducted in 1956 [10] and 1958 [11] reported high prevalence of dengue (>90%) among rural ethnic Malays and OA in Pahang and an 80% prevalence among the forest fringe populations [12]. The prevalence increased to 91.6% in 2011 in rural areas [14] but varied significantly (24% prevalence) among the forest fringe populations of East Malaysia in the year 2006 [13]. Following the report in 1986 [12], the present study attempted to determine the prevalence of dengue among the forest fringe populations in Peninsular Malaysia and associate it with demographic and socioeconomic factors, in addition to land cover and aspects of the environment such as LST, land elevation, and vegetation.

The present study observed a low prevalence of dengue (17%) among the forest fringe populations in comparison to those reported earlier. An even lower prevalence of dengue (4.9%) was reported more recently, where the study also showed a significantly higher presence of antibodies against Japanese encephalitis (48.4%) among the OA and some presence of IgG antibodies against Zika (13.2%) [17]. This serosurvey suggested that dengue prevalence had decreased over time among those living in forest fringe areas and that it is significantly different from that of the rural areas. The increasing dengue trend in the rural areas was estimated to reach levels as high as those in the urban areas, if not higher [18, 19]. Although the study by Schmidt et al. showed that the lack of piped water supply contributed to higher dengue prevalence in the rural areas of Vietnam, the same was not observed among the villages surveyed in the present study [19].

Despite the overall low dengue seroprevalence in the present study, there was a significant difference in dengue exposure across the surveyed villages, ranging from 2% in the Sungai Layau village to 50% in the Sungai Perah village. Upon comparison of these two villages, no particular differences were observed, except for the presence of an industrial area near Sungai Perah. The industrial area, however, was not shown to be a significant contributor to dengue seroprevalence, in contrast to the presence and size of residential areas.
Based on the assumption that residential area reflects the number of families or individuals present in each village, this could indicate the importance of density or crowding in the transmission and prevalence of dengue in the forest fringe areas [20].

Our study also identified mobility as a cofactor that contributed to high seropositivity of dengue. This was reflected by the higher dengue seropositivity detected among the Pos Iskandar participants in comparison to the Kuala Lipis participants, where the Pos Iskandar village was located near Tasik Bera, the largest natural freshwater lake in Malaysia. The lake area is inhabited by the OA from the Semelai tribe, who are also known as the lake people, and is frequently visited by ecotourists. Due to the traffic and population movement to Tasik Bera, the risk of bringing in DENV from either asymptomatic or viremic individuals increased, especially with the ample presence of vector mosquitoes in the surrounding areas. The role of mobility in the dispersion and prevalence of dengue was made more evident when the Pos Iskandar village was compared to the Kuala Lipis village: both were located near only one multilane road, suggesting similar accessibility; however, only 5.3% of the Kuala Lipis participants were exposed to dengue in comparison to 23.9% of the Pos Iskandar participants. The suggestion that mobility plays an important role in the spread of diseases is consistent with other earlier studies [21, 22].

In addition to population crowding and mobility, previous studies have shown that demographic and socioeconomic attributes contribute significantly to the prevalence of dengue. Lower socioeconomic status and age have been consistently associated with dengue in a number of studies [20, 23–25]. In impoverished areas such as Recife, Brazil, dengue prevalence was as high as 59% among children ≤5 years old [20]. Similarly, the present study showed that dengue was inversely associated with wealth and socioeconomic status even among the forest fringe populations of OA and that increasing age was a significant variable associated with dengue. The present study displayed an age-dependent increase in prevalence, with those above 18 years old showing the highest seropositivity to dengue (38.1%), and those earning the bare minimum of ∼USD150 were more likely to be exposed to the disease. In our study, better accessibility to healthcare and education could be important factors in reducing dengue transmission. Empowering the community with disease knowledge and prevention practices, hence, would likely assist in curbing the spread of dengue [26].

Environmental variables such as LST and elevation were also found to contribute to the prevalence of dengue. Land surface temperature and elevation played a role by possibly influencing the vectorial capacity of the dengue vector, the Aedes sp. mosquitoes [27–30], where the abundance of A. aegypti has been shown to decline significantly at elevations higher than 1,700 m [28, 29]. Higher LST was also associated with a high occurrence of severe dengue in four provinces in Thailand [30]. These two environmental factors could be included in a simple predictive algorithm to determine dengue expansion, just as they have been used in the development of a K-map model to visualize dengue hot spot areas [31].
Vegetation, however, was not shown to be a significant contributor to the prevalence of dengue in the present study, despite the perception of a higher abundance of mosquitoes in highly vegetated areas and the previously reported association between greenery and the number of dengue cases [32].

Despite the presented results, the present study would have benefited from the use of more sophisticated land survey methods, such as an unmanned aerial vehicle, with which higher resolution images could be obtained. Inclusion of climatic variables such as air temperature and rainfall should be considered in future studies, as should mosquito population and density. In addition, future studies should also address issues of dengue cross-reactivity with other arboviruses, such as Zika and Japanese encephalitis, in serological assays [17]. Since the study was performed approximately 10 years ago, it is possible that many societal and climatic changes have occurred over the years, which could have affected the present dengue serological status among the OA. This, however, remains to be assessed. Further studies, hence, are needed to ascertain the degree of influence that the examined variables have on dengue transmission and prevalence in these forest fringe OA populations.
## 5. Conclusion
The present study highlighted the prevalence of dengue among the underserved and economically marginalized OA population of Malaysia. Variables such as population mobility, household density, age, and lower socioeconomic status are among the risk factors for dengue identified in the study. In addition, environmental factors consisting of LST and elevation appeared to also influence the prevalence of dengue. These factors, however, are not exclusive to populations living in forest fringe areas but could also apply to other underserved or economically marginalized populations. Better access to healthcare and empowerment with disease knowledge are recommended to ensure better success of preventive measures against dengue in these populations.
---
*Source: 1019238-2020-05-25.xml* | 2020 |
# A Fractional Order Model for Viral Infection with Cure of Infected Cells and Humoral Immunity
**Authors:** Adnane Boukhouima; Khalid Hattaf; Noura Yousfi
**Journal:** International Journal of Differential Equations
(2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1019242
---
## Abstract
In this paper, we study the dynamics of a viral infection model formulated by five fractional differential equations (FDEs) to describe the interactions between host cells, virus, and humoral immunity presented by antibodies. The infection transmission process is modeled by Hattaf-Yousfi functional response which covers several forms of incidence rate existing in the literature. We first show that the model is mathematically and biologically well-posed. By constructing suitable Lyapunov functionals, the global stability of equilibria is established and characterized by two threshold parameters. Finally, some numerical simulations are presented to illustrate our theoretical analysis.
---
## Body
## 1. Introduction
The immune response plays an important role in controlling the dynamics of viral infections such as human immunodeficiency virus (HIV), hepatitis B virus (HBV), hepatitis C virus (HCV), and human T-cell leukemia virus (HTLV). Therefore, many mathematical models have been developed to incorporate the role of the immune response in viral infections. Some of these models considered the cellular immune response mediated by cytotoxic T lymphocyte (CTL) cells that attack and kill the infected cells [1–5], and others considered the humoral immune response based on the antibodies which are produced by the B-cells and are programmed to neutralize the viruses [6–11]. However, all these models have been formulated by using ordinary differential equations (ODEs), in which the memory effect is neglected even though the immune response involves memory [12, 13].

The fractional derivative is a generalization of the integer derivative and a suitable tool to model real phenomena with memory, which exists in most biological systems [14–16]. The fractional derivative is a nonlocal operator, in contrast to the integer derivative. This means that if we want to compute the fractional derivative at some point t=t1, it is necessary to take into account the entire history from the starting point t=t0 up to the point t=t1. For these reasons, modeling real processes by using fractional derivatives has drawn the attention of several authors in various fields [17–22]. In biology, it has been shown that the fractional derivative is useful to analyse the rheological properties of cells [23]. Furthermore, it has been deduced that the membranes of cells of biological organisms have fractional order electrical conductance [24]. Recently, much work has been done on modeling the dynamics of viral infections with FDEs [25–31]. These works ignored the impact of the immune response, and the majority of them deal only with local stability.

In some viral infections, the humoral immune response is more effective than the cellular immune response [32]. For this reason, we improve the above ODE and FDE models by proposing a new fractional order model that describes the interactions between susceptible host cells, viral particles, and the humoral immune response mediated by antibodies; that is,

$$\begin{aligned}
D^{\alpha}x(t)&=\lambda-dx-f(x,v)v+\rho l,\\
D^{\alpha}l(t)&=f(x,v)v-(m+\rho+\gamma)l,\\
D^{\alpha}y(t)&=\gamma l-ay,\\
D^{\alpha}v(t)&=ky-\mu v-qvw,\\
D^{\alpha}w(t)&=gvw-hw,
\end{aligned} \tag{1}$$

where x(t), l(t), y(t), v(t), and w(t) are the concentrations of susceptible host cells, latently infected cells (infected cells which are not yet able to produce virions), productive infected cells, free virus particles, and antibodies at time t, respectively. Susceptible host cells are assumed to be produced at a constant rate λ, die at the rate dx, and become infected by virus at the rate f(x,v)v. Latently infected cells die at the rate ml and return to the uninfected state, by loss of all covalently closed circular DNA (cccDNA) from their nucleus, at the rate ρl. Productive infected cells are produced from latently infected cells at the rate γl and die at the rate ay. Free virus particles are produced from productive infected cells at the rate ky, cleared at the rate μv, and neutralized by antibodies at the rate qvw. Antibodies are activated against the virus at the rate gvw and die at the rate hw.

In system (1), D^α represents the Caputo fractional derivative of order α, defined for an arbitrary function φ by

$$D^{\alpha}\varphi(t)=\frac{1}{\Gamma(1-\alpha)}\int_{0}^{t}\frac{\varphi'(u)}{(t-u)^{\alpha}}\,du, \tag{2}$$

with 0<α≤1 [33].
Further, the infection transmission process in (1) is modeled by the Hattaf-Yousfi functional response [34], which was recently used in [35, 36] and has the form
$$f(x,v)=\frac{\beta x}{\alpha_0+\alpha_1 x+\alpha_2 v+\alpha_3 xv},$$
where α0, α1, α2, α3 ≥ 0 are the saturation factors measuring the psychological or inhibitory effect and β>0 is the infection rate. In addition, this functional response generalizes many common types existing in the literature, such as the specific functional response proposed by Hattaf et al. in [37] and used in [2, 31] when α0=1; the Crowley-Martin functional response introduced in [38] and used in [39] when α0=1 and α3=α1α2; and the Beddington-DeAngelis functional response proposed in [40, 41] and used in [3, 4, 10] when α0=1 and α3=0. Also, the Hattaf-Yousfi functional response reduces to the saturated incidence rate used in [9] when α0=1 and α1=α3=0 and to the standard incidence function used in [27] when α0=α3=0 and α1=α2=1, and it simplifies to the bilinear incidence rate used in [5, 6] when α0=1 and α1=α2=α3=0.

On the other hand, system (1) becomes a model with ODEs when α=1, which improves and generalizes the ODE model with bilinear incidence rate [42], the ODE model with saturated incidence rate [43], and the ODE model with specific functional response [44].

The rest of the paper is organized as follows. The next section deals with some basic properties of the solutions and the existence of equilibria. The global stability of equilibria is established in Section 3. To verify our theoretical results, we provide some numerical simulations in Section 4, and we conclude in Section 5.
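To make the model concrete, the following minimal numerical sketch integrates system (1) with an explicit Grünwald-Letnikov-type scheme applied to X(t)−X(0), one standard way to discretize a Caputo derivative; production work typically uses a fractional Adams predictor-corrector instead. The function names, step size, and the choice of this particular scheme are our assumptions, not details taken from the paper.

```python
import numpy as np

def hattaf_yousfi(x, v, beta, a0, a1, a2, a3):
    """Hattaf-Yousfi incidence: f(x, v) = beta*x / (a0 + a1*x + a2*v + a3*x*v)."""
    return beta * x / (a0 + a1 * x + a2 * v + a3 * x * v)

def rhs(X, p):
    """Right-hand side F(X) of system (1)."""
    x, l, y, v, w = X
    f = hattaf_yousfi(x, v, p["beta"], p["a0"], p["a1"], p["a2"], p["a3"])
    return np.array([
        p["lam"] - p["d"] * x - f * v + p["rho"] * l,   # susceptible cells
        f * v - (p["m"] + p["rho"] + p["gamma"]) * l,   # latently infected
        p["gamma"] * l - p["a"] * y,                    # productive infected
        p["k"] * y - p["mu"] * v - p["q"] * v * w,      # free virus
        p["g"] * v * w - p["h"] * w,                    # antibodies
    ])

def gl_solve(X0, p, alpha=0.9, h=0.01, n_steps=2000):
    """Explicit Grunwald-Letnikov scheme for D^alpha X = F(X), applied to
    Y = X - X0 so the constant initial state is handled correctly.
    The binomial weights w_j carry the memory of the fractional derivative."""
    X0 = np.asarray(X0, dtype=float)
    w = np.empty(n_steps + 1)
    w[0] = 1.0
    for j in range(1, n_steps + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    X = np.empty((n_steps + 1, X0.size))
    Y = np.zeros_like(X)            # Y[n] = X[n] - X0
    X[0] = X0
    for n in range(1, n_steps + 1):
        memory = w[1:n + 1] @ Y[n - 1::-1]   # sum_{j=1..n} w_j * Y[n-j]
        Y[n] = h ** alpha * rhs(X[n - 1], p) - memory
        X[n] = X0 + Y[n]
    return X
```

Calling `gl_solve` requires a parameter dictionary `p` with the model's rates (lam, d, beta, rho, m, gamma, a, k, mu, q, g, h, a0–a3) and an initial state `X0 = [x0, l0, y0, v0, w0]`.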
## 2. Basic Properties and Equilibria
In this section, we will show that our model is well-posed and we discuss the existence of equilibria.

Since system (1) describes the evolution of cells, we need to prove that the cell concentrations remain nonnegative and bounded. For biological considerations, we assume that the initial conditions of (1) satisfy

$$x(0)\geq 0,\quad l(0)\geq 0,\quad y(0)\geq 0,\quad v(0)\geq 0,\quad w(0)\geq 0. \tag{3}$$

Then we have the following result.

Theorem 1.
Assume that the initial conditions satisfy (3). Then there exists a unique solution of system (1) defined on $[0,+\infty)$. Moreover, this solution remains nonnegative and bounded for all $t\geq 0$.

Proof.
First, system (1) can be written as follows:
$$D^{\alpha}X(t)=F(X), \tag{4}$$
where
$$X(t)=\begin{pmatrix}x(t)\\ l(t)\\ y(t)\\ v(t)\\ w(t)\end{pmatrix}
\quad\text{and}\quad
F(X)=\begin{pmatrix}\lambda-dx-f(x,v)v+\rho l\\ f(x,v)v-(m+\rho+\gamma)l\\ \gamma l-ay\\ ky-\mu v-qvw\\ gvw-hw\end{pmatrix}. \tag{5}$$
It is important to note that when α=1, (4) becomes a system with ODEs. In this case, we refer the reader to [45] for the existence of solutions and to the works [46–50] for the stability of equilibria. In the case of FDEs, we will use Lemma 2.4 in [31] to prove the existence and uniqueness of solutions. Hence, we put
$$\zeta=\begin{pmatrix}\lambda\\0\\0\\0\\0\end{pmatrix},\quad
A=\begin{pmatrix}-d&\rho&0&0&0\\ 0&-(m+\rho+\gamma)&0&0&0\\ 0&\gamma&-a&0&0\\ 0&0&k&-\mu&0\\ 0&0&0&0&-h\end{pmatrix},\quad\text{and}\quad
C=\begin{pmatrix}0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&-q&0\\ 0&0&0&0&g\end{pmatrix}. \tag{6}$$
We discuss four cases:(i)
If $\alpha_0\neq 0$, $F(X)$ can be formulated as follows:
$$F(X)=\zeta+AX+\frac{\alpha_0}{\alpha_0+\alpha_1x+\alpha_2v+\alpha_3xv}\,vB_0X+vCX, \tag{7}$$
where
$$B_0=\begin{pmatrix}-\beta/\alpha_0&0&0&0&0\\ \beta/\alpha_0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\end{pmatrix}. \tag{8}$$
Hence,
$$\|F(X)\|\leq\|\zeta\|+\bigl(\|A\|+v\,(\|B_0\|+\|C\|)\bigr)\|X\|. \tag{9}$$(ii)
If $\alpha_1\neq 0$, we can write $F(X)$ in the form
$$F(X)=\zeta+AX+\frac{\alpha_1x}{\alpha_0+\alpha_1x+\alpha_2v+\alpha_3xv}\,B_1X+vCX, \tag{10}$$
where
$$B_1=\begin{pmatrix}0&0&0&-\beta/\alpha_1&0\\ 0&0&0&\beta/\alpha_1&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\end{pmatrix}. \tag{11}$$
Moreover, we get
$$\|F(X)\|\leq\|\zeta\|+\bigl(\|A\|+\|B_1\|+v\,\|C\|\bigr)\|X\|. \tag{12}$$(iii)
If $\alpha_2\neq 0$, we have
$$F(X)=\zeta+AX+\frac{\alpha_2v}{\alpha_0+\alpha_1x+\alpha_2v+\alpha_3xv}\,B_2X+vCX, \tag{13}$$
where
$$B_2=\begin{pmatrix}-\beta/\alpha_2&0&0&0&0\\ \beta/\alpha_2&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\end{pmatrix}. \tag{14}$$
Further, we obtain
$$\|F(X)\|\leq\|\zeta\|+\bigl(\|A\|+\|B_2\|+v\,\|C\|\bigr)\|X\|. \tag{15}$$(iv)
If $\alpha_3\neq 0$, we have
$$F(X)=\zeta+AX+\frac{\alpha_3xv}{\alpha_0+\alpha_1x+\alpha_2v+\alpha_3xv}\,B_3+vCX, \tag{16}$$
where
$$B_3=\begin{pmatrix}-\beta/\alpha_3\\ \beta/\alpha_3\\ 0\\ 0\\ 0\end{pmatrix}. \tag{17}$$
Then
$$\|F(X)\|\leq\|\zeta\|+\|B_3\|+\bigl(\|A\|+v\,\|C\|\bigr)\|X\|. \tag{18}$$
Hence, the conditions of Lemma 2.4 in [31] are verified, and system (1) has a unique solution on $[0,+\infty)$. Now, we show the nonnegativity of solutions. By (1), we have
$$\begin{aligned}
D^{\alpha}x(t)\big|_{x=0}&=\lambda+\rho l\geq 0, &
D^{\alpha}l(t)\big|_{l=0}&=f(x,v)v\geq 0, &
D^{\alpha}y(t)\big|_{y=0}&=\gamma l\geq 0,\\
D^{\alpha}v(t)\big|_{v=0}&=ky\geq 0, &
D^{\alpha}w(t)\big|_{w=0}&=0\geq 0.
\end{aligned} \tag{19}$$
As in [31, Theorem 2.7], we deduce that the solution of (1) is nonnegative.
Finally, we prove the boundedness of solutions. We define the function
$$T(t)=x(t)+l(t)+y(t)+\frac{a}{2k}v(t)+\frac{aq}{2kg}w(t). \tag{20}$$
Then, we have
$$\begin{aligned}
D^{\alpha}T(t)&=D^{\alpha}x(t)+D^{\alpha}l(t)+D^{\alpha}y(t)+\frac{a}{2k}D^{\alpha}v(t)+\frac{aq}{2kg}D^{\alpha}w(t)\\
&=\lambda-dx(t)-ml(t)-\frac{a}{2}y(t)-\frac{a\mu}{2k}v(t)-\frac{aqh}{2kg}w(t)\\
&\leq\lambda-\delta T(t),
\end{aligned} \tag{21}$$
where $\delta=\min\{d,m,a/2,\mu,h\}$. Thus, we obtain
$$T(t)\leq T(0)E_{\alpha}(-\delta t^{\alpha})+\frac{\lambda}{\delta}\bigl(1-E_{\alpha}(-\delta t^{\alpha})\bigr). \tag{22}$$
Since $0\leq E_{\alpha}(-\delta t^{\alpha})\leq 1$, we get
$$T(t)\leq T(0)+\frac{\lambda}{\delta}. \tag{23}$$
This completes the proof.

Now, we discuss the existence of equilibria. It is clear that system (1) always has an infection-free equilibrium $E_0(\lambda/d,0,0,0,0)$. Then the basic reproduction number of (1) is as follows:
$$R_0=\frac{k\beta\lambda\gamma}{a\mu(m+\rho+\gamma)(d\alpha_0+\lambda\alpha_1)}. \tag{24}$$
To find the other equilibria, we solve the following system:
$$\begin{aligned}
&\lambda-dx-f(x,v)v+\rho l=0, &&\text{(25)}\\
&f(x,v)v-(m+\rho+\gamma)l=0, &&\text{(26)}\\
&\gamma l-ay=0, &&\text{(27)}\\
&ky-\mu v-qvw=0, &&\text{(28)}\\
&gvw-hw=0. &&\text{(29)}
\end{aligned}$$
From (29), we get $w=0$ or $v=h/g$. Then we discuss two cases.

If $w=0$, by (25)-(28), we have $l=(\lambda-dx)/(m+\gamma)$, $y=\gamma(\lambda-dx)/a(m+\gamma)$, $v=k\gamma(\lambda-dx)/a\mu(m+\gamma)$, and
$$f\!\left(x,\frac{k\gamma(\lambda-dx)}{a\mu(m+\gamma)}\right)=\frac{a\mu(m+\rho+\gamma)}{k\gamma}. \tag{30}$$
Since $l\geq 0$, $y\geq 0$, and $v\geq 0$, then $x\leq\lambda/d$. Consequently, there is no equilibrium when $x>\lambda/d$.

We define the function $h_1$ on $[0,\lambda/d]$ by
$$h_1(x)=f\!\left(x,\frac{k\gamma(\lambda-dx)}{a\mu(m+\gamma)}\right)-\frac{a\mu(m+\rho+\gamma)}{k\gamma}. \tag{31}$$
We have $h_1(0)=-a\mu(m+\rho+\gamma)/k\gamma<0$, $h_1'(x)=\partial f/\partial x-(k\gamma d/a\mu(m+\gamma))(\partial f/\partial v)>0$, and $h_1(\lambda/d)=(a\mu(m+\rho+\gamma)/k\gamma)(R_0-1)$.

Hence, if $R_0>1$, (30) has a unique root $x_1\in(0,\lambda/d)$. As a result, when $R_0>1$ there exists an equilibrium $E_1(x_1,l_1,y_1,v_1,0)$ satisfying $x_1\in(0,\lambda/d)$, $l_1=(\lambda-dx_1)/(m+\gamma)$, $y_1=\gamma(\lambda-dx_1)/a(m+\gamma)$, and $v_1=k\gamma(\lambda-dx_1)/a\mu(m+\gamma)$.

If $w\neq 0$, then $v=h/g$. By (25)-(27), we obtain $l=(\lambda-dx)/(m+\gamma)$, $y=\gamma(\lambda-dx)/a(m+\gamma)$, $w=k\gamma g(\lambda-dx)/aqh(m+\gamma)-\mu/q$, and
$$f\!\left(x,\frac{h}{g}\right)=\frac{g(m+\rho+\gamma)}{h(m+\gamma)}(\lambda-dx). \tag{32}$$
Since $l\geq 0$, $y\geq 0$, and $w\geq 0$, we have $x\leq\lambda/d-ah\mu(m+\gamma)/dkg\gamma$. Hence, there is no equilibrium if $x>\lambda/d-ah\mu(m+\gamma)/dkg\gamma$.

We define the function $h_2$ on $[0,\lambda/d-ah\mu(m+\gamma)/dkg\gamma]$ by
$$h_2(x)=f\!\left(x,\frac{h}{g}\right)-\frac{g(m+\rho+\gamma)}{h(m+\gamma)}(\lambda-dx). \tag{33}$$
We have $h_2(0)=-g\lambda(m+\rho+\gamma)/h(m+\gamma)<0$, $h_2'(x)=\partial f/\partial x+gd(m+\rho+\gamma)/h(m+\gamma)>0$, and $h_2(\lambda/d-ah\mu(m+\gamma)/dkg\gamma)=h_1(\lambda/d-ah\mu(m+\gamma)/dkg\gamma)$.

Let us introduce the reproduction number for humoral immunity as follows:
$$R_1=\frac{gv_1}{h}, \tag{34}$$
in which $1/h$ denotes the average life expectancy of antibodies and $v_1$ is the number of free viruses at $E_1$. For its biological significance, $R_1$ represents the average number of the antibodies activated by the virus.

If $R_1<1$, we have $x_1>\lambda/d-ah\mu(m+\gamma)/dkg\gamma$ and
$$h_2\!\left(\frac{\lambda}{d}-\frac{ah\mu(m+\gamma)}{dkg\gamma}\right)<h_1(x_1)=0. \tag{35}$$
Therefore, there is no equilibrium when $R_1<1$.

If $R_1>1$, then $x_1<\lambda/d-ah\mu(m+\gamma)/dkg\gamma$ and
$$h_2\!\left(\frac{\lambda}{d}-\frac{ah\mu(m+\gamma)}{dkg\gamma}\right)>h_1(x_1)=0. \tag{36}$$
In this case, (32) has one root $x_2\in(0,\lambda/d-ah\mu(m+\gamma)/dkg\gamma)$. Consequently, when $R_1>1$, there exists an equilibrium $E_2(x_2,l_2,y_2,v_2,w_2)$ satisfying $x_2\in(0,\lambda/d-ah\mu(m+\gamma)/dkg\gamma)$, $l_2=(\lambda-dx_2)/(m+\gamma)$, $y_2=\gamma(\lambda-dx_2)/a(m+\gamma)$, $v_2=h/g$, and $w_2=k\gamma g(\lambda-dx_2)/aqh(m+\gamma)-\mu/q$. When $R_1=1$, $E_1=E_2$.

We summarize the above discussions in the following theorem.

Theorem 2.
(i)
If $R_0\leq 1$, then system (1) has one infection-free equilibrium of the form $E_0(x_0,0,0,0,0)$, where $x_0=\lambda/d$.(ii)
If $R_0>1$, then system (1) has an infection equilibrium without humoral immunity of the form $E_1(x_1,l_1,y_1,v_1,0)$, where $x_1\in(0,\lambda/d)$, $l_1=(\lambda-dx_1)/(m+\gamma)$, $y_1=\gamma(\lambda-dx_1)/a(m+\gamma)$, and $v_1=k\gamma(\lambda-dx_1)/a\mu(m+\gamma)$.(iii)
If $R_1>1$, then system (1) has an infection equilibrium with humoral immunity of the form $E_2(x_2,l_2,y_2,v_2,w_2)$, where $x_2\in(0,\lambda/d-ah\mu(m+\gamma)/dkg\gamma)$, $l_2=(\lambda-dx_2)/(m+\gamma)$, $y_2=\gamma(\lambda-dx_2)/a(m+\gamma)$, $v_2=h/g$, and $w_2=k\gamma g(\lambda-dx_2)/aqh(m+\gamma)-\mu/q$.
## 3. Global Stability of Equilibria
In this section, we focus on the global stability of equilibria.

Theorem 3.
If $R_0\leq 1$, then the infection-free equilibrium $E_0$ is globally asymptotically stable, and it becomes unstable if $R_0>1$.

Proof.
The proof of the first part of this theorem is based on the construction of a suitable Lyapunov functional that satisfies the conditions given in [51, Lemma 4.6]. Hence, we define a Lyapunov functional as follows:
$$L_0(t)=\frac{\alpha_0}{\alpha_0+\alpha_1x_0}x_0\,\Phi\!\left(\frac{x}{x_0}\right)+\frac{\rho\alpha_0}{2(d+m+\gamma)(\alpha_0+\alpha_1x_0)x_0}\bigl(x-x_0+l\bigr)^2+l+\frac{m+\rho+\gamma}{\gamma}y+\frac{a(m+\rho+\gamma)}{k\gamma}v+\frac{aq(m+\rho+\gamma)}{kg\gamma}w, \tag{37}$$
where $\Phi(x)=x-1-\ln(x)$ for $x>0$. It is not hard to show that the functional $L_0$ is nonnegative. In fact, the function $\Phi$ has a global minimum at $x=1$. Consequently, $\Phi(x)\geq 0$ for all $x>0$.
Calculating the fractional derivative of $L_0(t)$ along solutions of system (1) and using the results in [52], we get
$$D^{\alpha}L_0(t)\leq\frac{\alpha_0}{\alpha_0+\alpha_1x_0}\left(1-\frac{x_0}{x}\right)D^{\alpha}x+\frac{\rho\alpha_0}{(d+m+\gamma)(\alpha_0+\alpha_1x_0)x_0}(x-x_0+l)\bigl(D^{\alpha}x+D^{\alpha}l\bigr)+D^{\alpha}l+\frac{m+\rho+\gamma}{\gamma}D^{\alpha}y+\frac{a(m+\rho+\gamma)}{k\gamma}D^{\alpha}v+\frac{aq(m+\rho+\gamma)}{kg\gamma}D^{\alpha}w. \tag{38}$$
Using λ=dx0, we obtain (39)DαL0t≤-dα0x-x02α0+α1x0x-α0α0+α1x01-x0xfx,vv+ρα0α0+α1x01-x0xlρα0x-x0+ld+m+γα0+α1x0x0dx0-x-m+γl+fx,vv-aμm+ρ+γkγv-aqhm+ρ+γkgγw≤-1x+ρd+m+γx0dα0x-x02α0+α1x0-ρα0m+γl2d+m+γα0+α1x0x0-ρα0x-x02lα0+α1x0xx0+aμm+ρ+γkγR0-1v-aqhm+ρ+γkgγw. Hence, if R0≤1, then DαL0(t)≤0. In addition, the equality holds if and only if x=x0, l=0, y=0, w=0, and (R0-1)v=0. If R0<1, then v=0. If R0=1, from (1), we get f(x0,v)v=0, which implies that v=0. Consequently, the largest invariant set of {(x,l,y,v,w)∈R+5:DαL0(t)=0} is the singleton {E0}. Therefore, by LaSalle's invariance principle [51], E0 is globally asymptotically stable.
The proof of the instability of $E_0$ is based on the computation of the Jacobian matrix of system (1) and the results presented in [53–55]. The Jacobian matrix of (1) at any equilibrium $E(x,l,y,v,w)$ is given by
$$\begin{pmatrix}
-d-\frac{\partial f}{\partial x}v & \rho & 0 & -\frac{\partial f}{\partial v}v-f(x,v) & 0\\[2pt]
\frac{\partial f}{\partial x}v & -(m+\rho+\gamma) & 0 & \frac{\partial f}{\partial v}v+f(x,v) & 0\\[2pt]
0 & \gamma & -a & 0 & 0\\
0 & 0 & k & -\mu-qw & -qv\\
0 & 0 & 0 & gw & gv-h
\end{pmatrix}. \tag{40}$$
We recall that $E$ is locally asymptotically stable if all the eigenvalues $\xi_i$ of (40) satisfy the following condition [53–55]:
$$|\arg(\xi_i)|>\frac{\alpha\pi}{2}. \tag{41}$$
From (40), the characteristic equation at $E_0$ is given as follows:
$$(d+\xi)(h+\xi)\,g_0(\xi)=0, \tag{42}$$
where
$$g_0(\xi)=(m+\rho+\gamma+\xi)(a+\xi)(\mu+\xi)-\frac{k\gamma\beta\lambda}{d\alpha_0+\alpha_1\lambda}. \tag{43}$$
Obviously, (42) has the roots $\xi_1=-d$ and $\xi_2=-h$. If $R_0>1$, we have $g_0(0)=a\mu(m+\rho+\gamma)(1-R_0)<0$ and $\lim_{\xi\to+\infty}g_0(\xi)=+\infty$. Then, there exists $\xi^{\ast}>0$ satisfying $g_0(\xi^{\ast})=0$. In addition, we have $|\arg(\xi^{\ast})|=0<\alpha\pi/2$. Consequently, when $R_0>1$, $E_0$ is unstable.

Theorem 4.
(i)
The infection equilibrium without humoral immunity $E_1$ is globally asymptotically stable if $R_0>1$, $R_1\leq 1$, and
$$R_0\leq 1+\frac{(m+\rho+\gamma)\bigl(\alpha_0 ad\mu(m+\rho)+dk\lambda\gamma\alpha_2\bigr)+k\rho\gamma\alpha_3\lambda^2}{a\rho\mu(m+\rho+\gamma)(\alpha_0 d+\lambda\alpha_1)}. \tag{44}$$(ii)
When $R_1>1$, $E_1$ is unstable.

Proof.
Define a Lyapunov functional as follows:(45)L1t=α0+α2v1α0+α1x1+α2v1+α3x1v1x1Φxx1+l1Φll1+ρα0+α2v12d+m+γα0+α1x1+α2v1+α3x1v1x1x-x1+l-l12+m+ρ+γγy1Φyy1+am+ρ+γkγv1Φvv1+aqm+ρ+γkgγw.Calculating the fractional derivative of L1(t), we get (46)DαL1t=α0+α2v1α0+α1x1+α2v1+α3x1v11-x1xDαx+1-l1lDαl+ρα0+α2v1x-x1+l-l1d+m+γα0+α1x1+α2v1+α3x1v1x1Dαx+Dαl+m+ρ+γγ1-y1yDαy+am+ρ+γkγ1-v1vDαv+aqm+ρ+γkgγw.Using λ=dx1+(m+γ)l1, f(x1,v1)v1=(m+ρ+γ)l1, γl1=ay1, ky1=μv1, and 1-fxi,vi/fx,vi=(α0+α2vi/α0+α1xi+α2vi+α3xivi)1-xi/x∀i∈{1,2}, we obtain (47)DαL1t≤d1-fx1,v1fx,v1x1-x+m+ρ+γl11-fx1,v1fx,v1+vv1fx,vfx,v1+m+ρ+γl11-l1fx,vvlfx1,v1v1+m+ρ+γl11-ly1l1y+m+ρ+γl11-vv1-yv1y1v+ρl-l11-fx1,v1fx,v1-ρα0+α2v1dx-x12+m+γl-l12+d+m+γx-x1l-l1d+m+γα0+α1x1+α2v1+α3x1v1x1+aqhm+ρ+γkgγgv1h-1w.Hence,(48)DαL1t≤-α0+α2v1x-x12xx1α0+α1x1+α2v1+α3x1v1dx1-ρl1+ρl+dρxd+m+γ-ρα0+α2v1m+γl-l12m+ρ+γα0+α1x1+α2v1+α3x1v1x1+m+ρ+γl15-fx1,v1fx,v1-l1fx,vvlfx1,v1v1-ly1l1y-yv1y1v-fx,v1fx,v-m+ρ+γl1α0+α1xα2+α3xv-v12v1α0+α1x+α2v+α3xvα0+α1x+α2v1+α3xv1+aqhm+ρ+γkgγR1-1w.Using the arithmetic-geometric inequality, we have(49)5-fxi,vifx,vi-lifx,vvlfxi,vivi-lyiliy-yviyiv-fx,vifx,v≤0.Since R1≤1, we have DαL1(t)≤0 if dx1≥ρl1. It is easy to see that this condition is equivalent to (44). Furthermore, DαL1(t)=0 if and only if x=x1,l=l1,y=y1,v=v1, and R1-1w=0. We discuss two cases: If R1<1, then w=0. If R1=1, from (1), we get Dαv1=0=ky1-μv1-qv1w, and then w=0. Hence, the largest invariant set of {(x,l,y,v,w)∈R+5:DαL1(t)=0} is the singleton {E1}. By the LaSalle’s invariance principle, E1 is globally asymptotically stable.
At $E_1$, the characteristic equation of (40) is given as follows:
$$(gv_1-h-\xi)\,g_1(\xi)=0, \tag{50}$$
where
$$g_1(\xi)=\begin{vmatrix}
-d-\frac{\partial f}{\partial x}(x_1,v_1)v_1-\xi & \rho & 0 & -\frac{\partial f}{\partial v}(x_1,v_1)v_1-f(x_1,v_1)\\[2pt]
\frac{\partial f}{\partial x}(x_1,v_1)v_1 & -(m+\rho+\gamma)-\xi & 0 & \frac{\partial f}{\partial v}(x_1,v_1)v_1+f(x_1,v_1)\\[2pt]
0 & \gamma & -a-\xi & 0\\
0 & 0 & k & -\mu-\xi
\end{vmatrix}. \tag{51}$$
We can easily see that (50) has the root $\xi_1=gv_1-h$. Then, when $R_1>1$, we have $\xi_1>0$. In this case, $E_1$ is unstable.

Theorem 5.
The infection equilibrium with humoral immunity $E_2$ is globally asymptotically stable if $R_1>1$ and
$$\rho\beta h\leq d(m+\rho+\gamma)(\alpha_0g+\alpha_2h)+\rho\lambda(\alpha_1g+\alpha_3h). \tag{52}$$

Proof.
Consider the following Lyapunov functional:
$$L_2(t)=\frac{\alpha_0+\alpha_2v_2}{\alpha_0+\alpha_1x_2+\alpha_2v_2+\alpha_3x_2v_2}x_2\,\Phi\!\left(\frac{x}{x_2}\right)+l_2\,\Phi\!\left(\frac{l}{l_2}\right)+\frac{\rho(\alpha_0+\alpha_2v_2)}{2(d+m+\gamma)(\alpha_0+\alpha_1x_2+\alpha_2v_2+\alpha_3x_2v_2)x_2}\bigl(x-x_2+l-l_2\bigr)^2+\frac{m+\rho+\gamma}{\gamma}y_2\,\Phi\!\left(\frac{y}{y_2}\right)+\frac{a(m+\rho+\gamma)}{k\gamma}v_2\,\Phi\!\left(\frac{v}{v_2}\right)+\frac{aq(m+\rho+\gamma)}{kg\gamma}w_2\,\Phi\!\left(\frac{w}{w_2}\right). \tag{53}$$

Computing the fractional derivative of L2(t) and using λ=dx2+(m+γ)l2, f(x2,v2)v2=(m+ρ+γ)l2, γl2=ay2, ky2=(μ+qw2)v2, and v2=h/g, we get (54)DαL2t≤d1-fx2,v2fx,v2x2-x+m+ρ+γl21-fx2,v2fx,v2+fx,vvfx,v2v2+m+ρ+γl21-l2fx,vvlfx2,v2v2+m+ρ+γl21-ly2l2y+m+ρ+γl21-vv2-yv2y2v+ρl-l21-fx2,v2fx,v2-ρα0+α2v2dx-x22+m+γl-l22+d+m+γx-x2l-l2d+m+γα0+α1x2+α2v2+α3x2v2x2≤-α0+α2v2x-x22xx2α0+α1x2+α2v2+α3x2v2dx2-ρl2+ρl+dρxd+m+γ-ρα0+α2v2m+γl-l22m+ρ+γα0+α1x2+α2v2+α3x2v2x2+m+ρ+γl25-fx2,v2fx,v2-l2fx,vvlfx2,v2v2-ly2l2y-yv2y2v-fx,v2fx,v-m+ρ+γl2α0+α1xα2+α3xv-v22v2α0+α1x+α2v+α3xvα0+α1x+α2v2+α3xv2. From (49), we have DαL2(t)≤0 when dx2≥ρl2. This condition is equivalent to (52). In addition, DαL2(t)=0 if x=x2, l=l2, y=y2, and v=v2. Further, Dαv2=0=ky2-μv2-qv2w; then w=w2. Consequently, the largest invariant set of {(x,l,y,v,w)∈R+5:DαL2(t)=0} is the singleton {E2}. By LaSalle's invariance principle, E2 is globally asymptotically stable.

It is important to note that when ρ is sufficiently small or γ is sufficiently large, the two conditions (44) and (52) are satisfied. Then, we have the following corollary.

Corollary 6.
Assume that $R_0>1$. When ρ is sufficiently small or γ is sufficiently large, we have the following:(i)
The infection equilibrium without humoral immunity $E_1$ is globally asymptotically stable if $R_1\leq 1$.(ii)
The infection equilibrium with humoral immunity $E_2$ is globally asymptotically stable if $R_1>1$.
## 4. Numerical Simulations
In this section, we apply our theoretical results to HIV infection. First, we take the parameter values as shown in Table 1.

Table 1
Parameter values of system (1).
| Parameter | Value | Parameter | Value | Parameter | Value |
|---|---|---|---|---|---|
| λ | 10 | a | 0.27 | h | 0.2 |
| d | 0.0139 | γ | 0.01 | g | 0.0001 |
| β | 0.00024 | k | 800 | α0 | 1 |
| ρ | 0.01 | μ | 3 | α1 | 0.1 |
| m | 0.0347 | q | 0.01 | α2 | 0.01 |
| | | | | α3 | 0.00001 |

By calculation, we have R0=0.4274≤1. Then system (1) has an infection-free equilibrium E0(719.4245,0,0,0,0). By Theorem 3, the solution of (1) converges to E0 (see Figure 1). Consequently, the virus is cleared and the infection dies out.
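As a quick numerical check, the following sketch recomputes R0 from (24) with the Table 1 values; the variable names are ours.

```python
# Parameters from Table 1.
lam, d, beta, rho, m = 10, 0.0139, 0.00024, 0.01, 0.0347
a, gamma, k, mu = 0.27, 0.01, 800, 3
a0, a1 = 1, 0.1

# R0 = k*beta*lam*gamma / (a*mu*(m+rho+gamma)*(d*a0 + lam*a1)), as in (24).
R0 = k * beta * lam * gamma / (a * mu * (m + rho + gamma) * (d * a0 + lam * a1))
print(round(R0, 4))  # 0.4274, matching the value reported above

# Raising beta to 0.0012, as in the next experiment, scales R0 by 5 to 2.137.
```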
Figure 1

Stability of the infection-free equilibrium E0.

Now, we choose β=0.0012 and keep the other parameter values. Hence, we obtain R0=2.137, R1=0.8334, and
$$1+\frac{(m+\rho+\gamma)\bigl(\alpha_0 ad\mu(m+\rho)+dk\lambda\gamma\alpha_2\bigr)+k\rho\gamma\alpha_3\lambda^2}{a\rho\mu(m+\rho+\gamma)(\alpha_0 d+\lambda\alpha_1)}=2.5934. \tag{55}$$
Consequently, condition (44) is satisfied. Therefore, the infection equilibrium without humoral immunity E1(176.6853,168.7712,6.2508,1666.9,0) is globally asymptotically stable. Figure 2 demonstrates this result. In this case, the infection becomes chronic.

Figure 2
Stability of the infection equilibrium without humoral immunity E1.

Next, we take g=0.0004 and do not change the other parameter values. In this case, we have R1=3.3338, ρβh=0.0000024, and d(m+ρ+γ)(α0g+α2h)+ρλ(α1g+α3h)=0.000006. Hence, condition (52) is satisfied. Consequently, system (1) has an infection equilibrium with humoral immunity E2(423.4261,92.0442,3.4090,500,245.4473), which is globally asymptotically stable. Figure 3 illustrates this result. We can observe that the activation of the humoral immune response increases the healthy cells and decreases the productive infected cells and the viral load to lower levels, but it is not able to eradicate the infection.

Figure 3
Figure 3: Stability of the infection equilibrium with humoral immunity E2.
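The threshold values quoted in this section are easy to reproduce. Below is a minimal sketch, assuming the closed-form expression (24) for R0 and the parameter values of Table 1; the variable names are ours.

```python
# Sketch: reproducing the thresholds of Section 4 from Table 1.
lam, d, m, rho, gam = 10.0, 0.0139, 0.0347, 0.01, 0.01
a, k, mu, h = 0.27, 800.0, 3.0, 0.2
a0, a1, a2, a3 = 1.0, 0.1, 0.01, 0.00001

def R0(beta):  # basic reproduction number of system (1), equation (24)
    return k * beta * lam * gam / (a * mu * (m + rho + gam) * (d * a0 + lam * a1))

print(round(R0(0.00024), 4))   # 0.4274 <= 1: the infection dies out
print(round(R0(0.0012), 3))    # 2.137  > 1: the infection becomes chronic

# Condition (52) with beta = 0.0012 and g = 0.0004:
beta, g = 0.0012, 0.0004
lhs = rho * beta * h                                               # 2.4e-6
rhs = d*(m + rho + gam)*(a0*g + a2*h) + rho*lam*(a1*g + a3*h)      # ~6.0e-6
print(lhs <= rhs)              # True: E2 is globally asymptotically stable
```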
## 5. Conclusion
In the present paper, we have studied the dynamics of a viral infection model by taking into account the memory effect, represented by the Caputo fractional derivative, and the humoral immunity. We have proved that the solutions of the model are nonnegative and bounded, which ensures the well-posedness of the problem. We have shown that the proposed model has three equilibria, namely, the infection-free equilibrium E0, the infection equilibrium without humoral immunity E1, and the infection equilibrium with humoral immunity E2. By constructing suitable Lyapunov functionals, the global stability of these equilibria is fully determined by two threshold parameters, R0 and R1. More precisely, when R0≤1, E0 is globally asymptotically stable, whereas if R0>1, it becomes unstable and another equilibrium point appears, namely E1, which is globally asymptotically stable whenever R1≤1 and condition (44) is satisfied. In the case that R1>1, E1 becomes unstable and there exists another equilibrium point E2 which is globally asymptotically stable when condition (52) is satisfied. In addition, we remarked that when ρ is sufficiently small or γ is sufficiently large, conditions (44) and (52) are verified, and then the global stability of E1 and E2 is characterized only by R0 and R1.
From our theoretical and numerical results, we deduce that the order of the fractional derivative α has no effect on the stability of the equilibria. However, when the value of α decreases (long memory), the solutions of our model converge more rapidly to the steady states (see Figures 1–3). This behavior can be explained by the memory term 1/[Γ(1-α)(t-u)^α] included in the fractional derivative, which represents the time needed for the interaction between cells and viral particles and for the activation of the humoral immune response. In fact, knowledge about the infection and the activation of the humoral immune response at an early stage can help us to control the infection.
---
*Source: 1019242-2018-12-02.xml*
# Binary Representations of Regular Graphs
**Authors:** Yury J. Ionin
**Journal:** International Journal of Combinatorics
(2011)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2011/101928
---
## Abstract
For any 2-distance set X in the n-dimensional binary Hamming space Hn, let ΓX be the graph with X as the vertex set and with two vertices adjacent if and only if the distance between them is the smaller of the two nonzero distances in X.
The binary spherical representation number of a graph Γ, or bsr(Γ), is the least n such that Γ is isomorphic to ΓX, where X is a 2-distance set lying on a sphere in Hn. It is shown that if Γ is a connected regular graph, then bsr(Γ)≥b−m, where b is the order of Γ and m is the multiplicity of the least eigenvalue of Γ, and the case of
equality is characterized. In particular, if Γ is a connected strongly regular graph, then
bsr(Γ)=b−m if and only if Γ is the block graph of a quasisymmetric 2-design. It is also shown that if a connected regular graph is cospectral with a line graph and has the same binary spherical representation number as this line graph, then it is a line graph.
---
## Body
## 1. Introduction
The subject of this paper is the mutual relations between regular and strongly regular graphs, 2-distance sets in binary Hamming spaces, and quasisymmetric 1- and 2-designs.
The following relation between strongly regular graphs and 2-distance sets in Euclidean spaces is well known (cf. [1, Theorem 2.23]): if m is the multiplicity of the least eigenvalue of a connected strongly regular graph Γ of order n, then the vertex set of Γ can be represented as a set of points, lying on a sphere in ℝn-m-1, so that there exist positive real numbers h1<h2 such that the distance between any two distinct vertices is equal to h1 if they are adjacent as vertices of Γ and is equal to h2 otherwise. This result was recently generalized to all connected regular graphs in [2]. It has also been proved in [2] that, given n and m, such a representation of a connected regular graph in ℝn-m-2 is not possible.
The notion of a 2-distance set representing a graph makes sense for any metric space, and the spaces of choice in this paper are the binary Hamming spaces. We will show (Theorem 3.3) that the dimension of a binary Hamming space, in which a connected regular graph Γ can be represented, is at least n-m, where n and m have the same meaning as in the previous paragraph.
It is also well known that the block graph of a quasisymmetric 2-design is strongly regular. However, many strongly regular graphs are not block graphs, and there is no good characterization of the graphs that are block graphs of quasisymmetric 2-designs. The situation changes if we consider the representation of graphs in binary Hamming spaces. We will show (Theorem 4.6) that a connected strongly regular graph admits a representation in the binary Hamming space of the minimal dimension n-m if and only if it is the block graph of a quasisymmetric 2-design.
At the dawn of graph theory there was a short-lived conjecture that a graph is determined by the spectrum of its adjacency matrix. Of course, this is not true (see a very interesting discussion in [3]). However, some classes of graphs can be described by their spectra. In particular, if a connected regular graph has the same spectrum as a line graph, then it is almost always a line graph itself (all exceptions are known). We will show (Corollary 5.7) that if a connected regular graph Γ is cospectral with a line graph L(G) of a graph G and, besides that, the minimal dimension of a binary Hamming space, in which either graph can be represented, is the same for Γ and L(G), then Γ is a line graph.
## 2. Preliminaries
All graphs in this paper are finite and simple, and all incidence structures are without repeated blocks. For a graph Γ, |Γ| denotes the order of Γ, that is, the number of vertices. If x and y are vertices of a graph Γ, then x~y means that x and y are adjacent, while x≁y means that x and y are distinct and nonadjacent. Two graphs are said to be cospectral if their adjacency matrices have the same characteristic polynomial.
Throughout the paper we use I to denote identity matrices and J to denote square matrices with every entry equal to 1. The order of I and J will always be apparent from the context. We denote as 0 and 1 vectors (columns, rows, points) with all entries (coordinates) equal to 0 or all equal to 1, respectively. In examples throughout the paper we will use digits and letters to denote elements of a small set and omit braces and commas when a subset of such a set is presented; for example, we will write 1350b instead of {1,3,5,0,b}.
If n is a positive integer, then [n] denotes the set {1,2,…,n}.
Definition 2.1.
The binary Hamming space Hn consists of all n-tuples a=(a1,a2,…,an) with each ai equal to 0 or 1. When it is convenient, one identifies a with the set {i∈[n]:ai=1}. The distance h(a,b) between a and b=(b1,b2,…,bn)∈Hn is the number of indices i for which ai≠bi. The Euclidean norm of a vector x∈ℝn is denoted as ∥x∥, so, for a,b∈Hn, h(a,b)=∥a-b∥².
A set X⊂Hn is called a 2-distance set if |{h(a,b):a,b∈X,a≠b}|=2.
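As a quick illustration of these definitions, here is a minimal sketch; the helper names hamming and is_two_distance, and the example set, are ours.

```python
# Minimal sketch of Definition 2.1: Hamming distance and the 2-distance check.
from itertools import combinations

def hamming(a, b):
    """Number of coordinates in which the 0/1 tuples a and b differ."""
    return sum(x != y for x, y in zip(a, b))

def is_two_distance(X):
    """|{h(a,b) : a, b in X, a != b}| == 2."""
    return len({hamming(a, b) for a, b in combinations(X, 2)}) == 2

# The six weight-2 points of H_4 lie on the sphere of radius 2 centered at 0
# and form a 2-distance set with nonzero distances 2 and 4.
X = [(1,1,0,0), (1,0,1,0), (1,0,0,1), (0,1,1,0), (0,1,0,1), (0,0,1,1)]
print(is_two_distance(X))  # True
```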
A sphere with center c∈Hn and integer radius k, 1≤k≤n-1, is the set of all points x∈Hn such that h(c,x)=k. Any subset of a sphere (of radius k) is called a spherical set (of radius k).
Remark 2.2.
The sphere of radius k in Hn, centered at a, coincides (as a set) with the sphere of radius n-k centered at the opposite point b=1-a. This allows us to assume, when needed, that the radius of a sphere does not exceed n/2. A sphere of radius k in Hn centered at 0, regarded as a subset of ℝn, is the intersection of the unit cube and the hyperplane x1+x2+⋯+xn=k.
Remark 2.3.
For n≥2, the distance between any two points of a spherical set in Hn is even.
Definition 2.4.
An incidence structure (without repeated blocks) is a pair D=(V,ℬ), where V is a nonempty finite set (of points) and ℬ is a nonempty set of subsets of V (blocks). The cardinality of the intersection of two distinct blocks is called an intersection number of D. An incidence structure is said to be quasisymmetric if it has exactly two distinct intersection numbers. For a nonnegative integer t, an incidence structure D is called a t-design if all blocks of D have the same cardinality and every set of t points is contained in the same number of blocks. A t-design D with a (points versus blocks) incidence matrix N is called nonsquare if N is not a square matrix, and it is called nonsingular if det(NN⊤)≠0. A 2-design is also called a (v,b,r,k,λ)-design, where v is the number of points, b is the number of blocks, r is the replication number, that is, the number of blocks containing any given point, k is the block size, and λ is the number of blocks containing any given pair of points.
With any quasisymmetric incidence structure we associate its block graph.
Definition 2.5.
If D is a quasisymmetric incidence structure with intersection numbers α<β, then the block graph of D is the graph whose vertices are the blocks of D and two vertices are adjacent if and only if the corresponding blocks meet in β points.
Remark 2.6.
If a regular graph, other than a complete graph, is connected, then it has at least three distinct eigenvalues. It is strongly regular if and only if it has exactly three distinct eigenvalues. If D is a quasisymmetric 2-design, then it is nonsquare and its block graph is strongly regular. If D is a quasisymmetric t-design with block size k and intersection numbers α<β, then N⊤N=(k-α)I+(β-α)A+αJ, where N is an incidence matrix of D and A is an adjacency matrix of the block graph of D. If D is a (v,b,r,k,λ)-design, then NN⊤=(r-λ)I+λJ. Therefore, det(NN⊤)=rk(r-λ)^(v-1)≠0, so D is nonsingular. For these and other basic results on designs and regular graphs, see [1] or [4].
Definition 2.7.
Let X={x1,x2,…,xb} be a 2-distance set of cardinality b in Hn, and let h1<h2 be the nonzero distances in X. One denotes as ΓX the graph whose vertex set is X and whose edge set is the set of all pairs {xi,xj} with h(xi,xj)=h1. For i=1,2,…,b, let xi=(xi1,xi2,…,xin) and Bi={j∈[n]:xij=1}, so xi is the characteristic vector of Bi. Let ℬ={B1,B2,…,Bb}. One denotes as DX the incidence structure ([n],ℬ).
Remark 2.8.
If X is a spherical 2-distance set centered at 0, then the incidence structure DX is a quasisymmetric 0-design and ΓX is its block graph.
Proposition 2.9.
Let X be a 2-distance set in Hn, and let h1<h2 be the nonzero distances in X. If the graph ΓX is connected, then h2≤2h1.
Proof.
Suppose h2>2h1. If x, y, and z are distinct vertices of ΓX such that x~y and x~z, then the triangle inequality implies that y~z (indeed, h(y,z)≤h(x,y)+h(x,z)=2h1<h2, so h(y,z)=h1). Therefore, x and all its neighbors form a connected component of ΓX. Since ΓX is not a complete graph, it is not connected; a contradiction.
Definition 2.10.
One will say that a spherical 2-distance set X⊂Hn represents a graph Γ in Hn if Γ is isomorphic to ΓX. The least n for which such a set X exists is called the binary spherical representation number of Γ and is denoted as bsr(Γ).
Proposition 2.11.
Every simple graph Γ, except null graphs and complete graphs, admits a spherical representation in Hn if n is sufficiently large.
Proof.
Let Γ be a noncomplete graph of order b with e≥1 edges, and let N=[nij] be an incidence matrix of Γ. For i=1,2,…,b, let Xi={j∈[e]:nij=1}. Let k=max{|Xi|:1≤i≤b}, and let Y1,Y2,…,Yb be pairwise disjoint subsets of {e+1,e+2,…,e+bk} such that |Yi|=k-|Xi|. For i=1,2,…,b, let xi=(xi1,xi2,…,xi,e+bk)∈He+bk, where xij=1 if and only if j∈Xi∪Yi. Then, for 1≤i<j≤b, the distance between the points xi and xj is equal to 2(k-1) if the ith and jth vertices of Γ are adjacent, and it is equal to 2k otherwise. Since Γ is not a complete graph, the set {x1,x2,…,xb} is a 2-distance set representing Γ in He+bk, and this set lies on a sphere of radius k centered at 0.
If the graph Γ in the above proof is regular, we do not need to add columns to its incidence matrix N.
Proposition 2.12.
If Γ is a noncomplete regular graph with e≥1 edges, then bsr(Γ)≤e.
Theorem 5.1 implies that if Γ is a cycle, then its binary spherical representation number equals the number of edges.
For any graph G, the line graph of G, denoted as L(G), is the graph whose vertex set is the edge set of G; two distinct vertices of L(G) are adjacent if and only if the corresponding edges of G share a vertex. Line graphs are precisely the graphs representable by spherical 2-distance sets of radius 2.
Proposition 2.13.
A graph Γ can be represented in Hn by a spherical 2-distance set of radius 2 if and only if Γ is isomorphic to the line graph of a graph of order n.
Proof.
If Γ=L(G), where G is a graph of order n, then the columns of an incidence matrix of G form a 2-distance subset of Hn of radius 2 representing Γ. Conversely, let X be a 2-distance subset of Hn of radius 2 centered at 0 and representing a graph Γ. Let G be a graph whose incidence matrix coincides with an incidence matrix of DX. Then |G|=n and Γ is isomorphic to L(G).
Remark 2.14.
Let G be a regular graph of degree r, and let X be the set of columns of an incidence matrix N of G. Then DX is a quasisymmetric 1-design (with block size 2 and replication number r) and N is its incidence matrix. If r≥3, this design is nonsquare. The next result (Proposition 2.3 in [5]) yields a necessary and sufficient condition for this 1-design to be nonsingular.
Proposition 2.15.
If N is an incidence matrix of a graph Γ of order n and c is the number of connected components of Γ, then
$$\operatorname{rank}(NN^{\top})=\begin{cases}n,&\text{if }\Gamma\text{ is not a bipartite graph},\\ n-c,&\text{if }\Gamma\text{ is a bipartite graph}.\end{cases} \tag{2.1}$$
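Proposition 2.15 is easy to check numerically. A minimal sketch, assuming numpy; the choice of test graphs and the helper name are ours.

```python
# Sketch verifying (2.1) on an odd cycle (not bipartite) and an even cycle (bipartite).
import numpy as np

def incidence_matrix(n, edges):
    """Vertex-edge incidence matrix N of a graph on vertices 0..n-1."""
    N = np.zeros((n, len(edges)))
    for j, (u, v) in enumerate(edges):
        N[u, j] = N[v, j] = 1
    return N

c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]          # bipartite, c = 1
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # odd cycle, not bipartite

for n, edges in [(4, c4), (5, c5)]:
    N = incidence_matrix(n, edges)
    print(np.linalg.matrix_rank(N @ N.T))      # 3 (= n - c) for C4, 5 (= n) for C5
```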
## 3. Lower Bounds
The main tool in obtaining a lower bound on bsr(Γ) is the following classical theorem of distance geometry.
Definition 3.1.
Let X={x1,x2,…,xb} be a set of b points in ℝn. The Schoenberg matrix of X with respect to a point z∈ℝn is the matrix Sz(X)=[sij] of order b with
$$s_{ij}=\|z-x_i\|^{2}+\|z-x_j\|^{2}-\|x_i-x_j\|^{2}. \tag{3.1}$$
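A quick numerical illustration of this definition, and of the theorem stated next: since (3.1) gives sij=2⟨z-xi,z-xj⟩, the Schoenberg matrix is positive semidefinite of rank at most n. The point set, seed, and function name below are ours.

```python
# Sketch: the Schoenberg matrix of six random points in R^3 is PSD of rank <= 3.
import numpy as np

def schoenberg(X, z):
    """S_z(X)[i,j] = ||z-x_i||^2 + ||z-x_j||^2 - ||x_i-x_j||^2, as in (3.1)."""
    b = len(X)
    S = np.empty((b, b))
    for i in range(b):
        for j in range(b):
            S[i, j] = (np.sum((z - X[i])**2) + np.sum((z - X[j])**2)
                       - np.sum((X[i] - X[j])**2))
    return S

rng = np.random.default_rng(0)
X, z = rng.random((6, 3)), rng.random(3)
S = schoenberg(X, z)
print(np.linalg.eigvalsh(S).min() > -1e-9, np.linalg.matrix_rank(S))  # True 3
```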
Theorem 3.2 (see [6, 7]).
If X is a finite set in ℝn, then, for any z∈ℝn, the Schoenberg matrix Sz(X) is positive semidefinite and rank(Sz(X))≤n.
We will now derive a sharp lower bound on the binary spherical representation number of a connected regular graph.
Theorem 3.3.
Let Γ be a connected regular graph, and let m be the multiplicity of the least eigenvalue of Γ. Then bsr(Γ)≥|Γ|-m. Moreover, bsr(Γ)=|Γ|-m if and only if Γ is the block graph of a nonsquare nonsingular quasisymmetric 1-design.
Proof.
Let bsr(Γ)=n, and let Γ be isomorphic to ΓX, where X is a spherical 2-distance subset of Hn. Let h1<h2 be the nonzero distances in X and k the radius of a sphere in Hn containing X. Without loss of generality, we assume that this sphere is centered at 0. Then z=(k/n,k/n,…,k/n) is the center of a Euclidean sphere containing X. The radius of this sphere is equal to √(k(n-k)/n). Let A be an adjacency matrix of ΓX. Then the matrix
$$S=S_z(X)=h_2I+(h_2-h_1)A+\left(\frac{2k(n-k)}{n}-h_2\right)J \tag{3.2}$$
is the Schoenberg matrix of the set X with respect to z.
Let d be the degree of Γ, let ρ0=d>ρ1>⋯>ρs=ρ be all the distinct eigenvalues of A, and let m0=1,m1,…,ms=m be their respective multiplicities. Then the eigenvalues of S are
$$\sigma_0=\frac{2|\Gamma|k(n-k)}{n}-dh_1-(|\Gamma|-d-1)h_2, \tag{3.3}$$
with 1 as an eigenvector, and, for i=1,2,…,s,
$$\sigma_i=h_2+(h_2-h_1)\rho_i \tag{3.4}$$
(with eigenvectors orthogonal to 1). For 0≤i≤s, the multiplicity of σi is at least mi. (It is greater than mi if σi=σ0, i≠0.)
Theorem 3.2 implies that all eigenvalues of S are nonnegative, so σi>0 for 1≤i≤s-1. Therefore, rank(S)≥m1+m2+⋯+ms-1=|Γ|-m-1. On the other hand, since both X and z lie in the hyperplane x1+x2+⋯+xn=k, Theorem 3.2 implies that rank(S)≤n-1, so n≥|Γ|-m.
Suppose now that n=|Γ|-m. Then rank(S)=|Γ|-m-1, and therefore σs=σ0=0. From σ0=0 we derive
$$2|\Gamma|k(n-k)=n\left(dh_1+(|\Gamma|-d-1)h_2\right). \tag{3.5}$$
The incidence structure DX=([n],ℬ) has n points, |Γ| blocks, all of cardinality k, and two intersection numbers, α=k-h2/2<β=k-h1/2. The graph Γ is the block graph of DX. Using h1=2(k-β) and h2=2(k-α), we transform (3.5) into
$$(\beta-\alpha)d=k\left(\frac{|\Gamma|k}{n}-1\right)-\alpha(|\Gamma|-1). \tag{3.6}$$
For each i∈[n], let ri denote the number of blocks of DX containing i. Fix a block C and count in two ways the pairs (B,i), where B∈ℬ, B≠C, and i∈C∩B:
$$d\beta+(|\Gamma|-d-1)\alpha=\sum_{i\in C}(r_i-1). \tag{3.7}$$
Using this equation and (3.6), we derive
$$\sum_{i\in C}r_i=d\beta+(|\Gamma|-d-1)\alpha+k=\frac{|\Gamma|k^{2}}{n}. \tag{3.8}$$
Therefore,
$$\sum_{C\in\mathcal{B}}\sum_{i\in C}r_i=\frac{|\Gamma|^{2}k^{2}}{n}. \tag{3.9}$$
Since each i∈[n] contributes ri² to the left-hand side of this equation, we obtain
$$\sum_{i=1}^{n}r_i^{2}=\frac{|\Gamma|^{2}k^{2}}{n}. \tag{3.10}$$
On the other hand, counting in two ways the pairs (i,B) with B∈ℬ and i∈B yields
$$\sum_{i=1}^{n}r_i=\sum_{B\in\mathcal{B}}|B|=|\Gamma|k. \tag{3.11}$$
Thus,
$$\left(\frac{1}{n}\sum_{i=1}^{n}r_i\right)^{2}=\frac{1}{n}\sum_{i=1}^{n}r_i^{2}. \tag{3.12}$$
By the Cauchy–Schwarz inequality, equality in (3.12) forces all the ri to be equal; therefore, ri=r=|Γ|k/n for i=1,2,…,n. Thus, DX is a quasisymmetric 1-design. (Note that we have derived this result from (3.5) rather than from the stronger equation rank(S)=|Γ|-m-1.) Since n<|Γ|, the 1-design DX is nonsquare, so we have to show that it is nonsingular.
The incidence matrix N of DX satisfies the equation
$$N^{\top}N=(k-\alpha)I+(\beta-\alpha)A+\alpha J. \tag{3.13}$$
Therefore, the eigenvalues of N⊤N are
$$\tau_0=k-\alpha+(\beta-\alpha)\rho_0+\alpha|\Gamma|,\qquad \tau_i=k-\alpha+(\beta-\alpha)\rho_i\quad(1\le i\le s). \tag{3.14}$$
Since τ0>τ1>⋯>τs and since rank(N⊤N)≤n, we obtain that τs=0 and τi>0 for 0≤i≤s-1. Since the multiplicity of τs is the same as the multiplicity of ρs, we have rank(N⊤N)=n. Therefore, rank(NN⊤)=n, and then det(NN⊤)≠0, that is, DX is nonsingular.
Suppose now that Γ is the block graph of a nonsquare nonsingular quasisymmetric 1-design D with intersection numbers α<β. The design D has fewer points than blocks, so let b be the number of blocks and b-m the number of points. We have to show that m is the multiplicity of the least eigenvalue of Γ and that bsr(Γ)=b-m.
Let N be an incidence matrix of D and X the set of all columns of N regarded as points in Hb-m. Then X is a 2-distance set and D is DX. The set X lies on a sphere of radius k centered at 0, where k is the cardinality of each block of D, and the nonzero distances in X are h1=2(k-β) and h2=2(k-α).
The matrix N satisfies (3.13) with A being an adjacency matrix of Γ. Let ρ0>ρ1>⋯>ρs be all the distinct eigenvalues of A. Then the eigenvalues of N⊤N are given by (3.14). Since N⊤ has more rows than columns, we have τs=0. Since det(NN⊤)≠0, the sum of the multiplicities of the nonzero eigenvalues of N⊤N is b-m, so the multiplicity of τs is equal to m. Therefore, the multiplicity of ρs is equal to m, and then bsr(Γ)≥b-m. Since X is in Hb-m, we have bsr(Γ)=b-m.
It has been shown in the course of this proof that if bsr(Γ)=|Γ|-m, then σ0=0, which implies (3.5), and σs=0, which implies
$$\frac{h_2}{h_1}=\frac{\rho}{\rho+1}. \tag{3.15}$$
In fact, (3.15) must hold whenever bsr(Γ)<|Γ|, because otherwise rank(S)≥|Γ|-1 and then bsr(Γ)≥|Γ|. If bsr(Γ)=|Γ| and (3.15) does not hold, then σ0=0. It has also been shown that if bsr(Γ)=|Γ|-m, then the replication number of the corresponding 1-design is |Γ|k/(b-m). We combine these observations in the following two theorems.
Theorem 3.4.
Let Γ be a connected regular graph of order b and degree d, and let m be the multiplicity of the least eigenvalue ρ of Γ. Let bsr(Γ)=b-m, and let Γ be isomorphic to ΓX, where X is a 2-distance subset of Hb-m lying on a sphere of radius k centered at 0. Let h1<h2 be the nonzero distances in X. Then:
(i) 2bk(b-m-k)=(b-m)(dh1+(b-d-1)h2);
(ii) h2/h1=ρ/(ρ+1);
(iii) DX is a nonsquare nonsingular quasisymmetric 1-design with b-m points, b blocks, block size k, replication number bk/(b-m), and intersection numbers k-h1/2 and k-h2/2.
Theorem 3.5.
Let Γ be a connected regular graph of order b and degree d, and let ρ be the least eigenvalue of Γ. Let bsr(Γ)=n, and let Γ be isomorphic to ΓX, where X is a 2-distance subset of Hn lying on a sphere of radius k. Let h1<h2 be the nonzero distances in X:
(i) if 2bk(n-k)=n(dh1+(b-d-1)h2), then DX is a quasisymmetric 1-design;
(ii) if n<b, then h2/h1=ρ/(ρ+1);
(iii) if n=b and h2/h1≠ρ/(ρ+1), then 2k(b-k)=dh1+(b-d-1)h2.
If h2/h1=ρ/(ρ+1), then ρ is rational, so (ii) implies the following useful result.
Corollary 3.6.
If the least eigenvalue of a connected regular graph Γ is irrational, then bsr(Γ)≥|Γ|.
An infinite family of regular graphs attaining the lower bound of Theorem 3.3 is given in the following example.
Example 3.7.
Let D be a (v,b,r,k,1)-design with b≥v+r and k≥3, and let D′ be an incidence structure obtained by deleting from D one point and all blocks containing this point. Then D′ is a 1-design with v-1 points, b-r>v-1 blocks of cardinality k, replication number r-1, and intersection numbers 0 and 1. Without loss of generality, we assume that the point set of D is [v], the deleted point is v, and the deleted blocks are
(3.16){1,2,…,k-1,v},{k,k+1,…,2k-2,v},…,{v-k+1,v-k+2,…,v}.
Let N be the corresponding incidence matrix of D′. Then NN⊤ is an r×r block matrix of (k-1)×(k-1) blocks with all diagonal blocks equal to (r-1)I and all off-diagonal blocks equal to J. The spectrum of NN⊤ consists of the eigenvalue (r-1)k of multiplicity 1, the eigenvalue r-1 of multiplicity (k-2)r, and the eigenvalue r-k of multiplicity r-1. Therefore, det(NN⊤)≠0; that is, the design D′ is nonsingular. The spectrum of N⊤N is obtained by adjoining the eigenvalue 0 of multiplicity b-r-v+1 to the spectrum of NN⊤. Since N⊤N=kI+A, where A is an adjacency matrix of the block graph Γ of D′, we determine that the multiplicities of the largest and the smallest eigenvalues of A are 1 and b-r-v+1, respectively. Therefore, Γ is a connected regular graph and bsr(Γ)=v-1.
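To make the bound of Theorem 3.3 concrete, here is a small numerical check on the Petersen graph, realized as the Kneser graph on the 2-subsets of [5]; the construction and rounding are ours. (Section 5 notes that the Petersen graph is the block graph of the quasisymmetric (6,10,5,3,2)-design, so the bound b-m=6 is attained.)

```python
# Sketch: least-eigenvalue multiplicity and the bound bsr >= b - m for Petersen.
import numpy as np
from itertools import combinations

V = list(combinations(range(5), 2))        # 10 vertices; adjacent iff disjoint
A = np.array([[1.0 if not set(u) & set(v) else 0.0 for v in V] for u in V])

eig = np.round(np.linalg.eigvalsh(A), 6)   # spectrum: 3, 1 (x5), -2 (x4)
b, m = len(V), int(np.sum(eig == eig.min()))
print(eig.min(), m, b - m)                 # -2.0 4 6, so bsr >= 6
```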
## 4. Strongly Regular Graphs
For strongly regular graphs we first obtain a sharp upper bound for the binary spherical representation number.
Proposition 4.1.
If Γ is a connected strongly regular graph of order n, then bsr(Γ)≤n.
Proof.
Let Γ be an srg(n,d,λ,μ), and let A be an adjacency matrix of Γ. Then A²=(d-μ)I+(λ-μ)A+μJ. Therefore, (A+I)²=(d-μ+1)I+(λ-μ+2)A+μJ. Let X be the set of rows of A+I regarded as points in Hn. Then the distance between two distinct points of X is equal to 2(d-λ-1) if the points correspond to adjacent vertices of Γ; otherwise, it is equal to 2(d-μ+1). Thus, X is a 2-distance set in Hn, lying on a sphere of radius d+1 centered at 0, and, if λ≥μ-1, then Γ is isomorphic to ΓX.
If λ≤μ-1, then let Y be the set of rows of the matrix J-A. The distance between two distinct points of Y is equal to 2(d-μ) if the points correspond to adjacent vertices of Γ; otherwise, it is equal to 2(d-λ). Therefore, Y is a 2-distance set in Hn, lying on a sphere of radius n-d centered at 0, and Γ is isomorphic to ΓY.
This proposition and Corollary 3.6 imply the next result.
Corollary 4.2.
If the least eigenvalue of a strongly regular graph Γ of order n is irrational, then bsr(Γ)=n.
Remark 4.3.
The least eigenvalue of a strongly regular graph is irrational if and only if it is an srg(n,(n-1)/2,(n-5)/4,(n-1)/4), where n≡1 (mod 4) is not a square. A graph with these parameters exists if and only if there exists a conference matrix of order n+1.
Example 4.4.
Let Γ be the complement of the cycle C7. The least eigenvalue of Γ is irrational, so bsr(Γ)≥7. Suppose bsr(Γ)=7, and let Γ be isomorphic to ΓX, where X is a 2-distance subset of H7 with nonzero distances h1<h2, lying on a sphere of radius k centered at 0. Since h1 and h2 are even, h2≤7, and h2≤2h1 (Proposition 2.9), we have h1=2, h2=4 or h1=4, h2=6. In either case, Theorem 3.5(iii) yields an equation without integer solutions. Thus, bsr(Γ)≥8, so the strong regularity in Proposition 4.1 is essential.
There are 167 nonisomorphic strongly regular graphs with parameters (64,18,2,6) [8]. The least eigenvalue of these graphs is −6 of multiplicity 18. Theorem 3.3 and Proposition 4.1 imply that if Γ is any of these 167 graphs, then 46≤bsr(Γ)≤64. Therefore, there are nonisomorphic graphs with these parameters having the same binary spherical representation number.
Also, there are 41 nonisomorphic strongly regular graphs with parameters (29,14,6,7) [8]. The least eigenvalue of these graphs is irrational, so by Corollary 4.2 the binary spherical representation number of all these graphs is 29.
Theorem 3.3 for regular graphs can be sharpened if the graph is strongly regular.
Theorem 4.6.
Let Γ be a connected strongly regular graph of order b, and let m be the multiplicity of the least eigenvalue of Γ. Then bsr(Γ)=b-m if and only if Γ is the block graph of a quasisymmetric 2-design.
Proof.
If Γ is the block graph of a quasisymmetric 2-design D, then Remark 2.6 and Theorem 3.3 imply that bsr(Γ)=b-m.
Suppose now that bsr(Γ)=b-m, and let X be a spherical 2-distance subset of Hb-m representing Γ. Let h1<h2 be the nonzero distances in X and k the radius of the sphere centered at 0 and containing X. Every block of the incidence structure DX=([b-m],ℬ) is of cardinality k, the intersection numbers of D are α=k-h2/2<β=k-h1/2, and the replication number of D is r=bk/(b-m) (Theorem 3.4). The graph Γ is the block graph of D. Let ρ0=d>ρ1>ρ2 be the eigenvalues of Γ. Since Γ is connected, the multiplicity of ρ0 is 1. Since the multiplicity of ρ2 is m, the multiplicity of ρ1 is b-m-1.
Let A be an adjacency matrix of Γ. Theorem 3.4(ii) implies that (β-α)ρ2=(1/2)(h2-h1)ρ2=-(k-α). Since Tr(A)=d+(b-m-1)ρ1+mρ2=0, we use Theorem 3.4(i) to derive that
(4.1)(β-α)ρ1=α-k+r-λ,
where λ=r(k-1)/(b-m-1). Since k<b-m, we have λ<r.
Let N be an incidence matrix of DX. Then
(4.2)N⊤NJ=NN⊤J=krJ,N⊤N=(k-α)I+(β-α)A+αJ.
From these equations we determine the eigenvalues of N⊤N: τ0=kr, τ1=k-α+(β-α)ρ1=r-λ, and τ2=k-α+(β-α)ρ2=0. Their respective multiplicities are 1, b-m-1, and m. Therefore, the eigenvalues of NN⊤ are τ0 of multiplicity 1 and τ1 of multiplicity b-m-1. Since NN⊤J=krJ, the eigenspace E0 of NN⊤ corresponding to the eigenvalue τ0 is generated by 1. Therefore, E1=E0⊥ is the eigenspace corresponding to the eigenvalue τ1. On the other hand, the matrix M=(r-λ)I+λJ has the same eigenvalues with the same respective eigenspaces. Thus, NN⊤=M, and therefore DX is a quasisymmetric 2-design with intersection numbers α and β. The graph Γ is the block graph of this design.
Example 4.7.
The Cocktail Party graph CP(n) has 2n vertices split into n pairs, with two vertices adjacent if and only if they are not in the same pair. It is the block graph of a quasisymmetric 2-design if and only if the design is a Hadamard 3-design with n+1 points (cf. [4, Theorem 8.2.23]). The least eigenvalue of CP(n) is −2 of multiplicity n-1. By Theorem 4.6, bsr(CP(n))≥n+1, and bsr(CP(n))=n+1 if and only if there exists a Hadamard matrix of order n+1. This example shows that it is hard to expect a simple general method for computing the binary spherical representation number of a strongly regular graph.
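The spectral facts used in Example 4.7 are easy to verify directly; below is a minimal sketch (the builder function and vertex pairing are ours).

```python
# Sketch: CP(n) = K_{2n} minus a perfect matching has least eigenvalue -2
# of multiplicity n-1, giving the bound bsr(CP(n)) >= n+1.
import numpy as np

def cocktail_party(n):
    A = np.ones((2 * n, 2 * n)) - np.eye(2 * n)
    for i in range(n):                       # remove the matching {2i, 2i+1}
        A[2 * i, 2 * i + 1] = A[2 * i + 1, 2 * i] = 0
    return A

for n in range(4, 8):
    eig = np.round(np.linalg.eigvalsh(cocktail_party(n)), 6)
    m = int(np.sum(eig == -2.0))
    print(n, m, 2 * n - m)                   # multiplicity n-1, bound n+1
```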
## 5. Line Graphs
In this section we determine the binary spherical representation number for the line graphs of regular graphs. If N is an incidence matrix of a graph G, then N⊤N=2I+A, where A is an adjacency matrix of the line graph Γ=L(G). Let G be connected and have n vertices and e edges. If e>n, then the least eigenvalue of N⊤N is 0, and therefore the least eigenvalue of Γ is ρ=-2. Since the matrices NN⊤ and N⊤N have the same positive eigenvalues, Proposition 2.15 implies that the multiplicity of ρ is equal to e-n if the graph G is not bipartite, and it is equal to e-n+1 if G is a connected bipartite graph. If e=n, then G is a cycle, so Γ=Cn is a cycle of order n too. If n is even, then the least eigenvalue of Cn is −2 of multiplicity 1; if n≥5 is odd, then the least eigenvalue of Cn is irrational. See [9] for details.
Theorem 5.1.
If Γ is the line graph of a connected regular graph of order n≥4, then bsr(Γ)=n.
Proof.
Let Γ be the line graph of a connected regular graph G of order n≥4 and degree d. Then Γ is a connected regular graph of order nd/2 and degree 2d-2. The columns of an incidence matrix of G form a spherical 2-distance set in Hn representing Γ, so bsr(Γ)≤n.
Suppose first that d=2, that is, G is Cn, and that n is odd. Then the least eigenvalue of Γ is irrational. Therefore, bsr(Γ)≥n by Corollary 3.6, so bsr(Γ)=n.
Suppose now that d≥3 and the graph G is not bipartite. From Proposition 2.15, the multiplicity of the least eigenvalue of Γ is nd/2-n, and then Theorem 3.3 implies that bsr(Γ)≥n, so bsr(Γ)=n.
Suppose finally that G is a bipartite graph (this includes the case G=Cn with even n). Then the least eigenvalue of Γ is −2 and its multiplicity is nd/2-n+1. Therefore, by Theorem 3.3, bsr(Γ)≥n-1. Suppose bsr(Γ)=n-1. Theorem 3.4(ii) implies that h2=2h1, and then condition (i) of Theorem 3.4 can be rewritten as
$$nk(n-1-k)=h_1(n-1)(n-2). \tag{5.1}$$
If n is odd, then (n-1)(n-2) and n are relatively prime; if n is even, then (n-1)(n-2)/2 and n/2 are relatively prime. In either case, (n-1)(n-2)/2 divides k(n-1-k). However, k(n-1-k)≤(n-1)²/4<(n-1)(n-2)/2. Therefore, bsr(Γ)=n.
The graph L2(n) is the line graph of the complete bipartite graph with bipartition sets of cardinality n. The following corollary generalizes the well-known result [10] that these graphs are not block graphs of quasisymmetric 2-designs.
Corollary 5.2.
The line graph of a connected regular graph G with more than three vertices is the block graph of a nonsquare nonsingular quasisymmetric 1-design if and only if G is not a cycle and is not a bipartite graph.
Remark 5.3.
If G is a semiregular connected bipartite graph of order n, then the graph L(G) is regular and bsr(L(G))=n or n-1. We do not know of any example where bsr(L(G))=n-1.
There exist regular graphs that are cospectral with a line graph but are not line graphs. The complete list of such graphs is given in the following theorem.
Theorem 5.4 (see [11]).
Let a regular graph Γ be cospectral with the line graph L(G) of a connected graph G. If Γ is not a line graph, then G is a regular 3-connected graph of order 8, or K3,6, or the semiregular bipartite graph of order 9 with 12 edges.
Since bsr(L(G))<10 for every graph G listed in Theorem 5.4, the next theorem implies that if a connected regular graph Γ is cospectral with a line graph L(G) and if bsr(Γ)=bsr(L(G)), then Γ is a line graph. The proof is based on the following theorem of Beineke [12].
Theorem 5.5.
A graph is a line graph if and only if it does not contain as an induced subgraph any of the nine graphs of Figure 1.
Figure 1
Theorem 5.6.
Let the least eigenvalue of a connected regular graph Γ be equal to −2. If bsr(Γ)<10, then Γ is a line graph, or the Petersen graph, or CP(n) with 4≤n≤7.
Proof.
The Petersen graph P is the block graph of the quasisymmetric (6,10,5,3,2)-design, so bsr(P)=6. We also have bsr(CP(7))=8 (Example 4.7). For n=4, 5, and 6, CP(n) is an induced subgraph of CP(7), so bsr(CP(n))≤8.
Let bsr(Γ)=n≤9, and let a 2-distance set X represent Γ in Hn. Let h1<h2 be the nonzero distances in X, and let X lie on a sphere of radius k centered at 0. Let f be an isomorphism from Γ to ΓX. For each vertex x of Γ we regard f(x) as a k-subset of [n].
Suppose Γ is not a line graph. Since the least eigenvalue of Γ is −2, Theorem 3.4 implies that h2=2h1. Since n≤9, we assume that k≤4. Proposition 2.13 implies that k≠2, so k=3 or 4. By Theorem 5.5, Γ contains one of the nine graphs of Figure 1 as an induced subgraph. All subgraphs of Γ considered throughout the proof are assumed to be induced subgraphs.
Case 1 (h1=4; then h2=8, and therefore k=4).
IfΓ contains a coclique xyz of size 3, then |f(x)∪f(y)∪f(z)|=12>n. This rules out subgraphs F1 and F4. If the subgraph induced by Γ on a set xyz of three vertices has only one edge, then |f(x)∪f(y)∪f(z)|=10>n. This rules out subgraphs F2, F5, F6, F7, F8, and F9.
SupposeΓ contains F3 as a subgraph. Suppose also that every subgraph of order 3 of Γ has at least two edges. We assume without loss of generality that f(q)=1234 and f(s)=5678. Let x be a vertex of Γ, x≠q and x≠s. Since the subgraph with the vertex set qsx has at least two edges, x is adjacent to both q and s. Therefore, f(x) is a 4-subset of 12345678 for every vertex x of Γ. This implies that x is not adjacent to at most one other vertex. Since Γ is regular and not complete, Γ is a cocktail party graph CP(m). Therefore, m+1≤bsr(Γ)≤9 and bsr(Γ)≠m+1 for m=8 (Example 4.7). Since CP(2) and CP(3) are line graphs (of C4 and K4, resp.), we have 4≤n≤7.Case 2 (h1=2 and k=3).
SupposeΓ contains F1 as a subgraph. We let f(p)=123, f(q)=124, f(r)=135, and f(s)=236. Since the degree of q in F1 is 1 and the degree of p is 3, Γ has vertices q1 and q2 adjacent to q but not to p. Then f(q1)=146 and f(q2)=245. Similarly, we find vertices r1 and r2 adjacent to r but not to p and vertices s1 and s2 adjacent to s but not to p and assume that f(r1)=345, f(r2)=156, f(s1)=256, and f(s2)=346. The set U of the 10 vertices that we have found is the vertex set of a Petersen subgraph of Γ. The set f(U) consists of ten 3-subsets of 123456, no two of which are disjoint. Therefore, if Γ has a vertex v∉U, then f(v) is disjoint from at least one of the sets f(x), x∈U; a contradiction. Thus, Γ is the Petersen graph.
IfΓ contains F2 as a subgraph, we let f(p)=123 and f(r)=145. Then f(q),f(s),f(t)∈{124,125,134,135}. Since |f(q)∩f(s)|=|f(t)∩f(s)|=1, there is no feasible choice for f(s).
IfΓ contains F3 as a subgraph, we assume that f(p)=123, f(q)=124, and f(s)=135. Then f(r),f(t)∈{125,134}, and therefore |f(r)∩f(t)|≠2.
LetΓ contains F4 as a subgraph. Suppose first that f(p)∩f(q)∩f(s)≠∅. Then we assume that f(p)=123, f(q)=145, f(s)=167, f(t)=124, and f(u)=136, and there is no feasible choice for f(r). Suppose now that f(p)∩f(q)∩f(s)=f(r)∩f(q)∩f(s)=∅. We assume that f(p)=123, f(q)=145, and f(s)=246. Then f(t)=125 or 134. If f(t)=125, then f(u)=234 and f(r)=235. Γ has distinct vertices q1 and q2 adjacent to q but not to r, and we have f(q1)=f(q2)=156; a contradiction. If f(t)=134, then f(u)=126 and f(r)=136. Γ has distinct vertices q1 and q2 adjacent to q but not to t, and we have again f(q1)=f(q2)=156.
IfΓ contains F5 as a subgraph, we let f(p)=123, f(q)=145, and f(t)=124. Then f(r)=234 or 126, and in either case f(s)=f(u).
IfΓ contains F6 as a subgraph, we let f(p)=123, f(q)=124, and f(s)=135. Then we assume that f(r)=125. This implies f(u)=235 and f(t)=126. Γ has distinct vertices q1 and q2 adjacent to q but not to r. Then f(q1)=134 and f(q2)=234, so both q1 and q2 are adjacent to p. Since q1≁u and q2~u, Γ has three distinct vertices ui adjacent to u but not to p. However, f(ui)=245 for all these vertices.
IfΓ contains F8 as a subgraph, we let f(p)=123 and f(q)=124 and assume that f(s)=156 or 345. If f(s)=156, then f(r)=135, f(u)=136, and there is no feasible choice for f(t). If f(s)=345, then f(r)=135, f(u)=235, and again there is no feasible choice for f(t).
SupposeΓ contains F7 or F9 as a subgraph. We let f(p)=123, f(q)=124, and f(t)=135. Then f(r)=146 or 245 and f(s)=156 or 345, respectively. In either case, there is no feasible choice for f(u).Case 3 (h1=2 and k=4).
SupposeΓ contains F1 as a subgraph. We let f(p)=1234, f(q)=1235, f(r)=1246, and f(s)=1347. Γ has vertices q1 and q2 adjacent to q but not to p. Then f(q1)=1257 and f(q2)=1356. Similarly, we find vertices r1 and r2 adjacent to r but not to p and vertices s1 and s2 adjacent to s but not to p and assume that f(r1)=1267, f(r2)=1456, f(s1)=1367, and f(s2)=1457. The ten vertices that we have found form a Petersen subgraph of Γ. If 1∈f(x) for every vertex x of Γ, then we delete 1 from each f(x) and refer to Case 2. Suppose that there is a vertex x with 1∉f(x). Then the 4-set f(x) must meet each of the sets 234,235,246,347,257,356,267,456,367, and 457 in at least two points. Thus, there is no feasible choice for f(x).
If Γ contains F2 as a subgraph, we let f(p)=1234, f(q)=1235, and f(r)=1256. Then f(s)=1246 and there is no feasible choice for f(t).
If Γ contains F3 as a subgraph, we assume that f(p)=1234, f(q)=1235, and f(s)=1246. Then f(r),f(t)∈{1236,1245}, and therefore |f(r)∩f(t)|=2; a contradiction.
If Γ contains F4 as a subgraph, we let f(p)=1234, f(t)=1235, f(u)=1246, f(r)=1236, and f(q)=1257. Then f(s)=1456. Let s1 and s2 be vertices of Γ adjacent to s but not to u. Then f(s1)=1345 and f(s2)=1356, and there is no feasible choice for f(v), where v~q and v≁t.
If Γ contains F5 as a subgraph, we let f(p)=1234, f(q)=1256, and f(t)=1235. Then we may assume that either f(s)=1346 and f(u)=2346 or f(s)=1247 and f(u)=1248. In either case, there is no feasible choice for f(r).
Suppose Γ contains F6 as a subgraph. We let f(p)=1234 and f(r)=1235. Then f(q),f(s),f(t),f(u)∈{1245,1345,2345}∪{123α:α≥6}. Since the subgraph induced on qstu is triangle-free, we let f(q)=1236, f(t)=1237, f(s)=1245, and f(u)=1345. Let q1 and q2 be distinct vertices of Γ adjacent to q but not to p. Then f(qi)∈{1256,1356,2356}, so both q1 and q2 are adjacent to r but not to t. Therefore, Γ has at least four vertices ti adjacent to t but not to r. However, f(ti)∈{1247,1347,2347}; a contradiction.
If Γ contains F8 as a subgraph, we let f(p)=1234, f(q)=1235, and f(r)=1246. Then (f(t),f(u))∈{(1236,1247),(1245,1346),(1245,2346)}, and there is no feasible choice for f(s).
Suppose Γ contains F7 or F9 as a subgraph. We let f(p)=1234, f(q)=1235, and f(t)=1246. Then f(r)∈{1356,2356}∪{125α:α≥7} and f(s)∈{1456,2456}∪{126α:α≥7}, so we assume that (f(r),f(s))∈{(1257,1267),(1356,1456),(2356,2456)}. In each case, there is no feasible choice for f(u).
Corollary 5.7.
Let Γ be a connected regular graph cospectral with a line graph L(G) of a connected graph G. If bsr(Γ)=bsr(L(G)), then Γ is a line graph.
Proof.
If G is not an exceptional graph from Theorem 5.4, then Γ is a line graph by that theorem. If G is one of the exceptional graphs, then bsr(L(G))<10 and L(G) has more edges than vertices. Therefore, the least eigenvalue of L(G) is −2. Since the Petersen graph and graphs CP(n) are not exceptional, Theorem 5.6 implies that Γ is a line graph.
Example 5.8.
Let X be the set of all points of H5 with even sum of coordinates. It is a 2-distance set and ΓX is the complement of the Clebsch graph. The least eigenvalue of ΓX is −2, and, since it is not a line graph, Theorem 5.6 implies that bsr(ΓX)≥10 (so X is not spherical). Let Y be the set of all points (y1,y2,…,y10)∈H10 such that ∑i=15yi is even and yi+yi+5=1 for i=1,2,3,4,5. Then Y is a spherical 2-distance set and ΓY is isomorphic to ΓX. Thus, bsr(ΓX)=10.
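Both halves of this example lend themselves to direct computation; a verification sketch (not part of the source) checking that every point of X has exactly ten neighbours at distance 2, consistent with the complement of the Clebsch graph, and that Y lies on a sphere of radius 5 with nonzero distances 4 and 8:

```python
from itertools import product

# X: even-weight points of H5; adjacency = the smaller distance, 2.
X = [v for v in product((0, 1), repeat=5) if sum(v) % 2 == 0]
dist = lambda a, b: sum(x != y for x, y in zip(a, b))
# Each point has exactly ten neighbours at distance 2 (10-regular).
assert all(sum(dist(a, b) == 2 for b in X if b != a) == 10 for a in X)

# Y: the spherical model in H10, with y_{i+5} = 1 - y_i.
Y = [v + tuple(1 - x for x in v) for v in product((0, 1), repeat=5)
     if sum(v) % 2 == 0]
assert all(sum(y) == 5 for y in Y)   # Y lies on a sphere of radius 5
assert {dist(a, b) for a in Y for b in Y if a != b} == {4, 8}
```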
Example 5.9.
The Shrikhande graph is cospectral with L2(4), and the three Chang graphs are cospectral with T(8), the line graph of K8, so we have examples of cospectral strongly regular graphs with distinct binary spherical representation numbers. It can be shown that the binary spherical representation number of the Shrikhande graph is 12.
---
*Source: 101928-2011-10-11.xml*
# Study on the Multitarget Mechanism and Active Compounds of Essential Oil from Artemisia argyi Treating Pressure Injuries Based on Network Pharmacology
**Authors:** Shu-ting Lu; Lu-lu Tang; Ling-han Zhou; Ying-tao Lai; Lan-xing Liu; Yifan Duan
**Journal:** Evidence-Based Complementary and Alternative Medicine
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1019289
---
## Abstract
To comprehensively explore the multitarget mechanism and key active compounds of Artemisia argyi essential oil (AAEO) in the treatment of pressure injuries (PIs), we analyzed the biological functions and pathways involved in the intersection targets of AAEO and PIs based on network pharmacology, and finally verified the affinity between AAEO active compounds and core targets by molecular docking. We first screened 54 effective components according to relative content and biological activity. In total, 103 targets related to active compounds of AAEO and 2760 targets associated with PIs were obtained, and 50 key targets were found to overlap using Venny 2.1.0. The key targets-compounds network was constructed with the STRING database and Cytoscape 3.7.2 software. GO analysis from Metascape shows that the results are mainly enriched in biological processes, including adrenergic receptor activity, neurotransmitter clearance, and the neurotransmitter metabolic process. KEGG analysis with the DAVID and KOBAS websites shows that the key targets can act on PIs through pathways in cancer, the PI3K-Akt signaling pathway, human immunodeficiency virus 1 infection, the MAPK signaling pathway, the Wnt signaling pathway, etc. In addition, molecular docking results from the CB-Dock server indicated that the active compounds of AAEO docked well with the top 10 key targets. In conclusion, the potential targets and regulatory molecular mechanisms of AAEO in the treatment of PIs were analyzed by network pharmacology and molecular docking. AAEO can treat PIs through the synergistic effect of multiple components, targets, and pathways, providing a theoretical basis and a new direction for further study.
---
## Body
## 1. Introduction
Pressure injuries (PIs), also named pressure ulcers, refer to localized injuries of the skin and/or underlying subcutaneous soft tissue, usually occurring over bony prominences or at sites in contact with medical devices [1]. PIs are refractory, have a high incidence, and are costly to treat [2, 3]. Once infected, they can readily lead to sepsis and death [4]. At present, the treatment of PIs mainly includes drug therapy [5], dressing therapy [6], stem cell factor therapy [7], and negative pressure wound therapy [8]. There are as yet no definitively effective measures; expert consensus holds that prevention and early treatment are crucial [9].

Artemisia argyi (AA), which is widely distributed in China and other Asian countries, has been used as a traditional medicine or food supplement for hundreds of years [10]. AA is the dried leaf of Artemisia argyi (Levl.) et Van.; the herb has a spicy, bitter flavor and warm properties, enters the channels of the liver and kidney, and functions to resolve blood stasis, disperse cold, and relieve pain [11, 12]. AA is rich in volatile essential oils (AAEO), such as eucalyptol, camphor, and borneol, with extensive pharmacological effects: counteracting oxidative stress [13], resisting pathogens [14], suppressing inflammatory responses [15], and activating immunomodulatory responses [16].

AA often treats diseases in the form of moxibustion, a critical intervention in traditional Chinese medicine (TCM) in which Artemisia argyi is usually the main raw material [17]. Although the mechanism of moxibustion is uncertain, the thermal effect and moxa smoke may play a synergistic role in the treatment of diseases [18, 19]. The fumigation and heating produced by moxibustion help promote the wound healing of PIs, and the pharmacological effects of moxa smoke deserve particular attention. By searching the relevant literature, we found that moxa smoke and AAEO share about 80% of the same compounds. Moreover, in view of the increasing concern about the toxicity of moxa smoke to the cardiovascular and respiratory systems, AAEO is safer.

Given the complex chemical composition of AAEO, much remains unknown about which components exert efficacy after entering the human body and through which mechanisms. It is therefore necessary to comprehensively explore the mechanism of AAEO in the treatment of PIs.

Network pharmacology is a new discipline, emerging in recent years, that combines holistic network analysis with pharmacological effects [20]. With the development of bioinformatics and chemoinformatics, network pharmacology has become a new method to study the mechanisms of traditional drugs and to discover potential bioactive components effectively and systematically [21]. Network pharmacology explores the relationship between drugs and diseases from a holistic perspective and, by screening disease-related targets and pathways across a large number of databases, is widely used in TCM-related fields, providing new ideas for the study of complex Chinese medicine systems [20, 22, 23].
Molecular docking, a new technology for drug molecule screening, pairs ligands and receptors one-to-one according to the “lock-key principle.” Computer-aided high-throughput screening of drug molecules is realized by studying the geometric and energy matching between protein macromolecular receptors and small drug molecules, and the mechanism of the drug molecules is further predicted, improving the rigor, accuracy, sensitivity, and predictability of drug molecule screening [24].

To the best of our knowledge, our study is the first to apply network pharmacology methods to explore the biological effects of the active compounds in AAEO and their multitarget mechanism in the treatment of PIs. In our study, TNF, PTGS2, IL6, IL1β, NR3C1, CASP3, TP53, PGR, REN, and NOS2 emerged as potential receptor targets, many of them inflammatory proteins. The top three docking targets are PTGS2 (prostaglandin-endoperoxide synthase 2), TP53 (tumor protein p53), and PGR (progesterone receptor). PTGS2, also known as COX-2, is an important inflammatory mediator present from the early stage of inflammation through the whole process of its formation [25]. It is upregulated by various stimuli and participates in various pathological processes closely related to inflammation and to tumor occurrence and development [26, 27]. TP53 and PGR are tumor suppressor proteins that usually serve as biomarkers and prognostic predictors of cancers [28–31]. Recent studies have shown that TP53 plays an important role in regulating signaling pathways to maintain the health and function of skeletal muscle cells: it can improve cell survival by participating in activation that extends the time available for cell repair, and it prevents abnormal cell proliferation through the initiation of DNA fragmentation-induced apoptosis under increased cell stress [32].
## 2. Methods
### 2.1. Active Compounds of AAEO Database Building and Screening
Over 200 components of AAEO can be detected with current technology, but more than 90 of them are commonly reported as active, so we took 94 components as candidate active compounds [33, 34]. Fifty-four compounds were then retained after screening. The inclusion criteria were as follows: a relative content >0.1% in recent GC-MS quantitative analyses (hydrodistillation) of AAEO [14, 35, 36]; inclusion in the TCMSP [37] (https://tcmspw.com) and PubChem [38] (https://pubchem.ncbi.nlm.nih.gov/) databases; and the existence of relevant targets.
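As an illustration, the screen amounts to a simple conjunctive filter; a minimal sketch in which the field names are ours, A1's relative content is taken from Table 1, and the second record is purely hypothetical:

```python
# Illustrative inclusion screen for Section 2.1 (not the study's code).
compounds = [
    {"id": "A1", "name": "1,8-Cineole", "content_pct": 20.91,
     "in_dbs": True, "has_targets": True},        # kept (see Table 1)
    {"id": "X", "name": "trace component", "content_pct": 0.05,
     "in_dbs": True, "has_targets": True},        # hypothetical, filtered out
]

kept = [c["id"] for c in compounds
        if c["content_pct"] > 0.1 and c["in_dbs"] and c["has_targets"]]
print(kept)  # ['A1']
```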
### 2.2. Targets Fishing
Target information for the 54 potential compounds was obtained from TCMSP and reconfirmed with DrugBank [39] (https://www.drugbank.ca) and PharmMapper [40] (https://www.lilab-ecust.cn/pharmmapper/). Next, the targets were entered into UniProt (https://www.uniprot.org/) with the species set to “Homo sapiens” to obtain the corresponding official gene symbols.

The GeneCards (https://www.genecards.org/), OMIM (https://omim.org/), and DrugBank (https://go.drugbank.com/) databases were used to screen targets related to PIs, with “Pressure Ulcers,” “Bedsore,” “Pressure Sore,” and “pressure injury” as search keywords. The obtained targets were merged and deduplicated. Finally, the intersection of the two target sets was computed with Venny 2.1.0 (https://bioinfogp.cnb.csic.es/tools/venny/), yielding 50 overlapping targets.
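The overlap step is a plain set intersection; a minimal sketch with placeholder gene symbols (the actual lists contain 103 and 2760 entries and intersect in 50):

```python
# Placeholder gene symbols; the real lists hold 103 AAEO-related targets
# and 2760 PI-related targets, intersecting in 50 genes.
aaeo_targets = {"PTGS2", "TP53", "PGR", "TNF", "NR3C1", "ADRB2"}
pi_targets = {"PTGS2", "TP53", "TNF", "IL6", "IL1B", "CASP3"}

overlap = sorted(aaeo_targets & pi_targets)
print(len(overlap), overlap)  # 3 ['PTGS2', 'TNF', 'TP53']
```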
### 2.3. PPI Analysis and Compounds-Targets Network Construction
PPI analysis of the overlapping targets was carried out in STRING 11.0 (https://www.string-db.org/). Proteins disconnected from the rest of the network and interactions with a combined score <0.4 were removed [41]. The PPI network was visualized with Cytoscape 3.7.2 software [42]; core network calculations were then performed with the Cytoscape plug-in module MCODE, with the degree threshold set to 100, the node score threshold to 0.2, the K value to 2, and the maximum depth to 100 [43].
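The filtering logic can be sketched as follows (the edges and combined scores are illustrative; the study used the STRING and Cytoscape interfaces rather than code):

```python
import networkx as nx

# Toy STRING-style edges: (protein A, protein B, combined score).
edges = [("TNF", "IL6", 0.95), ("TNF", "PTGS2", 0.88),
         ("IL6", "IL1B", 0.91), ("TP53", "CASP3", 0.35)]

g = nx.Graph()
g.add_edges_from((a, b) for a, b, score in edges if score >= 0.4)
g.remove_nodes_from(list(nx.isolates(g)))  # drop disconnected proteins

# A degree ranking approximates the "hub" view used to pick key targets.
print(sorted(g.degree, key=lambda kv: kv[1], reverse=True))
```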
### 2.4. Gene Ontology (GO) Analysis
The overlapping targets were imported into Metascape [44] (https://metascape.org/gp/index.html) for GO analysis. The specific steps were as follows: input the gene IDs, select “Homo sapiens” as the species, click “Custom Analysis,” and then run GO Molecular Functions, GO Biological Processes, and GO Cellular Components in turn [44]. Finally, Bioinformatics (https://www.bioinformatics.com.cn/) was used to visualize the results.
### 2.5. Kyoto Encyclopedia of Genes and Genomes (KEGG) Pathways Analysis
The 50 overlapping targets were converted from gene symbols to ENTREZ_GENE IDs in the DAVID database (https://david.ncifcrf.gov/tools.jsp), and the IDs were input into KOBAS (https://kobas.cbi.pku.edu.cn/) for KEGG pathway analysis [45, 46]. KEGG pathways with P values <0.01 were selected [47].
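The selection step is a significance filter followed by ranking; a sketch in which only the first three pathway names come from the reported results and all P values are invented for illustration:

```python
# Pathway names from Section 3.5; P values here are made up solely to
# illustrate the P < 0.01 cutoff and top-20 ranking.
pathways = [
    ("Pathways in cancer", 1.2e-8),
    ("PI3K-Akt signaling pathway", 4.5e-7),
    ("MAPK signaling pathway", 8.0e-6),
    ("Hypothetical borderline pathway", 0.03),
]

significant = [(name, p) for name, p in pathways if p < 0.01]
top20 = sorted(significant, key=lambda item: item[1])[:20]
print([name for name, _ in top20])
```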
### 2.6. Molecular Docking
In silico methods are alternatives to experimental approaches for screening essential oil compounds for potential bioactivity; for example, docking evaluates in silico the ability of EOs to interact with molecular targets, with the advantages of being cheaper and less time-consuming. We selected the top 10 core targets and, as ligands, the 7 compounds with the highest relative content for molecular docking. Protein structures in PDB format were obtained from the Protein Data Bank (https://www.rcsb.org), and ligand files in mol2 format were obtained from PubChem (https://pubchem.ncbi.nlm.nih.gov/) [48]; both were used exactly as obtained from the databases. Molecular docking was carried out in CB-Dock (https://cao.labshare.cn/cb-dock/). The CB-Dock server is a user-friendly blind-docking web server developed by Dr. Liu’s research team. It uses a novel curvature-based cavity detection approach, and the popular docking program AutoDock Vina performs the docking [49]. The reported success rate of this tool is more than 70%, outperforming state-of-the-art blind-docking tools. The downloaded files were input into CB-Dock; the style and color of ligand and receptor were set the same as those of Dr. Tao [50]. The RMSD between each pair of the two structures must be less than 2 angstroms [51].
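For reference, the RMSD acceptance criterion compares matched atom coordinates of two poses; a minimal sketch with toy coordinates (illustrative only; CB-Dock reports this value itself):

```python
import numpy as np

def rmsd(a: np.ndarray, b: np.ndarray) -> float:
    """Root-mean-square deviation between two N x 3 coordinate arrays
    whose atoms are already matched one-to-one."""
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

# Toy poses of a two-atom fragment, coordinates in angstroms.
pose_a = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
pose_b = np.array([[0.1, 0.0, 0.0], [1.4, 0.2, 0.0]])

assert rmsd(pose_a, pose_b) < 2.0  # acceptance threshold used in the study
```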
## 3. Results
### 3.1. Compounds of AAEO and Targets Related to Active Compounds
A total of 54 active compounds that met the criteria were finally collected. The basic information of the 54 obtained compounds is shown in Table 1.
Table 1
The basic information of potential compounds of AAEO.
| No. | Molecule name | CAS | Molecular formula | Relative content (%) | References |
| --- | --- | --- | --- | --- | --- |
| A1 | 1,8-Cineole | 470-82-6 | C10H18O | 20.91 | Guan et al. [14] |
| A2 | Caryophyllene | 87-44-5 | C15H24 | 7.50 | Guan et al. [14] |
| A3 | (-)-Camphor | 76-22-2 | C10H16O | 5.57 | Guan et al. [14] |
| A4 | Neointermedeol | 5945-72-2 | C15H26O | 9.65 | Guan et al. [14] |
| A5 | Caryophyllene oxide | 1139-30-6 | C15H24O | 8.71 | Guan et al. [14] |
| A6 | (-)-Borneol | 464-45-9 | C10H18O | 16.35 | Guan et al. [14] |
| A7 | D-Carvone | 5948/4/9 | C10H16O | 0.25 | Guan et al. [14] |
| A8 | Bornyl acetate | 76-49-3 | C12H20O2 | 0.24 | Guan et al. [14] |
| A9 | 4-Terpineol | 562-74-3 | C10H18O | 5.47 | Guan et al. [14] |
| A10 | Sabinene | 10408-16-9 | C10H16 | 3.36 | Guan et al. [14] |
| A11 | α-Thujone | 546-80-5 | C10H16O | 14.55 | Guan et al. [14] |
| A12 | α-Humulene | 6753-98-6 | C15H24 | 2.24 | Guan et al. [14] |
| A13 | Eugenol | 97-53-0 | C10H12O2 | 0.56 | Gu et al. [36] |
| A14 | cis-Carveol | 1197-06-4 | C10H16O | 1.40 | Guan et al. [14] |
| A15 | Germacrene D | 23986-74-5 | C15H24 | 0.55 | Guan et al. [14] |
| A16 | Terpinolene | 586-62-9 | C10H16 | 0.15 | Guan et al. [14] |
| A17 | Cymene | 527-84-4 | C10H14 | 0.32 | Guan et al. [14] |
| A18 | α-Terpineol | 10482-56-1 | C10H18O | 3.62 | Guan et al. [14] |
| A19 | cis-Carveol | 1197-06-4 | C10H16O | 1.40 | Guan et al. [14] |
| A20 | Espatulenol | 6750-60-3 | C15H24O | 1.51 | Guan et al. [14] |
| A21 | γ-Elemene | 515-13-9 | C15H24 | 0.12 | Gu et al. [36] |
| A22 | α-Pinene | 2437-95-8 | C10H16 | 3.84 | Dai et al. [35] |
| A23 | Piperitone | 89-81-6 | C10H16O | 0.42 | Guan et al. [14] |
| A24 | (-)-Camphene | 5794/3/6 | C10H16 | 1.83 | Dai et al. [35] |
| A25 | Isoborneol | 124-76-5 | C10H18O | 0.63 | Dai et al. [35] |
| A26 | cis-β-Farnesene | 18794-84-8 | C15H24 | 0.11 | Dai et al. [35] |
| A27 | β-Caryophyllene | 87-44-5 | C15H24 | 13.64 | Guan et al. [14] |
| A28 | γ-Terpinene | 99-85-4 | C10H16 | 0.24 | Guan et al. [14] |
| A29 | Spathulenol | 4221-98-1 | C15H24O | 0.82 | Dai et al. [35] |
| A30 | Diisooctyl phthalate | 27554-26-3 | C24H38O4 | 0.14 | Dai et al. [35] |
| A31 | β-Pinene | 127-91-3 | C10H16 | 3.05 | Dai et al. [35] |
| A32 | Hexahydrofarnesyl acetone | 502-69-2 | C18H36O | 0.77 | Dai et al. [35] |
| A33 | Tricyclene | 508-32-7 | C10H16 | 0.12 | Gu et al. [36] |
| A34 | Terpinene | 99-86-5 | C10H16 | 2.26 | Gu et al. [36] |
| A35 | Dihydroactinidiolide | 15356-74-8 | C11H16O2 | 0.21 | Dai et al. [35] |
| A36 | Cyclohexadiene | 4221-98-1 | C10H16 | 0.77 | Dai et al. [35] |
| A37 | n-Hexadecanoic acid | 57-10-3 | C16H32O2 | 0.22 | Dai et al. [35] |
| A38 | Terpinyl acetate | 58206-95-4 | C12H20O2 | 0.27 | Dai et al. [35] |
| A39 | Diisobutyl phthalate | 84-69-5 | C16H22O4 | 0.14 | Dai et al. [35] |
| A40 | Myrtenol | 19894-97-4 | C10H16O | 0.77 | Dai et al. [35] |
| A41 | Carvacrol | 499-75-2 | C10H14O | 0.55 | Dai et al. [35] |
| A42 | Curcumene | 4176-17-4 | C15H22 | 1.06 | Dai et al. [35] |
| A43 | trans-Carveol | 2102-58-1 | C10H16O | 1.17 | Dai et al. [35] |
| A44 | (+)-Limonene | 5989-27-5 | C10H16 | 0.39 | Dai et al. [35] |
| A45 | L-Carvone | 6485-40-1 | C10H14O | 0.11 | Dai et al. [35] |
| A46 | cis-β-Terpineol | 7299-40-3 | C10H18O | 6.61 | Dai et al. [35] |
| A47 | cis-Piperitol | 16721-38-3 | C10H18O | 3.66 | Dai et al. [35] |
| A48 | Nerolidol | 7212-44-4 | C15H26O | 0.59 | Dai et al. [35] |
| A49 | cis-Jasmon | 488-10-8 | C11H16O | 0.42 | Dai et al. [35] |
| A50 | α-Caryophyllene | 6753-98-6 | C15H24 | 0.37 | Dai et al. [35] |
| A51 | Terpinen-4-ol | 2438-10-0 | C10H16O | 11.09 | Dai et al. [35] |
| A52 | (5R)-5-Isopropenyl-2-methyl-2-cyclohexen-1-ol | 99-48-9 | C10H16O | 0.12 | Dai et al. [35] |
| A53 | Oct-1-en-3-ol | 3391-86-4 | C8H16O | 2.57 | Dai et al. [35] |
| A54 | α-Phellandrene | 99-86-5 | C10H16 | 1.66 | Dai et al. [35] |
### 3.2. Targets’ Intersection and PPI Network Construction
103 AAEO compound-related targets were retrieved from TCMSP and converted into official gene symbols according to the UniProt database. Moreover, 2760 PIs-related targets were retrieved from the GeneCard, OMIM, and DrugBank databases. Finally, 50 targets were obtained by intersecting the two target sets (Figure 1); the PPIs of the 50 overlapping targets are shown in Figure 2.
Figure 1
Venn diagram of targets’ intersection of AAEO and PIs.
Figure 2
PPI network diagram. Protein-protein interactions (P>0.7) of 50 overlapping targets.
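The 50 overlapping targets are simply the set intersection of the two target lists. A minimal Python sketch of this step; the gene symbols below are hypothetical excerpts, while the real lists contained 103 and 2760 entries, respectively:

```python
# Hypothetical excerpts of the two target lists; the full lists held
# 103 AAEO-related and 2760 PIs-related gene symbols.
aaeo_targets = {"TNF", "PTGS2", "IL6", "NR3C1", "ESR1"}
pis_targets = {"TNF", "PTGS2", "IL6", "TP53", "CASP3", "VEGFA"}

overlap = sorted(aaeo_targets & pis_targets)
print(f"{len(overlap)} overlapping targets: {overlap}")
# In the study this intersection yielded the 50 targets of Figure 1.
```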
### 3.3. Active Compounds and Overlapping Targets Network Construction
The compounds-overlapping targets network involved 104 nodes and 441 edges. The results reflect the complex multicomponent, multitarget mechanism of treating diseases. Moreover, a core network of 15 targets was calculated by MCODE (Figure 3).
Figure 3
Compounds-overlapping targets network: the green circular nodes arranged in a square grid on the right represent 52 potential compounds (2 compounds, A33 and A53, have no associated targets), and the circular nodes with graded color on the left represent the 50 overlapping targets of AAEO and PIs. A larger and more deeply colored node indicates a greater degree.
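The degree ranking that the node sizes encode can be reproduced with a standard graph library. A minimal sketch using networkx, with a few hypothetical compound-target edges standing in for the study's 441 edges:

```python
import networkx as nx

# Hypothetical bipartite edges (compound -> target); the real network
# had 104 nodes and 441 edges.
edges = [
    ("A27", "TNF"), ("A27", "PTGS2"), ("A27", "TP53"),
    ("A1", "PTGS2"), ("A4", "TNF"), ("A4", "PGR"), ("A3", "TP53"),
]

graph = nx.Graph()
graph.add_edges_from(edges)

# Rank targets by degree, mirroring "larger node = greater degree".
targets = {target for _, target in edges}
for target in sorted(targets, key=graph.degree, reverse=True):
    print(target, graph.degree(target))
```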
### 3.4. GO Analysis of Targets’ Intersection
GO analysis was mainly focused on biological processes, with a total of 3269 enrichment results, involving adrenergic receptor activity, nuclear receptor activity, and aspartic-type endopeptidase activity. The top 10 GO functional annotations for BP, CC, and MF are shown in Figure 4.
Figure 4
GO analysis. The top 10 GO functional annotations of BP, CC, and MF are shown in green (biological process), orange (cellular component), and light purple (molecular function), respectively.
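Enrichment tools such as Metascape rank GO terms with an over-representation test; the core of that calculation is a hypergeometric (one-sided Fisher) P value. A minimal sketch with hypothetical counts, not values from the study:

```python
from scipy.stats import hypergeom

# Hypothetical counts: 20000 background genes, 150 annotated with a GO
# term, 50 genes in our target list, 6 of which carry the annotation.
total_genes, annotated, selected, hits = 20000, 150, 50, 6

# P(X >= hits): chance of drawing at least this many annotated genes.
p_value = hypergeom.sf(hits - 1, total_genes, annotated, selected)
print(f"enrichment P value = {p_value:.2e}")
```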
### 3.5. KEGG Pathways of Targets’ Intersection
The KEGG enrichment results involved 128 pathways, including the pathway in cancer, PI3K-Akt signaling pathway, human immunodeficiency virus 1 infection, MAPK signaling pathway, and Wnt signaling pathway. The top 20 pathways were selected by cluster analysis and P value (Figure 5).
Figure 5
Top 20 enriched KEGG pathways. Each bubble represents an enriched pathway, with bubble sizes ranging from small to large; bubbles are colored by −log(P value), so the redder the bubble, the smaller the P value.
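Selecting the top pathways and deriving the color scale in Figure 5 amounts to sorting by P value and taking −log10. A minimal pandas sketch with hypothetical pathway names and P values (the real analysis produced 128 rows):

```python
import numpy as np
import pandas as pd

# Hypothetical enrichment output; the real analysis produced 128 rows.
results = pd.DataFrame({
    "pathway": ["PI3K-Akt signaling", "MAPK signaling",
                "Wnt signaling", "Pathways in cancer"],
    "p_value": [2.1e-9, 4.5e-7, 3.0e-4, 8.9e-10],
})

results["neg_log10_p"] = -np.log10(results["p_value"])
top = results.sort_values("p_value").head(20)  # top 20, as in the paper
print(top)
```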
### 3.6. Compound-Target Docking
The 10 key targets, TNF, PTGS2, IL6, IL1β, NR3C1, CASP3, TP53, PGR, REN, and NOS2, were docked with the top 7 compounds: β-caryophyllene (A27), 1,8-cineole (A1), terpinen-4-ol (A51), neointermedeol (A4), α-thujone (A11), borneol (A6), and camphor (A3). Generally, the Vina score is negative; the lower the score, the better the binding activity between ligand and protein. The top five Vina scores and docking cavity sizes from the obtained results were selected as representative [50]. The results indicated that the top 7 active compounds of AAEO had a good affinity to the key targets, and the RMSD of each docked target-compound pair was less than 2 angstroms (Tables 2 and 3). The top 3 compounds (A4, neointermedeol; A27, β-caryophyllene; and A3, camphor) and proteins (PTGS2, PGR, and TP53) with better binding affinities are shown in Figures 6–8.
Table 2
Vina score of compound-target docking (unit: kcal/mol).
| Target | A27 | A1 | A51 | A4 | A11 | A6 | A3 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| TNF | −6 | −5.3 | −6.6 | −7.8 | −5.4 | −5.6 | −6.8 |
| PTGS2 | −6.6 | −7.1 | −6.8 | −7.2 | −6.4 | −6.2 | −7.3 |
| IL6 | −6.3 | −5.5 | −6.7 | −6.3 | −5.5 | −5.2 | −6.9 |
| IL1B | −6.8 | −5.6 | −5.5 | −7 | −5.3 | −5.6 | −5.9 |
| NR3C1 | −5.5 | −5.7 | −6.3 | −6.1 | −6.2 | −5.1 | −5.1 |
| CASP3 | −7 | −5.9 | −5.9 | −6.7 | −5.9 | −5.4 | −5.7 |
| TP53 | −7.4 | −5.8 | −6 | −7.1 | −6.1 | −5.8 | −7.5 |
| PGR | −7.8 | −6.7 | −6.4 | −7.6 | −6.6 | −6.2 | −6.5 |
| REN | −7.9 | −6.2 | −5.9 | −7.8 | −6.2 | −6.2 | −6.4 |
| NOS2 | −6.9 | −5.4 | −7 | −7.5 | −6.4 | −5.3 | −5.5 |
Table 3
Docking parameters.
Columns for each ligand entry: cavity size; center (x, y, z); size (x, y, z); RMSD.
TNF (1D0G): A27 330292081818180.000; A1 791938471616160.098; A51 330292081717170.096; A4 330292081818180.000; A11 45872355163516280.585; A6 791938471616160.640; A3 16204534151828180.000
PTGS2 (1EQG): A27 3720948341893535350.548; A1 180973231952426280.104; A51 358922282032629240.103; A4 358922282032629240.000; A11 180973231952426280.591; A6 180973231952426280.645; A3 358922282032629240.000
IL6 (4O9H): A27 533−2017271818180.000; A1 533−2017271616160.103; A51 533−201727171710.103; A4 533−2017271818180.000; A11 533−2017271616160.590; A6 533−2017271616160.664; A3 533−2017271818180.000
IL1β (3POK): A27 987−15−17−81818180.000; A1 2542145261818180.105; A51 987−15−17−82317170.103; A4 987−15−17−81818180.000; A11 199−255−71616160.592; A6 199−255−71616160.646; A3 987−15−17−81818180.000
NR3C1 (1LAT): A27 21723138812832350.000; A1 21723138812832350.000; A51 1532738921717170.000; A4 21723138812832350.000; A11 21723138812832350.000; A6 21723138812832350.000; A3 21723138812832350.000
CASP3 (5JFT): A27 183434−251831180.000; A1 183434−251731170.104; A51 183434−251731170.103; A4 183434−251831180.000; A11 183434−251731171.153; A6 183434−251631160.646; A3 183434−251631160.592
TP53 (6WQX): A27 5969363−122635310.000; A1 16442318123016160.272; A51 8740−10491717170.427; A4 5969363−122635310.000; A11 16442318123016161.153; A6 16442318123016160.586; A3 8740−10491818181.127
PGR (1A28): A27 6174334301818180.000; A1 6174334301616160.000; A51 631235731717170.000; A4 4212310601818180.000; A11 6174334301717170.000; A6 6174334301616160.000; A3 6174334301616160.000
REN (3OWN): A27 118429−14−303535330.000; A1 191020−1−182616160.107; A51 183434−251731170.106; A4 191020−1−182618180.000; A11 1682−10−28−371723171.156; A6 1682−10−28−371623220.648; A3 118429−14−303535330.594
NOS2 (1M7Z): A27 3650533113027300.000; A1 3650533113027300.103; A51 3650533113027300.102; A4 3650533113027300.000; A11 3650533113027301.152; A6 3650533113027300.645; A3 3650533113027300.590
Figure 6
(a–c) Docking results of compound A4 (neointermedeol) and PTGS2 (1EQG), PGR (1A28), and TP53 (6WQX), respectively.
Figure 7
(a–c) Docking results of compound A27 (β-caryophyllene) and PTGS2 (1EQG), PGR (1A28), and TP53 (6WQX), respectively.
Figure 8
(a–c) Docking results of compound A3 (camphor) and PTGS2 (1EQG), PGR (1A28), and TP53 (6WQX), respectively.
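The strong binders highlighted in the Discussion (Vina score ≤ −7 kcal/mol) can be read off Table 2 programmatically. A minimal pandas sketch over the score matrix above, with the numbers transcribed from Table 2:

```python
import pandas as pd

# Vina scores (kcal/mol) from Table 2; rows are the 10 key targets,
# columns the 7 most abundant compounds.
scores = pd.DataFrame(
    {
        "A27": [-6.0, -6.6, -6.3, -6.8, -5.5, -7.0, -7.4, -7.8, -7.9, -6.9],
        "A1": [-5.3, -7.1, -5.5, -5.6, -5.7, -5.9, -5.8, -6.7, -6.2, -5.4],
        "A51": [-6.6, -6.8, -6.7, -5.5, -6.3, -5.9, -6.0, -6.4, -5.9, -7.0],
        "A4": [-7.8, -7.2, -6.3, -7.0, -6.1, -6.7, -7.1, -7.6, -7.8, -7.5],
        "A11": [-5.4, -6.4, -5.5, -5.3, -6.2, -5.9, -6.1, -6.6, -6.2, -6.4],
        "A6": [-5.6, -6.2, -5.2, -5.6, -5.1, -5.4, -5.8, -6.2, -6.2, -5.3],
        "A3": [-6.8, -7.3, -6.9, -5.9, -5.1, -5.7, -7.5, -6.5, -6.4, -5.5],
    },
    index=["TNF", "PTGS2", "IL6", "IL1B", "NR3C1",
           "CASP3", "TP53", "PGR", "REN", "NOS2"],
)

flat = scores.stack()        # one row per target-compound pair
strong = flat[flat <= -7.0]  # strong binders (<= -7 kcal/mol)
print(len(strong))           # 15, matching the count in the Discussion
print(strong.sort_values())  # smallest first: REN-A27 at -7.9, then ties
```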
## 4. Discussion
Artemisia argyi, whose dried leaf is known as Ai Ye and has multiple biological activities, is widely used to treat inflammatory diseases such as eczema, dermatitis, arthritis, allergic asthma, and colitis [52]. The pharmacological mechanisms of AAEO associated with PIs are uncertain. Our study is the first to use network pharmacology to discover the potential targets and regulatory molecular mechanisms of AAEO in PIs treatment. As a result, we identified 54 compounds as the main active components and obtained 50 key targets, with enriched pathways including the pathway in cancer, PI3K-Akt signaling pathway, human immunodeficiency virus 1 infection, MAPK signaling pathway, and Wnt signaling pathway, demonstrating the multitarget and multipathway character of TCM in treating diseases.

Over 200 components of AAEO can be detected by gas chromatography-mass spectrometry (GC-MS) [34], mainly including terpenoids, ketones (aldehydes), alcohols (phenols), acids (esters), alkanes (alkenes), and other chemical constituents. In our study, β-caryophyllene, 1,8-cineole, terpinen-4-ol, neointermedeol, α-thujone, borneol, and camphor were the 7 components with the highest relative content [14, 34–36]. 1,8-Cineole, camphor, and borneol accounted for the largest proportion of AAEO [53]. In lipopolysaccharide- (LPS-) induced cell and mouse inflammation experiments, 1,8-cineole alleviated LPS-induced vascular endothelial cell injury, markedly inhibited the production of inflammatory mediators, increased the release of the anti-inflammatory factor IL10, and improved inflammatory symptoms [54]. Borneol significantly decreased the auricular swelling rate and pain threshold of rats by activating the p38-COX-2-PGE2 signaling pathway, showing significant analgesic and anti-inflammatory effects in PDT of acne [55]. Numerous investigations have shown that essential oils of several species containing camphor as the major component exhibit antimicrobial activity [56–59]. Also, the application of camphor to the skin was proved to increase local blood flow in the skin and muscle, induce both cold and warm sensations, and improve blood circulation [60]. More noteworthy is that the top three compounds by molecular docking score were neointermedeol, β-caryophyllene, and camphor. Neointermedeol has been shown to have antioxidant, antibacterial, and other biological activities [61, 62]. Recent studies have shown that caryophyllene can protect animal cells and reduce proinflammatory mediators such as TNF-α, IL-1β, IL-6, and NF-κB, thereby improving the symptoms of inflammation and oxidative stress [63, 64].

The core network calculated by MCODE had 15 targets, mostly related to inflammation, oxidative stress, and apoptosis. TNF, IL6, IL-1β, and PTGS2 participate in regulating the inflammatory cascade reaction [65–68] and can be inhibited at different levels by AAEO. TP53, BAX, and CASP3 negatively regulate the apoptotic process and cell protection [69, 70]. KEGG pathway enrichment analysis mainly involved the PI3K-Akt signaling pathway, human immunodeficiency virus 1 infection, human T-cell leukemia virus 1 infection, MAPK signaling pathway, and Wnt signaling pathway. Studies have found that the PI3K-Akt pathway plays a great role in antiapoptosis and angiogenesis.
The PI3K-Akt pathway phosphorylates Akt. Phosphorylated Akt first activates the downstream factors Bad and Caspase-9 to exert an antiapoptotic effect and promote angiogenesis [71]; it then further regulates eNOS, which can promote the generation of NO [72], provide oxygen and nutrients for tissue recovery, and mediate skin injury repair. Ischemia-reperfusion is recognized as the mechanism of PIs; the process includes oxidative stress, excessive release of oxygen free radicals, apoptosis, and activation of inflammatory cytokines [73]. The prediction results of our network pharmacology analysis are largely consistent with the process of ischemia-reperfusion. The MAPK signaling pathway is involved in the repair of PIs: it increases the expression of Ras, c-Raf, MEK1, p-MEK1 protein, p-ERK1 protein, and MEK1 mRNA, promotes the proliferation of vascular endothelial cells, and accelerates microvascular regeneration and remodeling [74]. Further studies have shown that the repair of pressure ulcers is highly correlated with the Wnt/β-catenin signaling pathway, which regulates the proliferation and differentiation of epithelial cells, hair follicles, and sebaceous glands [75, 76].

The above arguments support the accuracy of this network pharmacology prediction. Besides, the docking results showed that all selected core protein-ligand pairs had good affinity (Vina score ≤ −5 kcal/mol), and 15 docking scores were ≤ −7 kcal/mol, indicating strong binding affinity of the compounds to the docking proteins [77]. The RMSD for each target protein is less than 2 Å, which indicates that the docking method and parameter settings are reasonable and can be used for subsequent docking with components [78].

In addition, a study showed that AAEO dose-dependently inhibits inflammatory mediators such as NO, PGE2, TNF-α, IL-6, IL-10, IFN-β, and MCP-1 [79]. In an experiment with AAEO in anti-inflammatory and blood stasis animal models, the effect of the lowest dose given by skin administration (0.25 mL/kg) was equivalent to that of oral administration of the middle dose (0.50 mL/kg) [80]. PTGS2, which had the highest docking scores, is a biomarker of ferroptosis; it can inhibit the expression of inflammatory factors and apoptosis [81–83]. Future research can explore the route of administration, dosage, and the ferroptosis mechanism pathway.

The limitation of this study is that we have not conducted clinical or animal experiments for verification; further studies will validate the predicted potential key targets and pathways and explore the mechanism of the effective components of the essential oil from Artemisia argyi in preventing and treating PIs by combining molecular biology and pathophysiology.
## 5. Conclusion
In conclusion, the potential targets and regulatory molecular mechanisms of AAEO in the treatment of PIs were analyzed by network pharmacology and molecular docking. In total, 54 active components and 50 potential targets were screened, mainly involving the PI3K-Akt signaling pathway, pathway in cancer, human immunodeficiency virus 1 infection, MAPK signaling pathway, and Wnt signaling pathway, revealing that AAEO may play a role in the treatment of PIs by reducing inflammation and inhibiting apoptosis and oxidative stress, with the characteristics of multitarget and multipathway action. Our study provides a basis for the mechanism and further research directions of AAEO in treating PIs by combining literature research, network analysis, and molecular docking.
---
*Source: 1019289-2022-01-19.xml*
## Abstract
In order to comprehensively explore multitarget mechanism and key active compounds ofArtemisia argyi essential oil (AAEO) in the treatment of pressure injuries (PIs), we analyzed the biological functions and pathways involved in the intersection targets of AAEO and PIs based on network pharmacology, and the affinity of AAEO active compounds and core targets was verified by molecular docking finally. In our study, we first screened 54 effective components according to the relative content and biological activity. In total, 103 targets related to active compounds of AAEO and 2760 targets associated with PIs were obtained, respectively, and 50 key targets were overlapped by Venny 2.1.0. The construction of key targets-compounds network was achieved by the STRING database and Cytoscape 3.7.2 software. GO analysis from Matespace shows that GO results are mainly enriched in biological processes, including adrenergic receptor activity, neurotransmitter clearance, and neurotransmitter metabolic process. KEGG analysis by the David and Kobas website shows that the key targets can achieve the treatment on PIs through a pathway in cancer, PI3K-Akt signaling pathway, human immunodeficiency virus 1 infection, MAPK signaling pathway, Wnt signaling pathway, etc. In addition, molecular docking results from the CB-Dock server indicated that active compounds of AAEO had good activity docking with the first 10 key targets. In conclusion, the potential targets and regulatory molecular mechanisms of AAEO in the treatment of PIs were analyzed by network pharmacology and molecular docking. AAEO can cure PIs through the synergistic effect of multicomponent, multitarget, and multipathway, providing a theoretical basis and new direction for further study.
---
## Body
## 1. Introduction
Pressure injuries (PIs), also named pressure ulcers, refer to localized injuries occurring in the skin and/or potential subcutaneous soft tissue, usually occurring in bone bulges or in contact with medical facilities [1]. PIs have the characteristics of refractory, high incidence, and high treatment cost [2, 3]. Once infected, it is easy to cause sepsis and death [4]. At present, the treatment of PIs mainly includes drug therapy [5], dressing therapy [6], stem cell factor therapy [7], and negative pressure wound therapy [8]. There are no effective measures yet; expert consensus believes that prevention and early treatment are crucial [9].TheArtemisia argyi (AA), which is widely distributed in China and other Asian countries, has been used as traditional medicine or food supplement for hundreds of years [10]. AA is the dried leaf of Artemisia argyi (Levl.) et Van., the herb with a spicy, bitter flavor and warm properties, enters into the channels of liver and kidney, and functions on resolving blood stasis, dispersing cold and relieving pain [11, 12]. AA is rich in volatile essential oils (AAEO), such as eucalyptol, camphor, and borneol, with extensive pharmacological effects of antioxidative stress [13], resisting pathogens [14], suppressing inflammatory responses [15], and activating immunomodulatory responses [16].AA often treats diseases in the form of moxibustion; moxibustion is a critical intervention in traditional Chinese medicine (TCM).Artemisia argyi is usually the main raw material [17]. Although the mechanism of moxibustion is uncertain, the thermal effect and moxa smoke may play a synergistic role in the treatment of diseases [18, 19]. The fumigation and heating effects produced by moxibustion have played a certain role in promoting the wound healing of PIs, and the pharmacological effects of moxa smoke need to be paid special attention. Nevertheless, we found that moxa smoke and AAEO have 80% of the same compounds by searching the relevant literature. Also, in view of the increasing emphasis on the toxicity of moxa smoke to cardiovascular and respiratory systems, AAEO is safer.In view of the complex chemical compounds of AAEO, the chemical components and the corresponding mechanism of action that play the efficacy after entering the human body include a lot of unknown information. Therefore, it is necessary to comprehensively explore the mechanism of AAEO in the treatment of PIs.Network pharmacology is a new discipline emerging in recent years that combines the overall network analysis and pharmacological effects [20]. With the development of bioinformatics and chemical informatics, network pharmacology has become a new method to study the mechanism of traditional drugs and discover potential bioactive components effectively and systematically [21]. Network pharmacology explores the relationship between drugs and diseases from a holistic perspective and, through a large number of databases screening drug treatment of diseases related targets and pathways, is widely used in TCM-related fields, providing new ideas for the study of complex Chinese medicine system [20, 22, 23]. 
Molecular docking, as a new technology for drug molecular screening, utilizes one-to-one pairs of ligands and receptors according to the “lock-key principle,” the computer-aided high-throughput screening of drug molecules was realized by studying the geometric matching and energy matching between protein macromolecular receptors and small drug molecules, and the mechanism of drug molecules was further predicted to improve the scientificity, accuracy, sensitivity, and predictability of drug molecule screening [24].For all we know, our study is first time applied network pharmacology methods to explore the biological effect of active compounds in AAEO and the multitarget mechanism of active compounds in the treatment of PIs. In our study, TNF, PTGS2, IL6, IL1β, NR3C1, CASP3, TP53, PGR, REN, and NOS2 could be the potential receptor targets, involving many inflammatory proteins. The top three molecular docking points are PTGS2 (prostaglandin-endoperoxide synthase 2), TP53 (tumor protein p53), and PGR (progesterone receptor). PTGS2, also known as COX-2, as an important inflammatory mediator, exists in the early stage of inflammation to the whole process of inflammation formation [25]. It is upregulated when stimulated by various stimuli and participates in various pathological processes, closely related to inflammation, tumor occurrence, and development [26, 27]. TP53 and PGR are tumor suppressor proteins, being a biomarker and prognostic predictor of cancers usually [28–31]. Recent studies have shown that TP53 plays an important role in regulating signaling pathways to maintain the health and function of skeletal muscle cells. It can improve cell survival rate by participating in the activation to increase the repair time of cells and prevent abnormal cell proliferation through the initiation of DNA fragmentation-induced apoptosis to promote the increase of cell stress level [32].
## 2. Methods
### 2.1. Active Compounds of AAEO Database Building and Screening
Over 200 components of AAEO can be detected by current technology, but more than 90 of them are common active, so we use 94 components as active compounds [33, 34]. Fifty-four compounds were screened by criteria. Finally, the inclusion criteria were as follows: the compounds with relative content >0.1% from works of literature of GS-MC quantitative analysis (hydrodistillation) of AAEO in recent years [14, 35, 36], compounds included in TCMSP [37] (https://tcmspw.com) and PubChem database [38] (https://pubchem.ncbi.nlm.nih.gov/), and compounds with relevant targets.
### 2.2. Targets Fishing
The targets information identifying 54 potential compounds were attained on TCMSP and were reconfirmed by DrugBank [39] (https://www.drugbank.ca) and Pharmmapper [40] (https://www.lilab-ecust.cn/pharmmapper/). Next, the targets were entered into UniProt (https://www.uniprot.org/); the species selected was “Homo sapiens”; transformed gene symbols were obtained finally.GeneCard (https://www.genecards.org/), OMIM (https://omim.org/) and DrugBank (https://go.drugbank.com/) database were used to screen relative targets of PIs. “Pressure Ulcers,” “Bedsore,” “Pressure Sore,” and “pressure injury” were keywords to search targets related to PIs. The obtained targets were integrated and eliminated duplication. Finally, the intersection targets were obtained on Venny 2.1.0 (https://bioinfogp.cnb.csic.es/tools/venny/). At last, 50 overlapping targets were obtained.
### 2.3. PPI Analysis and Compounds-Targets Network Construction
PPI analysis of the overlapping targets was carried out in the STRING 11.0 (https://www.string-db.org/). Protein with disconnected other protein and a combined score <0.4 was removed [41]. The information of the PPI network was visualized by Cytoscape 3.7.2 software [42]; then, core network calculations were performed by the Cytoscape plug-in module, MCODE, the degree of freedom threshold was set as 100, the node scoring threshold was 0.2, the K value was 2, and the maximum depth was 100 [43].
### 2.4. Gene Ontology (GO) Analysis
The overlapping targets were imported into Matescape [44] (https://metascape.org/gp/index.html) to carry out GO analysis. The specific steps were as follows: input the gene ID, the parameter selected was “Homo sapiens,” click “custom analysis,” and click GO Molecular Functions, GO Biological Processes, and GO Cellular Components in turn for analysis [44]. Finally, Bioinformatics (https://www.bioinformatics.com.cn/) was used to acquire the visualization of the results.
### 2.5. Kyoto Encyclopedia of Genes and Genomes (KEGG) Pathways Analysis
50 overlapping targets were converted from gene symbol to ENTRZ_GENE ID in David Database (https://david.ncifcrf.gov/tools.jsp), and the ENTRZ_GENE ID was input into Kobas (https://kobas.cbi.pku.edu.cn/) for KEGG pathways analysis [45, 46]. KEGG pathways with P values <0.01 were selected [47].
### 2.6. Molecular Docking
In silico methods are alternatives to experimental approaches to screen for potential bioactivity of compounds of essential oil compounds; for example, docking evaluated in silico the ability of EOs to interact with molecular targets with advantages of being less time-consuming and cheap. We selected the top 10 core targets and got the ligand with relative content of the first 7 for molecular docking; the PDB formats of proteins were obtained from the protein database (https://www.rcsb.org) and ligand files in mol2 formats from PubChem (https://pubchem.ncbi.nlm.nih.gov/) [48]; both of them were used in the same way they were obtained from the databases. Molecular docking was carried out in CB-Dock (https://cao.labshare.cn/cb-dock/). CB-Dock server is a user-friendly blind docking network server developed by Dr. Liu’s research team. It uses a novel curvature-based cavity detection approach, and Autodock Vina, the popular docking program, is used for docking [49]. The success rate of this tool was more than 70%, which outperformed the state-of-the-art blind docking tools. The downloaded formats files were input into CB-Dock; the style and color of ligand and receptor were set the same as those of Dr. Tao [50]. The RMSD between each pair of the two structures must be less than 2 angstroms [51].
## 2.1. Active Compounds of AAEO Database Building and Screening
Over 200 components of AAEO can be detected by current technology, but more than 90 of them are common active, so we use 94 components as active compounds [33, 34]. Fifty-four compounds were screened by criteria. Finally, the inclusion criteria were as follows: the compounds with relative content >0.1% from works of literature of GS-MC quantitative analysis (hydrodistillation) of AAEO in recent years [14, 35, 36], compounds included in TCMSP [37] (https://tcmspw.com) and PubChem database [38] (https://pubchem.ncbi.nlm.nih.gov/), and compounds with relevant targets.
## 2.2. Targets Fishing
The targets information identifying 54 potential compounds were attained on TCMSP and were reconfirmed by DrugBank [39] (https://www.drugbank.ca) and Pharmmapper [40] (https://www.lilab-ecust.cn/pharmmapper/). Next, the targets were entered into UniProt (https://www.uniprot.org/); the species selected was “Homo sapiens”; transformed gene symbols were obtained finally.GeneCard (https://www.genecards.org/), OMIM (https://omim.org/) and DrugBank (https://go.drugbank.com/) database were used to screen relative targets of PIs. “Pressure Ulcers,” “Bedsore,” “Pressure Sore,” and “pressure injury” were keywords to search targets related to PIs. The obtained targets were integrated and eliminated duplication. Finally, the intersection targets were obtained on Venny 2.1.0 (https://bioinfogp.cnb.csic.es/tools/venny/). At last, 50 overlapping targets were obtained.
## 2.3. PPI Analysis and Compounds-Targets Network Construction
PPI analysis of the overlapping targets was carried out in the STRING 11.0 (https://www.string-db.org/). Protein with disconnected other protein and a combined score <0.4 was removed [41]. The information of the PPI network was visualized by Cytoscape 3.7.2 software [42]; then, core network calculations were performed by the Cytoscape plug-in module, MCODE, the degree of freedom threshold was set as 100, the node scoring threshold was 0.2, the K value was 2, and the maximum depth was 100 [43].
## 2.4. Gene Ontology (GO) Analysis
The overlapping targets were imported into Matescape [44] (https://metascape.org/gp/index.html) to carry out GO analysis. The specific steps were as follows: input the gene ID, the parameter selected was “Homo sapiens,” click “custom analysis,” and click GO Molecular Functions, GO Biological Processes, and GO Cellular Components in turn for analysis [44]. Finally, Bioinformatics (https://www.bioinformatics.com.cn/) was used to acquire the visualization of the results.
## 2.5. Kyoto Encyclopedia of Genes and Genomes (KEGG) Pathways Analysis
50 overlapping targets were converted from gene symbol to ENTRZ_GENE ID in David Database (https://david.ncifcrf.gov/tools.jsp), and the ENTRZ_GENE ID was input into Kobas (https://kobas.cbi.pku.edu.cn/) for KEGG pathways analysis [45, 46]. KEGG pathways with P values <0.01 were selected [47].
## 2.6. Molecular Docking
In silico methods are alternatives to experimental approaches to screen for potential bioactivity of compounds of essential oil compounds; for example, docking evaluated in silico the ability of EOs to interact with molecular targets with advantages of being less time-consuming and cheap. We selected the top 10 core targets and got the ligand with relative content of the first 7 for molecular docking; the PDB formats of proteins were obtained from the protein database (https://www.rcsb.org) and ligand files in mol2 formats from PubChem (https://pubchem.ncbi.nlm.nih.gov/) [48]; both of them were used in the same way they were obtained from the databases. Molecular docking was carried out in CB-Dock (https://cao.labshare.cn/cb-dock/). CB-Dock server is a user-friendly blind docking network server developed by Dr. Liu’s research team. It uses a novel curvature-based cavity detection approach, and Autodock Vina, the popular docking program, is used for docking [49]. The success rate of this tool was more than 70%, which outperformed the state-of-the-art blind docking tools. The downloaded formats files were input into CB-Dock; the style and color of ligand and receptor were set the same as those of Dr. Tao [50]. The RMSD between each pair of the two structures must be less than 2 angstroms [51].
## 3. Results
### 3.1. Compounds of AAEO and Targets Related to Active Compounds
A total of 54 active compounds that met the criteria were finally collected. The basic information of 54 obtained compounds is shown in Table1.Table 1
The basic information of potential compounds of AAEO.
No.Molecule nameCASMolecular formulaRelative content (%)ReferencesA11,8-Cineole470-82-6C10H18O20.91Guan et al. [14]A2Caryophyllene87-44-5C15H247.50Guan et al. [14]A3(-)-Camphor76-22-2C10H16O5.57Guan et al. [14]A4Neointermedeol5945-72-2C15H26O9.65Guan et al. [14]A5Caryophyllene oxide1139-30-6C15H24O8.71Guan et al. [14]A6(-)-Borneol464-45-9C10H18O16.35Guan et al. [14]A7D-Carvone5948/4/9C10H16O0.25Guan et al. [14]A8Bornyl acetate76-49-3C12H20O20.24Guan et al. [14]A94-Terpineol562-74-3C10H18O5.47Guan et al. [14]A10Sabinene10408-16-9C10H163.36Guan et al. [14]A11α-Thujone546-80-5C10H16O14.55Guan et al. [14]A12α-Humulene6753-98-6C15H242.24Guan et al. [14]A13Eugenol97-53-0C10H12O20.56Gu et al. [36]A14cis-Carveol1197-06-4C10H16O1.40Guan et al. [14]A15Germacrene D23986-74-5C15H240.55Guan et al. [14]A16Terpinolene586-62-9C10H160.15Guan et al. [14]A17Cymene527-84-4C10H140.32Guan et al. [14]A18α-Terpineol10482-56-1C10H18O3.62Guan et al. [14]A19cis-Carveol1197-06-4C10H16O1.40Guan et al. [14]A20Espatulenol6750-60-3C15H24O1.51Guan et al. [14]A21γ-Elemene515-13-9C15H240.12Gu et al. [36]A22α-Pinene2437-95-8C10H163.84Dai et al. [35]A23Piperitone89-81-6C10H16O0.42Guan et al. [14]A24(-)-Camphene5794/3/6C10H161.83Dai et al. [35]A25Isoborneol124-76-5C10H18O0.63Dai et al. [35]A26cis-β-Farnesene18794-84-8C15H240.11Dai et al. [35]A27β-Caryophyllene87-44-5C15H2413.64Guan et al. [14]A28γ-Terpinene99-85-4C10H160.24Guan et al. [14]A29Spathulenol4221-98-1C15H24O0.82Dai et al. [35]A30Diisooctyl phthalate27554-26-3C24H38O40.14Dai et al. [35]A31β-Pinene127-91-3C10H163.05Dai et al. [35]A32Hexahydrofarnesyl acetone502-69-2C18H36O0.77Dai et al. [35]A33Tricyclene508-32-7C10H160.12Gu et al. [36]A34Terpinene99-86-5C10H162.26Gu et al. [36]A35Dihydroactinidiolide15356-74-8C11H16O20.21Dai et al. [35]A36Cyclohexadiene4221-98-1C10H160.77Dai et al. [35]A37n-Hexadecanoic acid57-10-3C16H32O20.22Dai et al. [35]A38Terpinyl acetate58206-95-4C12H20O20.27Dai et al. [35]A39Diisobutyl phthalate84-69-5C16H22O40.14Dai et al. [35]A40Myrtenol19894-97-4C10H16O0.77Dai et al. [35]A41Carvacrol499-75-2C10H14O0.55Dai et al. [35]A42Curcumene4176-17-4C15H221.06Dai et al. [35]A43trans-Carveol2102-58-1C10H16O1.17Dai et al. [35]A44(+)-Limonene5989-27-5C10H160.39Dai et al. [35]A45L-Carvone6485-40-1C10H14O0.11Dai et al. [35]A46cis-β-Terpineol7299-40-3C10H18O6.61Dai et al. [35]A47cis-Piperitol16721-38-3C10H18O3.66Dai et al. [35]A48Nerolidol7212-44-4C15H26O0.59Dai et al. [35]A49cis-Jasmon488-10-8C11H16O0.42Dai et al. [35]A50α-Caryophyllene6753-98-6C15H240.37Dai et al. [35]A51Terpinen-4-ol2438-10-0C10H16O11.09Dai et al. [35]A52(5R)-5-Isopropenyl-2-methyl-2-cyclohexen-1-ol99-48-9C10H16O0.12Dai et al. [35]A53Oct-1-en-3-ol3391-86-4C8H16O2.57Dai et al. [35]A54α-Phellandrene99-86-5C10H161.66Dai et al. [35]
### 3.2. Targets’ Intersection and PPI Network Construction
103 AAEO compound-related targets were retrieved from TCMSP and converted into official gene symbols according to the UniProt database. Moreover, 2760 PIs targets were searched by GeneCard, OMIM, and DrugBank databases. Finally, 50 targets were obtained by intersecting two parts of targets (Figure1); the PPIs of 50 overlapping targets are shown in Figure 2.Figure 1
Venn diagram of targets’ intersection of AAEO and PIs.Figure 2
PPI network diagram. Protein-protein interactions (P>0.7) of 50 overlapping targets.
### 3.3. Active Compounds and Overlapping Targets Network Construction
Compounds-overlapping targets network involved 104 nodes and 441 edges. The results reflect the complex mechanism of multicomponent and multitarget treatment of diseases. Moreover, a core network was calculated by MCODE with 15 targets (Figure3).Figure 3
Compounds-overlapping targets network: the right square matrix green circle nodes represent 52 potential compounds (2 compounds (A33 and A53) have no associated targets) and the left circular nodes with gradual color represent 50 overlapping targets of AAEO and PIs. Larger size and deeper color of a node mean a greater degree.
### 3.4. GO Analysis of Targets’ Intersection
GO analysis was mainly focused on the biological process, with a total of 3269 enrichment results, involving adrenergic receptor activity, nuclear receptor activity, and aspartic-type endopeptidase activity. The top 10 GO functional annotations of BP, CC, and MF are shown in Figure4.Figure 4
GO analysis.The top 10 GO functional annotations of BP, CC, and MF are represented by green for biological process, orange for cellular component, light purple for molecular function, respectively.
### 3.5. KEGG Pathways of Targets’ Intersection
KEGG enrichment results were involved in 128 pathways, including pathway in cancer, PI3K-Akt signaling pathway, human immunodeficiency virus 1 infection, MAPK signaling pathway, and Wnt signaling pathway. The top 20 pathways were selected by cluster analysis andP-value (Figure 5).Figure 5
Top 20 enriched KEGG pathways.Each bubble represents an enriched function, and the size of the bubble is from small to large. The bubble is colored according to its −log (P value); when the color is redder, P value is smaller.
### 3.6. Compound-Target Docking
The 10 key targets, TNF, PTGS2, IL6, IL1β, NR3C1, CASP3, TP53, PGR, REN, and NOS2, were docked with top 7 compounds: β-caryophyllene (A27),1,8-cineole (A1), terpinen-4-ol (A51), neointermedeol (A4), α-thujone (A11), borneol (A6), and camphor (A3). Generally, the Vina score is negative; the lower the score, the better the binding activity between ligand and protein. There will be top five Vina scores and docking cavity sizes from obtained results, which were first selected as representation [50]. The results indicated that the top 7 active compounds of AAEO had a good affinity to key targets and the RMSD of each docking target and compound was less than 2 angstroms (Tables 2 and 3). The top 3 compounds (A4-neointermedeol, A27-β-caryophyllene, and A3-camphor) and proteins (PTGS2, PGR, and TP53) with better binding affinities are shown in Figures 6–8.Table 2
Vina score of compound-target docking (unit: kcal/mol).
IDA27A1A51A4A11A6A3TNF−6−5.3−6.6−7.8−5.4−5.6−6.8PTGS2−6.6−7.1−6.8−7.2−6.4−6.2−7.3IL6−6.3−5.5−6.7−6.3−5.5−5.2−6.9IL1B−6.8−5.6−5.5−7−5.3−5.6−5.9NR3C1−5.5−5.7−6.3−6.1−6.2−5.1−5.1CASP3−7−5.9−5.9−6.7−5.9−5.4−5.7TP53−7.4−5.8−6−7.1−6.1−5.8−7.5PGR−7.8−6.7−6.4−7.6−6.6−6.2−6.5REN−7.9−6.2−5.9−7.8−6.2−6.2−6.4NOS2−6.9−5.4−7−7.5−6.4−5.3−5.5Table 3
Docking parameters.
TargetPDB IDLigandCavity sizeCenterSizeRMSDXyzxyzTNF1D0GA27330292081818180.000A1791938471616160.098A51330292081717170.096A4330292081818180.000A1145872355163516280.585A6791938471616160.640A316204534151828180.000PTGS21EQGA273720948341893535350.548A1180973231952426280,104A51358922282032629240,103A4358922282032629240.000A11180973231952426280.591A6180973231952426280.645A3358922282032629240.000IL64O9HA27533−2017271818180.000A1533−2017271616160.103A51533−201727171710.103A4533−2017271818180.000A11533−2017271616160.590A6533−2017271616160.664A3533−2017271818180.000IL1β3POKA27987−15−17−81818180.000A12542145261818180.105A51987−15−17−82317170.103A4987−15−17−81818180.000A11199−255−71616160.592A6199−255−71616160.646A3987−15−17−81818180.000NR3C11LATA2721723138812832350.000A121723138812832350.000A511532738921717170.000A421723138812832350.000A1121723138812832350.000A621723138812832350.000A321723138812832350.000CASP35JFTA27183434−251831180.000A1183434−251731170.104A51183434−251731170.103A4183434−251831180.000A11183434−251731171.153A6183434−251631160.646A3183434−251631160.592TP536WQXA275969363−122635310.000A116442318123016160.272A518740−10491717170.427A45969363−122635310.000A1116442318123016161.153A616442318123016160.586A38740−10491818181.127PGR1A28A276174334301818180.000A16174334301616160.000A51631235731717170.000A44212310601818180.000A116174334301717170.000A66174334301616160.000A36174334301616160.000REN3OWNA27118429−14−303535330.000A1191020−1−182616160.107A51183434−251731170.106A4191020−1−182618180.000A111682−10−28−371723171.156A61682−10−28−371623220.648A3118429−14−303535330.594NOS21M7ZA273650533113027300.000A13650533113027300.103A513650533113027300.102A43650533113027300.000A113650533113027301.152A63650533113027300.645A33650533113027300.590Figure 6
(a–c) Docking results of compound A4-neointermedeol and PTGS2 (1EQG), PGR (1A28), and TP53 (6WQX), respectively.
(a)(b)(c)Figure 7
(a–c) Docking results of compound A27β-caryophyllene and PTGS2 (1EQG), PGR (1A28), and TP53 (6WQX), respectively.
(a)(b)(c)Figure 8
(a–c) Docking results of compound A3 terpinen-4-ol and PTGS2 (1EQG), PGR (1A28), and TP53 (6WQX), respectively.
(a)(b)(c)
## 3.1. Compounds of AAEO and Targets Related to Active Compounds
A total of 54 active compounds that met the criteria were finally collected. The basic information of 54 obtained compounds is shown in Table1.Table 1
The basic information of potential compounds of AAEO.
No.Molecule nameCASMolecular formulaRelative content (%)ReferencesA11,8-Cineole470-82-6C10H18O20.91Guan et al. [14]A2Caryophyllene87-44-5C15H247.50Guan et al. [14]A3(-)-Camphor76-22-2C10H16O5.57Guan et al. [14]A4Neointermedeol5945-72-2C15H26O9.65Guan et al. [14]A5Caryophyllene oxide1139-30-6C15H24O8.71Guan et al. [14]A6(-)-Borneol464-45-9C10H18O16.35Guan et al. [14]A7D-Carvone5948/4/9C10H16O0.25Guan et al. [14]A8Bornyl acetate76-49-3C12H20O20.24Guan et al. [14]A94-Terpineol562-74-3C10H18O5.47Guan et al. [14]A10Sabinene10408-16-9C10H163.36Guan et al. [14]A11α-Thujone546-80-5C10H16O14.55Guan et al. [14]A12α-Humulene6753-98-6C15H242.24Guan et al. [14]A13Eugenol97-53-0C10H12O20.56Gu et al. [36]A14cis-Carveol1197-06-4C10H16O1.40Guan et al. [14]A15Germacrene D23986-74-5C15H240.55Guan et al. [14]A16Terpinolene586-62-9C10H160.15Guan et al. [14]A17Cymene527-84-4C10H140.32Guan et al. [14]A18α-Terpineol10482-56-1C10H18O3.62Guan et al. [14]A19cis-Carveol1197-06-4C10H16O1.40Guan et al. [14]A20Espatulenol6750-60-3C15H24O1.51Guan et al. [14]A21γ-Elemene515-13-9C15H240.12Gu et al. [36]A22α-Pinene2437-95-8C10H163.84Dai et al. [35]A23Piperitone89-81-6C10H16O0.42Guan et al. [14]A24(-)-Camphene5794/3/6C10H161.83Dai et al. [35]A25Isoborneol124-76-5C10H18O0.63Dai et al. [35]A26cis-β-Farnesene18794-84-8C15H240.11Dai et al. [35]A27β-Caryophyllene87-44-5C15H2413.64Guan et al. [14]A28γ-Terpinene99-85-4C10H160.24Guan et al. [14]A29Spathulenol4221-98-1C15H24O0.82Dai et al. [35]A30Diisooctyl phthalate27554-26-3C24H38O40.14Dai et al. [35]A31β-Pinene127-91-3C10H163.05Dai et al. [35]A32Hexahydrofarnesyl acetone502-69-2C18H36O0.77Dai et al. [35]A33Tricyclene508-32-7C10H160.12Gu et al. [36]A34Terpinene99-86-5C10H162.26Gu et al. [36]A35Dihydroactinidiolide15356-74-8C11H16O20.21Dai et al. [35]A36Cyclohexadiene4221-98-1C10H160.77Dai et al. [35]A37n-Hexadecanoic acid57-10-3C16H32O20.22Dai et al. [35]A38Terpinyl acetate58206-95-4C12H20O20.27Dai et al. [35]A39Diisobutyl phthalate84-69-5C16H22O40.14Dai et al. [35]A40Myrtenol19894-97-4C10H16O0.77Dai et al. [35]A41Carvacrol499-75-2C10H14O0.55Dai et al. [35]A42Curcumene4176-17-4C15H221.06Dai et al. [35]A43trans-Carveol2102-58-1C10H16O1.17Dai et al. [35]A44(+)-Limonene5989-27-5C10H160.39Dai et al. [35]A45L-Carvone6485-40-1C10H14O0.11Dai et al. [35]A46cis-β-Terpineol7299-40-3C10H18O6.61Dai et al. [35]A47cis-Piperitol16721-38-3C10H18O3.66Dai et al. [35]A48Nerolidol7212-44-4C15H26O0.59Dai et al. [35]A49cis-Jasmon488-10-8C11H16O0.42Dai et al. [35]A50α-Caryophyllene6753-98-6C15H240.37Dai et al. [35]A51Terpinen-4-ol2438-10-0C10H16O11.09Dai et al. [35]A52(5R)-5-Isopropenyl-2-methyl-2-cyclohexen-1-ol99-48-9C10H16O0.12Dai et al. [35]A53Oct-1-en-3-ol3391-86-4C8H16O2.57Dai et al. [35]A54α-Phellandrene99-86-5C10H161.66Dai et al. [35]
## 3.2. Targets’ Intersection and PPI Network Construction
103 AAEO compound-related targets were retrieved from TCMSP and converted into official gene symbols according to the UniProt database. Moreover, 2760 PIs targets were searched by GeneCard, OMIM, and DrugBank databases. Finally, 50 targets were obtained by intersecting two parts of targets (Figure1); the PPIs of 50 overlapping targets are shown in Figure 2.Figure 1
Venn diagram of targets’ intersection of AAEO and PIs.Figure 2
PPI network diagram. Protein-protein interactions (P>0.7) of 50 overlapping targets.
## 3.3. Active Compounds and Overlapping Targets Network Construction
Compounds-overlapping targets network involved 104 nodes and 441 edges. The results reflect the complex mechanism of multicomponent and multitarget treatment of diseases. Moreover, a core network was calculated by MCODE with 15 targets (Figure3).Figure 3
Compounds-overlapping targets network: the right square matrix green circle nodes represent 52 potential compounds (2 compounds (A33 and A53) have no associated targets) and the left circular nodes with gradual color represent 50 overlapping targets of AAEO and PIs. Larger size and deeper color of a node mean a greater degree.
## 3.4. GO Analysis of Targets’ Intersection
GO analysis was mainly focused on the biological process, with a total of 3269 enrichment results, involving adrenergic receptor activity, nuclear receptor activity, and aspartic-type endopeptidase activity. The top 10 GO functional annotations of BP, CC, and MF are shown in Figure4.Figure 4
GO analysis.The top 10 GO functional annotations of BP, CC, and MF are represented by green for biological process, orange for cellular component, light purple for molecular function, respectively.
## 3.5. KEGG Pathways of Targets’ Intersection
KEGG enrichment results were involved in 128 pathways, including pathway in cancer, PI3K-Akt signaling pathway, human immunodeficiency virus 1 infection, MAPK signaling pathway, and Wnt signaling pathway. The top 20 pathways were selected by cluster analysis andP-value (Figure 5).Figure 5
Top 20 enriched KEGG pathways.Each bubble represents an enriched function, and the size of the bubble is from small to large. The bubble is colored according to its −log (P value); when the color is redder, P value is smaller.
## 3.6. Compound-Target Docking
The 10 key targets, TNF, PTGS2, IL6, IL1β, NR3C1, CASP3, TP53, PGR, REN, and NOS2, were docked with top 7 compounds: β-caryophyllene (A27),1,8-cineole (A1), terpinen-4-ol (A51), neointermedeol (A4), α-thujone (A11), borneol (A6), and camphor (A3). Generally, the Vina score is negative; the lower the score, the better the binding activity between ligand and protein. There will be top five Vina scores and docking cavity sizes from obtained results, which were first selected as representation [50]. The results indicated that the top 7 active compounds of AAEO had a good affinity to key targets and the RMSD of each docking target and compound was less than 2 angstroms (Tables 2 and 3). The top 3 compounds (A4-neointermedeol, A27-β-caryophyllene, and A3-camphor) and proteins (PTGS2, PGR, and TP53) with better binding affinities are shown in Figures 6–8.Table 2
Vina score of compound-target docking (unit: kcal/mol).
IDA27A1A51A4A11A6A3TNF−6−5.3−6.6−7.8−5.4−5.6−6.8PTGS2−6.6−7.1−6.8−7.2−6.4−6.2−7.3IL6−6.3−5.5−6.7−6.3−5.5−5.2−6.9IL1B−6.8−5.6−5.5−7−5.3−5.6−5.9NR3C1−5.5−5.7−6.3−6.1−6.2−5.1−5.1CASP3−7−5.9−5.9−6.7−5.9−5.4−5.7TP53−7.4−5.8−6−7.1−6.1−5.8−7.5PGR−7.8−6.7−6.4−7.6−6.6−6.2−6.5REN−7.9−6.2−5.9−7.8−6.2−6.2−6.4NOS2−6.9−5.4−7−7.5−6.4−5.3−5.5Table 3
Docking parameters.
TargetPDB IDLigandCavity sizeCenterSizeRMSDXyzxyzTNF1D0GA27330292081818180.000A1791938471616160.098A51330292081717170.096A4330292081818180.000A1145872355163516280.585A6791938471616160.640A316204534151828180.000PTGS21EQGA273720948341893535350.548A1180973231952426280,104A51358922282032629240,103A4358922282032629240.000A11180973231952426280.591A6180973231952426280.645A3358922282032629240.000IL64O9HA27533−2017271818180.000A1533−2017271616160.103A51533−201727171710.103A4533−2017271818180.000A11533−2017271616160.590A6533−2017271616160.664A3533−2017271818180.000IL1β3POKA27987−15−17−81818180.000A12542145261818180.105A51987−15−17−82317170.103A4987−15−17−81818180.000A11199−255−71616160.592A6199−255−71616160.646A3987−15−17−81818180.000NR3C11LATA2721723138812832350.000A121723138812832350.000A511532738921717170.000A421723138812832350.000A1121723138812832350.000A621723138812832350.000A321723138812832350.000CASP35JFTA27183434−251831180.000A1183434−251731170.104A51183434−251731170.103A4183434−251831180.000A11183434−251731171.153A6183434−251631160.646A3183434−251631160.592TP536WQXA275969363−122635310.000A116442318123016160.272A518740−10491717170.427A45969363−122635310.000A1116442318123016161.153A616442318123016160.586A38740−10491818181.127PGR1A28A276174334301818180.000A16174334301616160.000A51631235731717170.000A44212310601818180.000A116174334301717170.000A66174334301616160.000A36174334301616160.000REN3OWNA27118429−14−303535330.000A1191020−1−182616160.107A51183434−251731170.106A4191020−1−182618180.000A111682−10−28−371723171.156A61682−10−28−371623220.648A3118429−14−303535330.594NOS21M7ZA273650533113027300.000A13650533113027300.103A513650533113027300.102A43650533113027300.000A113650533113027301.152A63650533113027300.645A33650533113027300.590Figure 6
(a–c) Docking results of compound A4-neointermedeol and PTGS2 (1EQG), PGR (1A28), and TP53 (6WQX), respectively.
(a)(b)(c)Figure 7
(a–c) Docking results of compound A27β-caryophyllene and PTGS2 (1EQG), PGR (1A28), and TP53 (6WQX), respectively.
(a)(b)(c)Figure 8
(a–c) Docking results of compound A3 terpinen-4-ol and PTGS2 (1EQG), PGR (1A28), and TP53 (6WQX), respectively.
(a)(b)(c)
## 4. Discussion
Artemisia argyi, a dried leaf of Ai Ye with multiple biological activities, is widely used to treat inflammatory diseases such as eczema, dermatitis, arthritis, allergic asthma, and colitis [52]. The pharmacological mechanisms of AAEO associated with PIS are uncertain. Our study was first used network pharmacology to discover the potential targets and regulatory molecular mechanism of AAEO on PIs treatment. As a result, we identified 54 compounds as the main active components, obtained 50 key targets, including pathway in cancer, PI3K-Akt signaling pathway, human immunodeficiency virus 1 infection, MAPK signaling pathway, and Wnt signaling pathway, demonstrated the multitarget and multipathway specialty of TCM in treating diseases.Over 200 species of AAEO can be detected by gas chromatography-mass spectrometry (GC-MS) [34], mainly including terpenoids, ketones (aldehydes), alcohols (phenols), acids (esters), alkanes (alkenes), and other chemical constituents. In our study, β-caryophyllene,1,8-cineole, terpinen-4-ol, neointermedeol, α-thujone, borneol, and camphor had a relative content of the first 7 [14, 34–36]. 1,8-Cineole, camphor, and borneol accounted for the largest proportion of AAEO [53]. In lipopolysaccharide0 (LPS-) induced cell and mouse inflammation experiments, 1,8-cineole alleviates LPS-induced vascular endothelial cell injury, obviously inhibits the production of the inflammatory mediator, increases the release of anti-inflammatory factor IL10, and improves inflammatory symptoms [54]. Borneol significantly decreased the auricular swelling rate and pain threshold of rats by activating the p38-COX-2-PGE2 signaling pathway, which has significant analgesic and anti-inflammatory effects on PDT of acne [55]. Numerous investigations have shown various essential oils of several species containing camphor as the major component, exhibiting antimicrobial activity [56–59]. Also, the application of camphor to the skin was proved to increase local blood flow in the skin and muscle, induce both cold and warm sensations, and improve blood circulation [60]. More noteworthy is that the top three compounds of molecular docking score were neointermedeol, β-caryophyllene, and camphor. Neointermedeol has been shown to have antioxidant, antibacterial, and other biological activities [61, 62]. Recent studies have shown that caryophyllene can provide protection for animal cells and reduce proinflammatory mediators such as TNF-α, IL-1β, IL-6, and NF-κB, thereby improving the symptoms of inflammation and oxidative stress [63, 64].The core network calculated by MCODE had 15 targets, mostly related to inflammation, oxidative stress, and apoptosis. TNF, IL6, IL-1β, and PTGS2 participate in regulating inflammatory cascade reaction [65–68] and can be inhibited by inflammation in different levels by AAEO. TP53, BAX, and CASP3 regulate the apoptotic process and cell protection negatively [69, 70]. KEGG Pathways enrichment analysis is mainly involved in PI3K-Akt signaling pathway, human immunodeficiency virus 1 infection, and human T-cell leukemia virus 1 infection, MAPK signaling pathway, and Wnt signaling pathway. The study found that the PI3K-Akt pathway plays a great role in antiapoptosis and angiogenesis. 
In this pathway, phosphorylated Akt first activates the downstream factors Bad and caspase-9, exerting an antiapoptotic effect and promoting angiogenesis [71]; phosphorylated Akt then regulates eNOS, which promotes the generation of NO [72], supplies oxygen and nutrients for tissue recovery, and mediates skin injury repair. Ischemia-reperfusion is recognized as a mechanism of PIs; the process includes oxidative stress, excessive release of oxygen free radicals, apoptosis, and activation of inflammatory cytokines [73]. The predictions of our network pharmacology analysis are largely consistent with the process of ischemia-reperfusion injury. The MAPK signaling pathway is involved in the repair of PIs: it increases the expression of the Ras, c-Raf, MEK1, p-MEK1, and p-ERK1 proteins and of MEK1 mRNA, promotes the proliferation of vascular endothelial cells, and accelerates microvascular regeneration and remodeling [74]. Further studies have shown that the repair of pressure ulcers is highly correlated with the Wnt/β-catenin signaling pathway, which regulates the proliferation and differentiation of epithelial cells, hair follicles, and sebaceous glands [75, 76].

The above arguments support the accuracy of this network pharmacology prediction. Besides, the docking results showed that all selected core protein-ligand pairs had good binding affinity (docking scores of at least 5 kcal/mol in magnitude), and 15 docking scores were at least 7 kcal/mol, indicating strong binding of the compounds to the docking proteins [77]. The RMSD of each target protein was less than 2 Å, indicating that the docking method and parameter settings were reasonable and could be used for subsequent docking with the components [78].

In addition, a previous study showed that AAEO dose-dependently inhibits inflammatory mediators such as NO, PGE2, TNF-α, IL-6, IL-10, IFN-β, and MCP-1 [79]. In an experiment on AAEO in anti-inflammatory and blood stasis animal models, the effect of the lowest dose administered through the skin (0.25 mL/kg) was equivalent to that of oral administration of the middle dose (0.50 mL/kg) [80]. PTGS2, which had the highest docking scores, is a biomarker of ferroptosis; it can inhibit the expression of inflammatory factors and apoptosis [81–83]. Future research could explore the route of administration, the dosage, and the ferroptosis-related mechanism.

A limitation of this study is that we have not conducted clinical or animal experiments for verification; further studies should validate the predicted key targets and pathways and explore the mechanism of the effective components of the essential oil from Artemisia argyi in preventing and treating PIs by combining molecular biology and pathophysiology.
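The screening logic described above (docking scores of at least 5 kcal/mol in magnitude, with 7 kcal/mol marking strong binders, and redocking RMSD below 2 Å) can be expressed as a simple filter. The sketch below is illustrative only: the column names and sample values are hypothetical, not taken from the study's data.

```python
import pandas as pd

# Hypothetical docking results: |binding energy| in kcal/mol and
# redocking RMSD in angstroms (values invented for illustration).
results = pd.DataFrame({
    "ligand": ["neointermedeol", "beta-caryophyllene", "camphor"],
    "target": ["PTGS2 (1EQG)", "PGR (1A28)", "TP53 (6WQX)"],
    "score":  [8.1, 7.4, 6.2],   # |binding energy|, kcal/mol
    "rmsd":   [1.3, 1.8, 1.1],   # redocking RMSD, angstroms
})

VALID_RMSD = 2.0       # docking protocol considered reliable below 2 A
GOOD_AFFINITY = 5.0    # minimal evidence of binding
STRONG_AFFINITY = 7.0  # strong binding

reliable = results[results["rmsd"] < VALID_RMSD]
binders = reliable[reliable["score"] >= GOOD_AFFINITY]
strong = binders[binders["score"] >= STRONG_AFFINITY]
print(strong[["ligand", "target", "score"]])
```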
## 5. Conclusion
In conclusion, the potential targets and regulatory molecular mechanisms of AAEO in the treatment of PIs were analyzed by network pharmacology and molecular docking. In total, 54 active components and 50 potential targets were screened, mainly involving the pathway in cancer, the PI3K-Akt signaling pathway, human immunodeficiency virus 1 infection, the MAPK signaling pathway, and the Wnt signaling pathway. These findings reveal that AAEO may act in the treatment of PIs by reducing inflammation and inhibiting apoptosis and oxidative stress, showing the characteristics of multitarget and multipathway action. Our study provides a basis for the mechanism and further research directions of AAEO in treating PIs by combining literature research, network analysis, and molecular docking.
---
*Source: 1019289-2022-01-19.xml*
# Traditional Chinese Medicine Compound Preparations Are Associated with Low Disease-Related Complication Rates in Patients with Rheumatoid Arthritis: A Retrospective Cohort Study of 11,074 Patients
**Authors:** Yanyan Fang; Jian Liu; Ling Xin; Xiaolu Chen; Xiang Ding; Qi Han; Mingyu He; Xu Li; Yanqiu Sun; Fanfan Wang; Jie Wang; Xin Wang; Jianting Wen; Xianheng Zhang; Qin Zhou; Junru Zhang
**Journal:** BioMed Research International
(2023)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2023/1019290
---
## Abstract
Objective. To evaluate whether traditional Chinese medicine compound preparations (TCMCPs) are associated with rheumatoid arthritis- (RA-) related complications (including readmission, Sjogren’s syndrome, surgical treatment, and all-cause death) in patients with RA. Methods. Clinical outcome data were retrospectively collected from patients with RA discharged from the Department of Rheumatology and Immunology of the First Affiliated Hospital of Anhui University of Chinese Medicine from January 2009 to June 2021. The propensity score matching method was used to match baseline data. Multivariate analysis was conducted to analyze sex, age, and the incidence of hypertension, diabetes, and hyperlipidemia and to identify the risk of readmission, Sjogren’s syndrome, surgical treatment, and all-cause death. Users and nonusers of TCMCPs were defined as the TCMCP and non-TCMCP groups, respectively. Results. A total of 11,074 patients with RA were included in the study. The median follow-up time was 54.85 months. After propensity score matching, the baseline data of TCMCP users corresponded with those of non-TCMCP users, with 3517 cases in each group. Retrospective analysis revealed that TCMCPs significantly reduced clinical, immune, and inflammatory indices in patients with RA, and these indices were highly correlated. Notably, the composite endpoint prognosis for treatment failure in TCMCP users was better than that in non-TCMCP users (HR=0.75 (0.71-0.80)). The risk of RA-related complications in TCMCP users with high-exposure intensity (HR=0.699 (0.650-0.751)) and medium-exposure intensity (HR=0.796 (0.691-0.918)) was significantly lower than that in non-TCMCP users. An increase in exposure intensity was associated with a concomitant decrease in the risk of RA-related complications. Conclusion. The use of TCMCPs, as well as long-term exposure to TCMCPs, may lower the incidence of RA-related complications, including readmission, Sjogren’s syndrome, surgical treatment, and all-cause death, in patients with RA.
---
## Body
## 1. Introduction
Rheumatoid arthritis (RA) is a chronic inflammatory disease that mainly causes gradual joint damage and affects other body systems [1, 2]. The worldwide incidence rate of RA is approximately 1%, and although this condition affects people of all ages, it is more prevalent in women than in men [3, 4]. Currently, the etiology of RA is unclear, which poses a challenge to the effective treatment of RA and increases rehospitalization rates [5]. Although synovitis is a primary pathological marker of RA, many extra-articular manifestations may occur because of RA’s complex, chronic, inflammatory, and autoimmune characteristics [6–8]. Extra-articular manifestations and complications are common in RA, contributing to higher incidence rates and premature mortality [6]. A hallmark clinical feature of RA is the symmetrical polyarthritis that manifests as redness and pain in the joints, especially smaller joints, and long-term morning stiffness [9, 10], with the potential to progress to serious joint injury and disability [11]. Progressive and severe joint injury, chronic pain, loss of function, and insufficient response to treatment regimens are indications for final joint replacement surgery [12]. Cohort studies based on national data from several countries have shown that RA is associated with high mortality [13, 14]. Therefore, readmission, extra-articular manifestations, surgical treatment, and all-cause death are considered potential RA-related complications.

Modern pharmacological treatments for RA mainly include nonsteroidal anti-inflammatory drugs, glucocorticoids, conventional disease-modifying antirheumatic drugs (cDMARDs), and biologic DMARDs that are used to alleviate chronic pain in patients by reducing the local inflammatory response [15]. However, RA treatment is complex and requires the combined application of multiple drugs, some of which have significant side effects and high treatment costs, resulting in poor patient compliance. Traditional Chinese medicine (TCM) might have many therapeutic advantages for RA [16–18]. Xin’an Jianpi Tongbi prescription, including Xinfeng capsule (XFC), Huangqin Qingre Chubi capsule (HQC), and Wuwei Wentong Chubi capsule (WWT), is a routinely used TCM compound preparation (TCMCP), which contains Astragalus membranaceus, Semen coicis, Tripterygium wilfordii, Scolopendra spp., Scutellaria baicalensis, Gardenia jasminoides, Prunus persica, Clematis chinensis, Poria cocos, Epimedium brevicornu, Cinnamomum cassia, Curcumae Longae, and other drugs. Many studies have shown that this TCMCP has high efficacy against RA [18–20]. A randomized, double-blind, multicenter, and placebo-controlled trial showed high efficacy and safety of XFC in the treatment of patients with RA [21, 22]. Animal experiments have demonstrated that HQC improves the baseline severity of arthritis in a collagen-induced arthritis mouse model [23, 24]. WWT has also been reported to have a good pharmacological effect on RA [25]. However, although the TCMCPs have favorable therapeutic effects on RA, their specific effect on the incidence of RA-related complications is still unclear.

In this study, we retrospectively analyzed the effect of Xin’an Jianpi Tongbi prescription on immune inflammation in RA and the risk of four RA-related complications, including readmission, Sjogren’s syndrome, surgical treatment, and all-cause death.
## 2. Methods
### 2.1. Study Cohort (Figure 1)
Figure 1
Flow chart of inclusion in the cohort. TCMCP: users of traditional Chinese medicine compound preparation; non-TCMCP: nonusers of traditional Chinese medicine compound preparation.

Clinical data of discharged patients with RA from the Department of Rheumatology and Immunology of the First Affiliated Hospital of Anhui University of Chinese Medicine were retrospectively collected from January 2009 to June 2021. The diagnostic criteria for RA by the American College of Rheumatology were adopted in this study [26]. Telephonic follow-up time was calculated from the time of discharge until February 28, 2022. Based on the history of TCMCP usage, the risk of RA-related complications, including readmissions, Sjogren’s syndrome, surgical treatments, and all-cause death, was evaluated. This study was approved by the Medical Ethics Committee of the First Affiliated Hospital of Anhui University of Chinese Medicine (approval number: 2022MCZQ01).
### 2.2. Data Collection
Demographic information, including age and sex; clinical data including baseline complications, baseline cDMARD, and corticosteroid treatment; and data on TCMCPs were collected and evaluated retrospectively.
### 2.3. Treatment
In the First Affiliated Hospital of Anhui University of Chinese Medicine, the basic drugs for treating RA consist of cDMARDs (including methotrexate, leflunomide, sulfasalazine, and hydroxychloroquine sulfate), nonsteroidal anti-inflammatory drugs (including celecoxib, meloxicam, and lornoxicam), and glucocorticoids (methylprednisolone). It should be noted that TCM is a commonly used treatment modality in TCM hospitals; we gradually withdrew biologics while increasing the use of TCM.
### 2.4. Inflammatory and Immune Indices
Inflammatory and immune indices, including erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), anti-cyclic citrullinated peptide (anti-CCP), rheumatoid factor (RF), immunoglobulin A (IgA), immunoglobulin G (IgG), immunoglobulin M (IgM), complement component 3 (C3), and complement component 4 (C4) levels, were evaluated after TCMCP treatment.
### 2.5. Research Definition
#### 2.5.1. Xin’an Jianpi Tongbi Prescription
Xin’an Jianpi Tongbi prescription is a compound preparation of TCM based on the Xin’an medical theory. It contains the Xinfeng capsule [22] (Z20050062 from Wanyao Pharmaceutical Co., Ltd.; patent number: ZL 2013 1 0011369.8), the Huangqin Qingre Chubi capsule [24] (Z20200001 from Wanyao Pharmaceutical Co., Ltd.; patent number: ZL 2011 1 0095718.X), and the Wuwei Wentong Chubi capsule [25] (patent number: ZL 2020 10714863.0).

The Xinfeng capsule is composed of Astragalus membranaceus, Semen coicis, Tripterygium wilfordii, and Scolopendra spp. These four medicinal materials were extracted by refluxing twice with 75% ethanol: ten times the amount of ethanol was added and extracted for 2 h, after which eight times the amount of ethanol was added and extracted for 1.5 h. The drug residues were then boiled with eight times the amount of water and extracted for 1.5 h. The mixture was filtered and allowed to stand; the supernatant was collected, combined with the alcohol extract, and concentrated under pressure, and the paste was collected. The sample was dried, crushed, mixed with dextrin, and granulated with ethanol, followed by drying, whole granulating, sterilizing, filling, and packaging.

The Huangqin Qingre Chubi capsule is composed of Scutellaria baicalensis, Prunus persica, Gardenia jasminoides, Semen coicis, and Clematis chinensis. These five medicinal materials were decocted and extracted three times: ten times the amount of water was added the first time and extracted for 1.5 h; eight times the amount of water was added the second and third times and extracted for 1 h each. The mixture was strained and allowed to stand; the supernatant was drawn off and concentrated under pressure, and the paste was collected. This paste was vacuum-dried, the dry extract was crushed, and dextrin was added. Ethanol was used to soften the materials, which were screened, granulated, dried, whole-grained, and filled into capsules.

The Wuwei Wentong Chubi capsule is composed of Poria cocos, Epimedium brevicornu, Cinnamomum cassia, Curcumae Longae, and Scutellaria baicalensis. These five medicinal materials were decocted and extracted three times in the same manner: ten times the amount of water was added the first time and extracted for 1.5 h; eight times the amount of water was added the second and third times and extracted for 1 h each. The mixture was strained and allowed to settle; the supernatant was drawn off and concentrated under reduced pressure, and the paste was collected. This paste was vacuum-dried, the dry extract was crushed, and dextrin was added. Ethanol was used to soften the material, which was sieved through a no. 12 mesh, granulated, dried, whole-grained, filled into capsules, and packaged.

All capsules were produced by the preparation center of the First Affiliated Hospital of Anhui University of Chinese Medicine, and the fill variation of each capsule was within ±10%.
#### 2.5.2. RA-Related Complications
Readmission refers to RA patients who have been hospitalized twice or more. Sjogren’s syndrome refers to the RA-related complication observed at a frequency of 10.41% (1022/9813) in this cohort. Surgical treatment refers to RA patients with severe joint deformities requiring surgery. All-cause death refers to death from any cause during long-term follow-up of RA.
#### 2.5.3. Classification of Quantitative Variables
The usage of TCMCPs was defined as “1,” and nonusage was defined as “0.” After TCMCP treatment, a decrease in ESR, CRP, IgA, IgG, IgM, C3, C4, RF, and anti-CCP levels was recorded as “1,” whereas an increase or no change in the level was recorded as “0.” The decrease in inflammatory and immune index values indicated the effectiveness of TCMCP treatment.
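As a minimal sketch of this coding scheme (pandas assumed; the column names and values below are hypothetical, not the study's variables):

```python
import pandas as pd

# Hypothetical before/after laboratory values for three patients.
df = pd.DataFrame({
    "esr_before": [48.0, 30.0, 55.0],
    "esr_after":  [28.0, 32.0, 40.0],
})

# "1" = the index decreased after TCMCP treatment (improvement);
# "0" = it increased or did not change.
df["esr_improved"] = (df["esr_after"] < df["esr_before"]).astype(int)
df["tcmcp"] = 1  # usage of TCMCP coded as 1, nonusage as 0
print(df)
```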
#### 2.5.4. Exposure Intensity
According to exposure intensity, patients who received TCMCPs for less than 1 month, 1–3 months, 3–6 months, and ≥6 months after discharge were defined as the nonexposure, low-exposure, medium-exposure, and high-exposure groups, respectively.
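This binning can be sketched with pandas (the variable names and toy values below are assumptions for illustration):

```python
import pandas as pd

# Hypothetical months of TCMCP use after discharge.
months = pd.Series([0.5, 2.0, 4.5, 9.0, 12.0])

# Bins follow the paper's definition: <1, 1-3, 3-6, and >=6 months.
exposure = pd.cut(
    months,
    bins=[0, 1, 3, 6, float("inf")],
    labels=["none", "low", "medium", "high"],
    right=False,  # left-closed intervals: [0,1), [1,3), [3,6), [6,inf)
)
print(exposure.tolist())  # ['none', 'low', 'medium', 'high', 'high']
```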
### 2.6. Statistical Analysis
Continuous variables are reported as medians with interquartile ranges (IQRs), whereas categorical variables are reported as frequencies and percentages. Categorical variables were compared using Fisher’s exact test, and continuous variables were compared using the Wilcoxon signed-rank test. Univariate and multivariate Cox proportional hazards models were developed to evaluate risk factors for the occurrence of endpoint events; results are presented as hazard ratios (HRs) with 95% confidence intervals (CIs). Univariate models contained a single predictor for calculating different baseline risks for each site. Multivariate models included age, sex, comorbidities at baseline, and TCMCP as covariates. All analyses were performed using SPSS V.22 (IBM, Armonk, NY, USA). Differences were considered statistically significant when the p value was less than 0.05.
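A rough sketch of this analysis pipeline in Python (not the authors' SPSS workflow; scipy/lifelines, the toy data, and the variable names are assumptions for illustration):

```python
import pandas as pd
from scipy.stats import fisher_exact, wilcoxon
from lifelines import CoxPHFitter

# Fisher's exact test on a hypothetical 2x2 table
# (rows: TCMCP yes/no; columns: complication yes/no).
odds, p_fisher = fisher_exact([[30, 70], [50, 50]])

# Wilcoxon signed-rank test on hypothetical paired before/after values.
before = [48, 30, 55, 62, 41]
after = [28, 32, 40, 35, 30]
stat, p_wilcoxon = wilcoxon(before, after)

# Cox proportional hazards model on invented survival data.
df = pd.DataFrame({
    "months":   [12, 30, 54, 8, 60, 24, 40, 18, 36, 48],
    "event":    [1, 0, 0, 1, 0, 1, 0, 1, 1, 0],  # complication observed?
    "tcmcp":    [1, 1, 1, 0, 1, 0, 0, 0, 1, 0],
    "age_ge57": [0, 1, 0, 1, 0, 1, 1, 0, 1, 0],
})
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
print(cph.hazard_ratios_)  # full table with 95% CIs: cph.print_summary()
```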
## 3. Results
### 3.1. Baseline Characteristics of TCMCP and Non-TCMCP Patients (Table 1)
Table 1
Baseline characteristics of TCMCP and non-TCMCP patients, matched and unmatched by propensity score.
| Characteristic | Before: Total (n=9183) | Before: TCMCP (n=3869) | Before: non-TCMCP (n=5314) | χ2 | p value | After: Total (n=7034) | After: TCMCP (n=3517) | After: non-TCMCP (n=3517) | χ2 | p value |
|---|---|---|---|---|---|---|---|---|---|---|
| Age (year) |  |  |  | 10.649 | 0.001 |  |  |  | 3.395 | 0.065 |
| <57 | 4643 (50.6%) | 1879 (48.6%) | 2764 (52.0%) |  |  | 3223 (45.8%) | 1650 (46.9%) | 1573 (44.7%) |  |  |
| ≥57 | 4540 (49.4%) | 1990 (51.4%) | 2550 (48.0%) |  |  | 3811 (54.2%) | 1867 (53.1%) | 1944 (55.3%) |  |  |
| Sex |  |  |  | 55.078 | <0.001 |  |  |  | 1.164 | 0.281 |
| Female | 7534 (82.0%) | 3039 (78.5%) | 4494 (84.6%) |  |  | 5923 (84.2%) | 2945 (83.7%) | 2978 (84.7%) |  |  |
| Male | 1650 (18.0%) | 830 (21.5%) | 820 (15.4%) |  |  | 1111 (15.8%) | 572 (16.3%) | 539 (15.3%) |  |  |
| Hypertension | 2804 (30.5%) | 988 (25.5%) | 1816 (34.2%) | 78.751 | <0.001 | 2111 (30.0%) | 1072 (30.5%) | 1039 (29.5%) | 0.737 | 0.391 |
| Diabetes | 903 (9.8%) | 422 (10.9%) | 481 (12.4%) | 8.695 | 0.003 | 647 (9.20%) | 315 (8.96%) | 332 (9.44%) | 0.492 | 0.483 |
| Hyperlipidemia | 250 (2.7%) | 99 (2.6%) | 151 (2.8%) | 0.676 | 0.411 | 179 (2.54%) | 85 (2.42%) | 94 (2.67%) | 0.464 | 0.496 |
| cDMARDs | 6276 (68.3%) | 2697 (69.7%) | 3579 (67.4%) | 5.752 | 0.016 | 4968 (70.6%) | 2507 (71.3%) | 2461 (70.0%) | 1.450 | 0.229 |
| Corticosteroid | 5597 (60.9%) | 2191 (56.6%) | 3406 (64.1%) | 52.423 | <0.001 | 4291 (61.0%) | 2117 (60.2%) | 2174 (61.8%) | 1.942 | 0.163 |

Note: TCMCP: users of traditional Chinese medicine compound preparation; non-TCMCP: nonusers of traditional Chinese medicine compound preparation.

The baseline data, including sex, age, hypertension, diabetes, hyperlipidemia, and cDMARD and corticosteroid treatment, of 9813 patients with RA who were successfully followed up were recorded. The median follow-up time was 54.85 months. Before propensity score matching, there were significant differences between TCMCP users and non-TCMCP users in terms of sex, age, hypertension, diabetes, cDMARDs, and corticosteroid treatment (p<0.05). However, after matching, no significant difference was found between TCMCP users and non-TCMCP users in the same aspects (p>0.05).
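Propensity score matching of this kind is commonly implemented as a logistic regression propensity model followed by 1:1 nearest-neighbor matching. The outline below uses invented data and scikit-learn; it is a sketch of the general technique, not the authors' actual pipeline:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "tcmcp":        rng.integers(0, 2, n),  # treatment indicator
    "age_ge57":     rng.integers(0, 2, n),
    "female":       rng.integers(0, 2, n),
    "hypertension": rng.integers(0, 2, n),
    "diabetes":     rng.integers(0, 2, n),
})
covars = ["age_ge57", "female", "hypertension", "diabetes"]

# 1. Estimate the propensity score: P(TCMCP = 1 | covariates).
ps_model = LogisticRegression().fit(df[covars], df["tcmcp"])
df["ps"] = ps_model.predict_proba(df[covars])[:, 1]

# 2. 1:1 nearest-neighbor matching on the propensity score.
treated = df[df["tcmcp"] == 1]
control = df[df["tcmcp"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]])

# 3. Check covariate balance in the matched sample.
print(matched.groupby("tcmcp")[covars].mean())
```

In practice a caliper and matching without replacement are usually added; this sketch omits both for brevity.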
### 3.2. Changes in the RA-Related Inflammatory and Immune Indices after TCMCP Treatment (Table 2)
Table 2
Changes in the rheumatoid arthritis-related inflammatory and immune indices after administration of TCMCP (n=3517).
| Index | Before treatment | After treatment | Z | p value | Reference range |
|---|---|---|---|---|---|
| ESR (median (Q1, Q3), mm/h) | 48.0 (29.00, 70.00) | 28.00 (16.00, 44.00) | -34.498 | <0.001 | 2-6 |
| CRP (median (Q1, Q3), mg/L) | 24.29 (7.00, 50.57) | 2.00 (0.48, 8.66) | -40.240 | <0.001 | 0-5 |
| IgA (median (Q1, Q3), g/L) | 2.65 (2.00, 3.48) | 2.48 (1.91, 3.22) | -15.313 | <0.001 | 0.7-4.06 |
| IgG (median (Q1, Q3), g/L) | 12.98 (10.07, 16.70) | 12.00 (9.56, 15.30) | -18.241 | <0.001 | 6.8-14.5 |
| IgM (median (Q1, Q3), g/L) | 1.26 (0.89, 1.66) | 1.28 (0.91, 1.72) | -3.983 | <0.001 | 0.3-2.2 |
| C3 (median (Q1, Q3), g/L) | 110.60 (87.70, 129.90) | 100.20 (81.70, 115.10) | -23.671 | <0.001 | 75-135 |
| C4 (median (Q1, Q3), g/L) | 23.50 (15.6, 30.10) | 19.30 (12.60, 24.90) | -27.145 | <0.001 | 9-36 |
| Anti-CCP (median (Q1, Q3), mmol/L) | 240.90 (105.92, 473.98) | 220.45 (84.15, 452.69) | -6.170 | <0.001 | <25 |
| RF (median (Q1, Q3), U/mL) | 101.70 (33.55, 244.50) | 88.85 (27.53, 216.45) | -20.372 | <0.001 | 0-14 |

Note: TCMCP: traditional Chinese medicine compound preparation; ESR: erythrocyte sedimentation rate; CRP: C-reactive protein; IgA: immunoglobulin A; IgM: immunoglobulin M; IgG: immunoglobulin G; C3: complement C3; C4: complement C4; anti-CCP: anti-cyclic citrullinated peptide; RF: rheumatoid factor. Z is the standardized test statistic for the comparison before and after TCMCP treatment; the p value compares values before and after TCMCP treatment.

Hospitalization data of 3517 patients in the matched TCMCP group who received Xin’an Jianpi Tongbi prescription during hospitalization were collected and analyzed. Their posttreatment inflammatory and immune indices were lower than those before treatment (p<0.05).
### 3.3. Association Analysis of TCMCPs with RA-Related Inflammatory and Immune Indices (Table 3)
Table 3
Association between traditional Chinese medicine compound preparations and rheumatoid arthritis-related inflammatory and immune indices.
| TCMCP | Index | χ2 | p value | OR | 95% CI |
|---|---|---|---|---|---|
| XFC | CRP ↓ | 4.264 | 0.039 | 1.216 | 1.010-1.463 |
| XFC | ESR ↓ | 9.026 | 0.003 | 1.298 | 1.095-1.539 |
| XFC | C4 ↓ | 4.820 | 0.028 | 1.258 | 1.025-1.544 |
| HQC | CRP ↓ | 24.65 | <0.001 | 1.641 | 1.348-1.998 |
| HQC | ESR ↓ | 10.001 | 0.002 | 1.324 | 1.112-1.575 |
| HQC | C4 ↓ | 5.095 | 0.024 | 1.272 | 1.032-1.569 |
| HQC | IgG ↓ | 5.509 | 0.019 | 1.247 | 1.037-1.499 |
| HQC | IgA ↓ | 5.285 | 0.022 | 1.237 | 1.032-1.484 |

Note: TCMCP: users of traditional Chinese medicine compound preparation; XFC: Xinfeng capsule; HQC: Huangqin Qingre capsule; ESR: erythrocyte sedimentation rate; CRP: C-reactive protein; C4: complement C4; IgG: immunoglobulin G; IgA: immunoglobulin A. “↓” represents a decrease in the quantitative variable, indicating that the laboratory indicator improved after TCMCP treatment.

We further analyzed the association between Xin’an Jianpi Tongbi prescription and RA-related inflammatory and immune indices. The results indicated that XFC was positively correlated with a decrease in CRP (p=0.039, OR=1.216), ESR (p=0.003, OR=1.298), and C4 (p=0.028, OR=1.258) levels. Similarly, HQC was positively correlated with a decrease in CRP (p<0.001, OR=1.641), ESR (p=0.002, OR=1.324), C4 (p=0.024, OR=1.272), IgG (p=0.019, OR=1.247), and IgA (p=0.022, OR=1.237) levels.
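The per-index associations above are chi-square tests with odds ratios on 2x2 tables (TCMCP use vs. index decrease). A minimal illustration with invented counts:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = XFC used (yes/no);
# columns = CRP decreased (yes/no). Counts are invented.
table = np.array([[820, 480],
                  [1350, 960]])

chi2, p, dof, _ = chi2_contingency(table)
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, OR = {odds_ratio:.3f}")
```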
### 3.4. Kaplan-Meier Curves for a Composite Endpoint for Treatment Failure for TCMCP Users versus Non-TCMCP Users (Figure 2)
Figure 2
Kaplan-Meier curves for a composite endpoint for treatment failure for TCMCP users versus non-TCMCP users. The composite endpoint prognosis for treatment failure was better in TCMCP users (HR=0.75 (0.71-0.80), p<0.001) than in non-TCMCP users. The p value represents the comparison of the composite endpoint for treatment failure between TCMCP users and non-TCMCP users by the log-rank test. TCMCP: traditional Chinese medicine compound preparation.

The results of the log-rank test showed that TCMCP users had a better composite endpoint prognosis for treatment failure (HR=0.75 (0.71-0.80), p<0.001) than non-TCMCP users.
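A Kaplan-Meier comparison with a log-rank test can be sketched as follows (lifelines assumed; the simulated follow-up times are illustrative, not the study's data):

```python
import matplotlib.pyplot as plt
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

# Hypothetical follow-up times (months) and event indicators.
t_tcmcp = rng.exponential(80, 300)  # TCMCP users
t_other = rng.exponential(60, 300)  # non-users
e_tcmcp = rng.integers(0, 2, 300)   # 1 = treatment failure observed
e_other = rng.integers(0, 2, 300)

kmf = KaplanMeierFitter()
ax = kmf.fit(t_tcmcp, e_tcmcp, label="TCMCP").plot_survival_function()
kmf.fit(t_other, e_other, label="non-TCMCP").plot_survival_function(ax=ax)

result = logrank_test(t_tcmcp, t_other,
                      event_observed_A=e_tcmcp, event_observed_B=e_other)
print(f"log-rank p = {result.p_value:.4f}")
plt.show()
```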
### 3.5. Cox Regression Model for Analysis of Risk Factors for Four RA-Related Complications (Table 4) and Visualization of the Analysis Results (Figure 3)
Table 4
Analysis of risk factors for the four rheumatoid arthritis-related complications using the Cox regression model.
| Variable | Endpoint events, n (%) | Univariate HR | 95% CI | p value | Multivariate HR | 95% CI | p value |
|---|---|---|---|---|---|---|---|
| **Readmission: 3253 (46.2%)** |  |  |  |  |  |  |  |
| TCMCP | 1510 (21.5%) | 0.786 | 0.733-0.842 | <0.001 | 0.793 | 0.740-0.850 | <0.001 |
| Age ≥57 vs. <57 years | ≥57: 1794 (25.5%); <57: 1459 (20.7%) | 1.041 | 0.971-1.115 | 0.257 |  |  |  |
| Male vs. female | male: 486 (6.9%); female: 2767 (39.3%) | 0.910 | 0.826-1.002 | 0.054 |  |  |  |
| Hypertension | 1043 (14.8%) | 1.525 | 1.416-1.643 | <0.001 | 1.519 | 1.410-1.636 | <0.001 |
| Diabetes | 277 (3.9%) | 1.091 | 0.965-1.234 | 0.165 |  |  |  |
| Hyperlipidemia | 85 (1.2%) | 1.350 | 1.088-1.675 | 0.006 | 1.340 | 1.080-1.663 | 0.008 |
| **Sjogren’s syndrome: 965 (13.7%)** |  |  |  |  |  |  |  |
| TCMCP | 414 (5.9%) | 0.684 | 0.602-0.777 | <0.001 | 0.674 | 0.593-0.766 | <0.001 |
| Age ≥57 vs. <57 years | ≥57: 634 (9.0%); <57: 331 (4.7%) | 1.633 | 1.429-1.865 | <0.001 | 1.605 | 1.404-1.835 | <0.001 |
| Male vs. female | male: 172 (2.4%); female: 793 (11.3%) | 1.123 | 0.952-1.325 | 0.167 |  |  |  |
| Hypertension | 335 (4.8%) | 1.660 | 1.451-1.898 | <0.001 | 1.566 | 1.369-1.792 | <0.001 |
| Diabetes | 88 (1.3%) | 1.163 | 0.934-1.448 | 0.177 |  |  |  |
| Hyperlipidemia | 27 (0.4%) | 1.402 | 0.956-2.057 | 0.084 |  |  |  |
| **Surgical treatment: 182 (2.6%)** |  |  |  |  |  |  |  |
| TCMCP | 82 (1.2%) | 0.724 | 0.540-0.970 | 0.030 | 0.710 | 0.530-0.952 | 0.022 |
| Age ≥57 vs. <57 years | ≥57: 126 (1.8%); <57: 56 (0.8%) | 1.990 | 1.452-2.726 | <0.001 | 1.947 | 1.416-2.677 | <0.001 |
| Male vs. female | male: 33 (0.5%); female: 149 (2.1%) | 1.137 | 0.780-1.388 | 0.505 |  |  |  |
| Hypertension | 52 (0.7%) | 1.005 | 0.728-1.388 | 0.975 |  |  |  |
| Diabetes | 23 (0.3%) | 1.581 | 1.021-2.448 | 0.040 | 1.366 | 0.878-2.126 | 0.167 |
| Hyperlipidemia | 8 (0.1%) | 1.905 | 0.938-3.871 | 0.075 |  |  |  |
| **All-cause death: 215 (3.1%)** |  |  |  |  |  |  |  |
| TCMCP | 93 (1.3%) | 0.758 | 0.578-0.994 | 0.045 | 0.726 | 0.553-0.952 | 0.020 |
| Age ≥57 vs. <57 years | ≥57: 158 (2.2%); <57: 57 (0.8%) | 2.314 | 1.710-3.134 | <0.001 | 2.060 | 1.509-2.812 | <0.001 |
| Male vs. female | male: 53 (0.8%); female: 162 (2.3%) | 1.687 | 1.237-2.301 | 0.001 | 1.517 | 1.106-2.080 | 0.010 |
| Hypertension | 72 (1.0%) | 1.590 | 1.192-2.119 | 0.002 | 1.506 | 1.128-2.012 | 0.006 |
| Diabetes | 30 (0.4%) | 1.880 | 1.278-2.766 | 0.001 | 1.596 | 1.080-2.358 | 0.019 |
| Hyperlipidemia | 5 (0.1%) | 1.209 | 0.497-2.938 | 0.676 |  |  |  |

Abbreviation: TCMCP: traditional Chinese medicine compound preparation.

Figure 3
Multivariate regression analysis of rheumatoid arthritis-related complications: (a) TCMCP is a protective factor for readmission, whereas hypertension and hyperlipidemia are risk factors; (b) TCMCP is a protective factor for Sjogren’s syndrome, whereas higher age and hypertension are risk factors; (c) TCMCP is a protective factor for surgical treatment, whereas higher age is a risk factor; (d) TCMCP is a protective factor for all-cause death, whereas higher age, male sex, hypertension, and diabetes are risk factors. TCMCP: traditional Chinese medicine compound preparation.
Further, we used univariate and multivariate Cox regression to analyze risk factors for the four RA-related complications, namely, readmission, Sjogren’s syndrome, surgical treatment, and all-cause death. The results showed that TCMCPs reduced the risk of readmission, Sjogren’s syndrome, surgical treatment, and all-cause death by 20.7%, 32.6%, 29.0%, and 27.4%, respectively. Advancing age increased the risk of Sjogren’s syndrome, surgical treatment, and all-cause death by 60.5%, 94.7%, and 106.0%, respectively. Comorbid hypertension increased the risk of readmission, Sjogren’s syndrome, and all-cause death by 51.9%, 56.6%, and 50.6%, respectively. Male sex and comorbid diabetes increased the risk of all-cause death by 51.7% and 59.6%, respectively. Hyperlipidemia was associated with a 34.0% increased risk of readmission.
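The percentage figures above follow directly from the multivariate hazard ratios: for a protective factor the risk reduction is (1 - HR) x 100%, and for a risk factor the increase is (HR - 1) x 100%. A quick check:

```python
# Multivariate hazard ratios for TCMCP from Table 4.
hr = {
    "readmission": 0.793,
    "Sjogren's syndrome": 0.674,
    "surgical treatment": 0.710,
    "all-cause death": 0.726,
}
for outcome, value in hr.items():
    print(f"TCMCP, {outcome}: {(1 - value) * 100:.1f}% lower risk")
# readmission 20.7%, Sjogren's 32.6%, surgery 29.0%, death 27.4%
```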
### 3.6. Risk of RA-Related Complications at Different Exposure Times (Table 5)
Table 5
Hazard ratios and 95% confidence intervals of the risk of rheumatoid arthritis-related complications at different exposure times.
| Exposure group | Total | Complications, n (%) | HR | 95% CI | p value |
|---|---|---|---|---|---|
| None (<1 month) | 3825 | 2877 (75.2%) | 1 (reference) |  |  |
| Low (1-3 months) | 417 | 271 (65.0%) | 0.994 | 0.868-1.137 | 0.926 |
| Medium (3-6 months) | 395 | 241 (61.0%) | 0.796 | 0.691-0.918 | 0.002 |
| High (≥6 months) | 2397 | 1226 (51.1%) | 0.699 | 0.650-0.751 | <0.001 |

We found that the use of TCMCPs was associated with a lower risk of RA-related complications, and this risk varied with exposure time. Notably, the risk of RA-related complications in TCMCP users with high-exposure intensity (adjusted HR=0.699, 95% CI=0.650-0.751, p<0.001) and medium-exposure intensity (adjusted HR=0.796, 95% CI=0.691-0.918, p=0.002) was significantly lower than that in non-TCMCP patients.
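A dose-response analysis of this kind can be sketched by dummy-coding the exposure group against the nonexposure reference in a Cox model; the data below are simulated and the variable names are assumptions, so the fitted HRs will not reproduce Table 5:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "months":   rng.exponential(40, n),
    "event":    rng.integers(0, 2, n),
    "exposure": rng.choice(["none", "low", "medium", "high"], size=n),
})

# Dummy-code exposure; "none" (<1 month) is the reference group.
dummies = pd.get_dummies(df["exposure"]).drop(columns="none").astype(int)
model_df = pd.concat([df[["months", "event"]], dummies], axis=1)

cph = CoxPHFitter()
cph.fit(model_df, duration_col="months", event_col="event")
print(cph.hazard_ratios_)  # HR for low/medium/high vs the "none" group
```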
## 4. Discussion
In this population-based cohort study, a large amount of data on RA patients from the First Affiliated Hospital of Anhui University of Chinese Medicine was used to evaluate the effects of TCMCPs on clinical immunological and inflammatory indicators and RA-related complications. We found that RA patients treated with Xin’an Jianpi Tongbi preparation not only exhibited lower immune and inflammatory indices than non-TCMCP users but were also at lower risk of RA-related complications.

TCM has multicomponent, multitarget synergistic anti-inflammatory and immunomodulatory effects. Previously, we found that TCMCPs significantly improved RA-related immunological and inflammatory indices [16]. Modern pharmacological studies have also reported that the constituent drugs of Xin’an Jianpi Tongbi preparation, i.e., Astragalus membranaceus, Semen coicis, Tripterygium wilfordii, Scolopendra spp., Scutellaria baicalensis, Gardenia jasminoides, Poria cocos, Epimedium brevicornu, Cinnamomum cassia, and Curcumae Longae, can improve the RA-related immunological and inflammatory response. Among them, active agents in Astragalus membranaceus have been shown to improve RA-induced synovial and joint injury [27, 28]. Semen coicis extract, including polyphenols and polysaccharides, has immunological, antioxidant, and anti-inflammatory effects [29]. Tripterygium wilfordii lactone, the active ingredient of Tripterygium wilfordii [30, 31], inhibited cell growth and the inflammatory response of RA-associated fibroblast-like synoviocytes by regulating the expression of the hsa-circ-0003353/microRNA-31-5p/cyclin-dependent kinase 1 axis [32]. Scolopendra spp. combined with TCM has shown significant clinical efficacy in patients with RA [33]. Baicalin had an anti-inflammatory effect in a collagen-induced arthritis rat model, possibly by inhibiting the toll-like receptor 2/myeloid differentiation factor 88/NF-kappa B p65 signaling pathway [34]. Geniposide exhibited anti-inflammatory and antiangiogenic pharmacological effects through the inhibition of vascular endothelial growth factor-induced angiogenesis in vascular endothelial cells by reducing the translocation of sphingosine kinase 1 [35]. Poria cocos polysaccharide enhanced the secretion of immune stimulants but inhibited the secretion of immune inhibitors, enhancing the host immune response [36]. Icariin inhibited cell proliferation by interfering with the cell cycle in RA fibroblast-like synoviocytes, promoting mitochondria-dependent apoptosis and intracellular reactive oxygen species production, which potentially improves RA outcomes [37]. Cinnamomum cassia extract had a therapeutic effect on RA, attributed to its antiproliferative and antimigration effects on synovial fibroblasts [38]. A systematic review showed that curcumin had a significant effect on the clinical and inflammatory parameters of RA and significantly improved morning stiffness, walking time, and joint swelling [39]. Thus, the pharmacological effects of TCM support the use of TCMCPs for reducing the risk of RA-related complications. These results demonstrate that TCMCPs could act as a protective factor against RA-related complications (readmission, Sjogren’s syndrome, surgical treatment, and all-cause death).

However, we also found that RA patients with comorbidities such as hypertension or hyperlipidemia had a significantly higher risk of readmission. A study showed that hypertension and dyslipidemia were the most common complications of RA [40].
Consistent with our results, these classic complications increased the risk of recurrence of RA inflammation [41], potentially contributing to increased readmission of RA patients. Advanced age and hypertension were shown to be significantly associated with the extra-articular manifestations of RA [42, 43], which is consistent with our findings. An analysis based on British electronic medical records showed that the incidence of joint replacement increased with age [12]. Our results revealed a 94.7% increased risk of surgical treatment in patients with RA aged 57 years and older, which corroborates findings from previous studies. Consistent with other studies, our results also showed that older patients with RA, men, and those with hypertension and diabetes had a higher risk of death [44–46]. Collectively, these results show that advanced age is a significant risk factor for extra-articular diseases, surgical treatment, and all-cause death. In addition, comorbid hypertension is a risk factor for readmission, extra-articular diseases, and all-cause death, whereas hyperlipidemia and diabetes are risk factors for readmission and all-cause death, respectively. Among patients with RA, the risk of all-cause death is higher in men than in women.

Our study further found that medium- and high-exposure intensity, especially high-exposure intensity, were significantly associated with a reduced risk of RA-related complications. This indicates that long-term treatment with TCM could decrease the frequency of RA-related complications, which is consistent with the results of previous clinical data mining studies [16, 47]. Our results also suggest that long-term exposure to Xin’an Jianpi Tongbi preparation reduces RA-related complications.

This study had some notable limitations. First, our research included no radiological data with which to measure the severity of RA. Although we retrieved radiological data from the hospital information system early on, these data were textual, and we lacked models and algorithms to process textual data. Second, biologic DMARDs were not included in this study owing to insufficient data, which constitutes a major difference from common practice in RA treatment and prevents appropriate comparisons with most of the literature on RA. Third, the recurrence frequency per unit time was not calculated for frequently hospitalized patients, which also differs from common practice and hinders proper comparison with most of the RA literature. Fourth, the lack of data on adverse events of TCMCPs did not allow a comprehensive analysis of the role of the drugs. Finally, we studied only Sjogren’s syndrome and lacked data on other extra-articular manifestations of RA, which limits the scope of our findings. We intend to address these limitations in future research. Nevertheless, our study has two significant strengths: the clinical advantage of using TCM and the statistical advantage of a large sample. This was a population-based cohort study reflecting real-world clinical administration of medication, making our results more clinically acceptable. The large sample size provides sufficient statistical power to study the effect of TCMCPs on RA-related clinical indicators.
## 5. Conclusion
This population-based cohort study showed that TCMCP use, as well as long-term exposure to TCMCP in patients with RA, decreased the risk of RA-related complications, including readmission, Sjogren’s syndrome, surgical treatment, and all-cause death. These findings are expected to inform clinical decisions regarding the use of TCMCP in RA management.
---
*Source: 1019290-2023-02-23.xml*
# An Alternative Sensitivity Approach for Longitudinal Analysis with Dropout
**Authors:** Amal Almohisen; Robin Henderson; Arwa M. Alshingiti
**Journal:** Journal of Probability and Statistics
(2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1019303
---
## Abstract
In any longitudinal study, dropout before the final timepoint can rarely be avoided. The chosen dropout model is commonly one of the following types: Missing Completely at Random (MCAR), Missing at Random (MAR), Missing Not at Random (MNAR), or Shared Parameter (SP). In this paper we estimate the parameters of the longitudinal model for simulated and real data using the Linear Mixed Effect (LME) method. We investigate the consequences of misspecifying the missingness mechanism by deriving the so-called least false values, the values to which the parameter estimates converge when the assumptions are wrong. Knowledge of the least false values allows us to conduct a sensitivity analysis, which we illustrate. This method provides an alternative to a local misspecification sensitivity procedure that has been developed for likelihood-based analysis. We compare the results obtained by the proposed method with those found using the local misspecification method, and we apply both the local misspecification and least false methods to estimate the bias and sensitivity of parameter estimates in a clinical trial example.
---
## Body
## 1. Introduction
Missing data are common in various settings, including surveys, clinical trials, and longitudinal studies. Methods for handling missing data depend strongly on the mechanism that generated the missing values, as well as on the distributional and modeling assumptions made at various stages. This study focuses only on Missing at Random and Missing Not at Random dropout models under a Linear Mixed Effect (LME) model.

Much of the literature on missing data problems assumes the dropout model is MAR rather than MNAR, but this assumption is clearly limited [1]. We investigate the consequences of misspecifying the missingness mechanism by deriving the so-called least false values, which are the values to which the parameter estimates converge when the assumptions are wrong. Theoretical least false values for the LME method are derived and illustrated under Missing at Random (MAR) and Missing Not at Random (MNAR) dropout, with MAR taken as the assumed, possibly misspecified, dropout model.

Copas and Eguchi [2] gave a formula for estimating the bias under such misspecification using a likelihood approach. As the LME is a likelihood-based method, the estimates obtained through the Copas and Eguchi method can be compared with the LME least false estimates. The procedure is applied by adding a tilt to the MAR dropout model to produce what Copas and Eguchi call local misspecification.

We elaborate the local model uncertainty proposed by Copas and Eguchi [2] and illustrate it both when model misspecification is present and when the data are incomplete. Furthermore, we find that the Copas and Eguchi method gives results very similar to those of the least false method. Misspecification is treated by assuming MAR when the truth is in fact MNAR. Besides Copas and Eguchi [2], many other authors have developed methods to assess the sensitivity of inference under the MAR assumption [3, 4]. Moreover, Lin et al. [5] extended the Copas and Eguchi method by considering a doubly misspecified model rather than only a single misspecification. There has also been interest in the Copas and Eguchi method from a Bayesian perspective [6–10], and, recently, a simulation-based sensitivity analysis was performed in [11].

In Section 2, the LME method is presented and we show how to calculate the least false values. A description of the Copas and Eguchi method is provided in Section 3.1, followed by an example in Section 3.2. A simulation study is described in Section 4: the Copas and Eguchi bias estimates are compared with the least false values derived from the LME method, the coverage of nominal confidence intervals is shown, and a sensitivity analysis is conducted to assess how inference can depend on missing data. In Section 5, the methods are applied to data from a clinical trial with two treatments and two measurement times, as introduced and analysed by Matthews et al. [12], and the results obtained by the proposed method are compared with those found using the Copas and Eguchi method.
## 2. Linear Mixed Effect (LME) Method
A statistical model containing both fixed effects and random effects is called a mixed effect model. These models have been shown to be effective in many disciplines in the biological, physical, and social sciences. Usually a linear form is assumed. Reference [13] defined the response $Y$ in the LME model through the decomposition

$$\text{measured response} = \text{covariate effects} + \text{random effects} + \text{measurement error}. \tag{1}$$

For example, a simplified version of the Laird and Ware [14] mixed model approach for longitudinal data would include a random effect in the intercept term of a model for the responses. If $Y_{ij}$ is the response at time $j$ on subject $i$, the model is

$$Y_{ij} = \mu_{ij} + U_i + \epsilon_{ij}, \tag{2}$$

where $\mu_{ij}$ is the marginal mean, which will usually be a linear function of covariates, $\epsilon_{ij}$ is independent Gaussian noise, and $U_i$ is a realisation of a zero-mean scalar Gaussian random variable. Since $U_i$ has zero mean, the marginal mean of $Y_{ij}$ remains $\mu_{ij}$ after integrating out $U_i$. However, since $U_i$ is common to all $j$, we get dependence between observations on the same subject: if $U_i$ is positive, for example, all of subject $i$'s responses tend to lie above the marginal mean, and so on. In the context of longitudinal data, some reviews of linear mixed models can be found in [15, 16].
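To make the dependence structure in model (2) concrete, the following minimal sketch simulates a two-timepoint random intercept model in Python; it is illustrative only (the variance components and means are hypothetical, not from the paper) and checks that the marginal means are unaffected by $U_i$ while the within-subject correlation is approximately $\sigma_U^2/(\sigma_U^2 + \sigma_\epsilon^2)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5000, 2                   # subjects, timepoints
sigma_U, sigma_eps = 0.8, 0.6    # hypothetical variance components
mu = np.array([1.0, 2.0])        # hypothetical marginal means mu_ij, j = 1, 2

U = rng.normal(0.0, sigma_U, size=(n, 1))        # shared random intercept U_i
eps = rng.normal(0.0, sigma_eps, size=(n, m))    # independent noise eps_ij
Y = mu + U + eps                                 # model (2): Y_ij = mu_ij + U_i + eps_ij

print(Y.mean(axis=0))                  # approximately mu, since E[U_i] = 0
print(np.corrcoef(Y[:, 0], Y[:, 1]))   # off-diagonal approx 0.64 = 0.8^2/(0.8^2 + 0.6^2)
```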
### 2.1. Assumptions
Suppose there are $n$ individuals in a study and each provides longitudinal responses $Y$ and dropout information $R$. Generally, we will assume a linear model for $Y$ (in the absence of dropout) and logistic models for the probability of continuing to the next timepoint $t+1$ given that a subject is still under observation at time $t$. At times, we refer to a true or generating model as the way in which data are obtained, and to an assumed or fitting model as that chosen by the analyst for estimation. For simplicity, in this work the study assumes that there are just two observations or treatment periods; the methods are of course more general.

At time 1, a measurement is provided for all subjects, denoted by $Y_{i1}$ for subject $i$. Then at time 2, some subjects have dropped out before measurement. Let $R_i = 1$ indicate that there is a measurement at time 2 and $R_i = 0$ otherwise. Let $Y_i = (Y_{i1}, Y_{i2})^T$ and assume $E[Y_i] = X_i\beta$, where $\beta$ is a parameter vector of dimension $p$ and $X_i$ is the design matrix associated with subject $i$, of dimension $2 \times p$. The standard model assumes just one covariate and is

$$Y_{1i} = \beta_1^G + \beta_2^G x_i + U_i + \epsilon_{1i}, \qquad Y_{2i} = \beta_3^G + \beta_4^G x_i + U_i + \epsilon_{2i}, \qquad Y_i = X_i\beta^G + U_i\mathbf{1} + \epsilon_i, \tag{3}$$

where

$$Y_i = \begin{pmatrix} Y_{1i} \\ Y_{2i} \end{pmatrix}, \quad X_i = \begin{pmatrix} 1 & x_i & 0 & 0 \\ 0 & 0 & 1 & x_i \end{pmatrix}, \quad \beta^G = \left(\beta_1^G, \beta_2^G, \beta_3^G, \beta_4^G\right)^T, \quad \mathbf{1} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad \epsilon_i = \begin{pmatrix} \epsilon_{1i} \\ \epsilon_{2i} \end{pmatrix},$$

and $x_i \sim N(0, \sigma_x^2)$, $U_i \sim N(0, \sigma_U^2)$, $\epsilon_{1i} \sim N(0, \sigma_{\epsilon_1}^2)$, and $\epsilon_{2i} \sim N(0, \sigma_{\epsilon_2}^2)$. Let $\sigma_1^2 = \sigma_U^2 + \sigma_{\epsilon_1}^2$, $\sigma_2^2 = \sigma_U^2 + \sigma_{\epsilon_2}^2$, and $\rho = \sigma_U^2/(\sigma_1\sigma_2)$.

Returning to the general case, the influence of missing data depends on the missingness mechanism, that is, the probability model for missingness. Knowing the reason for the missingness is obviously helpful in handling missing data. There are four general missingness mechanisms, as introduced by Little and Rubin [17] and Wu and Carroll [18]: Missing Completely at Random (MCAR), Missing at Random (MAR), Missing Not at Random (MNAR), and Shared Parameter (SP).

For simplicity in this investigation, the dropout parameters are assumed to be common between timepoints. Let the dropout parameters be $\theta = (\theta_0, \theta_1)$. The MAR dropout logistic model is then

$$\pi_i(\theta) = P\left(R_i = 1 \mid Y_{1i}, Y_{2i}\right) = \frac{e^{\theta_0 + \theta_1 Y_{1i}}}{1 + e^{\theta_0 + \theta_1 Y_{1i}}} = \operatorname{expit}\left(\theta_0 + \theta_1 Y_{1i}\right). \tag{4}$$

The missingness is called Missing Not at Random if it depends on unrecorded information that predicts the missing values. An example is a patient who is unsatisfied with a particular treatment and is therefore more likely to quit the study. If missingness is not at random, then some bias is expected in inferences. Let the dropout parameters now be $(\theta, \theta_2)$. The MNAR version for the two-timepoint example is the logistic model

$$\pi_i(\theta, \theta_2) = P\left(R_i = 1 \mid Y_{1i}, Y_{2i}\right) = \frac{e^{\theta_0 + \theta_1 Y_{1i} + \theta_2 Y_{2i}}}{1 + e^{\theta_0 + \theta_1 Y_{1i} + \theta_2 Y_{2i}}} = \operatorname{expit}\left(\theta_0 + \theta_1 Y_{1i} + \theta_2 Y_{2i}\right). \tag{5}$$
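As a complement to (3)–(5), the sketch below generates the two-timepoint data and applies the MAR and MNAR dropout models. The $\beta$ and variance settings follow the numerical investigation in Section 2.3, but the dropout coefficients $\theta_0, \theta_1, \theta_2$ are hypothetical choices for illustration, not values used in the paper.

```python
import numpy as np

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
n = 10_000
b1, b2, b3, b4 = -2.0, -2.0, -1.0, -1.0   # beta, as in Section 2.3
sigma_U2 = 0.5                             # gives rho = 0.5 when sigma_1 = sigma_2 = 1

x = rng.normal(size=n)                     # x_i ~ N(0, 1)
U = rng.normal(0.0, np.sqrt(sigma_U2), n)
Y1 = b1 + b2 * x + U + rng.normal(0.0, np.sqrt(1.0 - sigma_U2), n)
Y2 = b3 + b4 * x + U + rng.normal(0.0, np.sqrt(1.0 - sigma_U2), n)

theta0, theta1, theta2 = 0.5, 0.5, 1.0     # hypothetical dropout coefficients
pi_mar = expit(theta0 + theta1 * Y1)                   # equation (4): MAR
pi_mnar = expit(theta0 + theta1 * Y1 + theta2 * Y2)    # equation (5): MNAR
R_mar = rng.binomial(1, pi_mar)            # R_i = 1: measured at time 2
R_mnar = rng.binomial(1, pi_mnar)
print("retained at time 2 (MAR): ", R_mar.mean())
print("retained at time 2 (MNAR):", R_mnar.mean())
```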
### 2.2. LME Least False
In this section, the Linear Mixed Effect (LME) method is investigated, which is based on a maximum likelihood estimating approach. The performance of the LME method under MAR and MNAR dropout is examined, and theoretical least false values are derived and illustrated. Assuming a Gaussian random intercept model, the score equation of current interest is [19]
$$(6)\quad \sum_{i=1}^{n}\left[R_i X_i^T V^{-1}\left(Y_i-X_i\hat\beta\right)+\frac{1-R_i}{\sigma_1^2}\,x_{i1}\left(Y_{i1}-x_{i1}^T\hat\beta\right)\right]=0,$$
where $Y_i=(Y_{i1},Y_{i2})^T$, $X_i$ is the $2\times 4$ design matrix associated with subject $i$, $X_i=\begin{pmatrix}1&x_i&0&0\\0&0&1&x_i\end{pmatrix}$, and we will use $x_{i1}^T$ as notation for the first row of $X_i$, so that $x_{i1}^T=(1,x_i,0,0)$; also $\hat\beta^T=(\hat\beta_1,\hat\beta_2,\hat\beta_3,\hat\beta_4)$ and $V=\begin{pmatrix}\sigma_1^2&\rho\sigma_1\sigma_2\\\rho\sigma_1\sigma_2&\sigma_2^2\end{pmatrix}$. We can rearrange the terms in (6) to give
$$(7)\quad \sum_{i=1}^{n}\left[R_i X_i^T V^{-1}X_i+\frac{1-R_i}{\sigma_1^2}\,x_{i1}x_{i1}^T\right]\hat\beta$$
$$(8)\quad =\sum_{i=1}^{n}\left[R_i X_i^T V^{-1}Y_i+\frac{1-R_i}{\sigma_1^2}\,x_{i1}Y_{i1}\right].$$
These components are, in detail,
$$(9)\quad V^{-1}=K\begin{pmatrix}\sigma_2^2&-\rho\sigma_1\sigma_2\\-\rho\sigma_1\sigma_2&\sigma_1^2\end{pmatrix},\qquad K=\frac{1}{\sigma_1^2\sigma_2^2(1-\rho^2)},$$
and
$$(10)\quad X_i^T V^{-1}X_i=K\begin{pmatrix}\sigma_2^2&\sigma_2^2 x_i&-\rho\sigma_1\sigma_2&-\rho\sigma_1\sigma_2 x_i\\ \sigma_2^2 x_i&\sigma_2^2 x_i^2&-\rho\sigma_1\sigma_2 x_i&-\rho\sigma_1\sigma_2 x_i^2\\ -\rho\sigma_1\sigma_2&-\rho\sigma_1\sigma_2 x_i&\sigma_1^2&\sigma_1^2 x_i\\ -\rho\sigma_1\sigma_2 x_i&-\rho\sigma_1\sigma_2 x_i^2&\sigma_1^2 x_i&\sigma_1^2 x_i^2\end{pmatrix}.$$
Also
$$(11)\quad x_{i1}x_{i1}^T=\begin{pmatrix}1&x_i&0&0\\x_i&x_i^2&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix}.$$
Similarly, for the right hand side of (8),
$$(12)\quad X_i^T V^{-1}Y_i=K\begin{pmatrix}\sigma_2^2 Y_{i1}-\rho\sigma_1\sigma_2 Y_{i2}\\ \sigma_2^2 Y_{i1}x_i-\rho\sigma_1\sigma_2 Y_{i2}x_i\\ \sigma_1^2 Y_{i2}-\rho\sigma_1\sigma_2 Y_{i1}\\ \sigma_1^2 Y_{i2}x_i-\rho\sigma_1\sigma_2 Y_{i1}x_i\end{pmatrix}.$$
Finally
$$(13)\quad x_{i1}Y_{i1}=\begin{pmatrix}Y_{i1}\\x_i Y_{i1}\\0\\0\end{pmatrix}.$$
We assume independent and identically distributed responses, with finite variance for the covariate and error distributions, and dropout probabilities bounded away from both zero and one. On dividing all sums by $n$, the weak law of large numbers applies and we can replace the sums with expectations as follows:
$$(14)\quad E\left[RX^TV^{-1}X+\frac{1-R}{\sigma_1^2}x_1x_1^T\right]\beta^*=E\left[RX^TV^{-1}Y+\frac{1-R}{\sigma_1^2}x_1Y_1\right].$$
The left hand side of (14) has two parts. First,
$$(15)\quad E\left[RX^TV^{-1}X\right]$$
$$(16)\quad =K\begin{pmatrix}\sigma_2^2E[R]&\sigma_2^2E[Rx]&-\rho\sigma_1\sigma_2E[R]&-\rho\sigma_1\sigma_2E[Rx]\\ \sigma_2^2E[Rx]&\sigma_2^2E[Rx^2]&-\rho\sigma_1\sigma_2E[Rx]&-\rho\sigma_1\sigma_2E[Rx^2]\\ -\rho\sigma_1\sigma_2E[R]&-\rho\sigma_1\sigma_2E[Rx]&\sigma_1^2E[R]&\sigma_1^2E[Rx]\\ -\rho\sigma_1\sigma_2E[Rx]&-\rho\sigma_1\sigma_2E[Rx^2]&\sigma_1^2E[Rx]&\sigma_1^2E[Rx^2]\end{pmatrix},$$
and second,
$$(17)\quad E\left[\frac{1-R}{\sigma_1^2}x_1x_1^T\right]=\frac{1}{\sigma_1^2}\begin{pmatrix}1-E[R]&E[x]-E[Rx]&0&0\\E[x]-E[Rx]&E[x^2]-E[Rx^2]&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix}.$$
Similarly, the right hand side is
$$(18)\quad K\begin{pmatrix}\sigma_2^2E[RY_1]-\rho\sigma_1\sigma_2E[RY_2]\\ \sigma_2^2E[RY_1x]-\rho\sigma_1\sigma_2E[RY_2x]\\ \sigma_1^2E[RY_2]-\rho\sigma_1\sigma_2E[RY_1]\\ \sigma_1^2E[RY_2x]-\rho\sigma_1\sigma_2E[RY_1x]\end{pmatrix}+\frac{1}{\sigma_1^2}\begin{pmatrix}E[Y_1]-E[RY_1]\\E[Y_1x]-E[RY_1x]\\0\\0\end{pmatrix}.$$
Expressions for $E[R]$, $E[Rx]$, $E[Rx^2]$, $E[RY_1]$, $E[RY_2]$, $E[RY_1x]$, and $E[RY_2x]$ have been obtained under the different dropout models. For illustration, the calculation of $E[R]$ under MAR is shown in the Supplementary Materials available at the journal website. Finally, to find the least false value $\beta^*$, we invert the matrix on the left hand side of (14) and multiply this inverse by the vector on the right hand side, which yields the least false values $\beta^{*T}=(\beta_1^*,\beta_2^*,\beta_3^*,\beta_4^*)$. In the following section, we present simulations showing how the LME method performs under the MAR and MNAR dropout models.
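The paper evaluates the expectations in (14) analytically (details in the Supplementary Materials). As an independent check, the following sketch approximates them by Monte Carlo: it builds the left and right hand sides of (7)-(8) on one large simulated sample and solves for β*. It reuses `simulate` from the sketch after Section 2.1, with the paper's default variance components; this is a sketch of ours, not the paper's computation.

```python
import numpy as np

s1 = s2 = 1.0; rho = 0.5
K = 1.0 / (s1**2 * s2**2 * (1 - rho**2))
Vinv = K * np.array([[s2**2, -rho * s1 * s2],
                     [-rho * s1 * s2, s1**2]])

def least_false(beta, theta0, theta1, theta2, n=100_000):
    """Monte Carlo approximation of beta* solving (14); simulate() as sketched above."""
    x, Y1, Y2, R = simulate(n, beta, theta0, theta1, theta2)
    A = np.zeros((4, 4))
    b = np.zeros(4)
    for xi, y1, y2, r in zip(x, Y1, Y2, R):
        Xi = np.array([[1.0, xi, 0.0, 0.0],
                       [0.0, 0.0, 1.0, xi]])
        xi1 = Xi[0]                               # first row of Xi
        Yi = np.array([y1, y2])
        A += r * Xi.T @ Vinv @ Xi + (1 - r) / s1**2 * np.outer(xi1, xi1)
        b += r * Xi.T @ Vinv @ Yi + (1 - r) / s1**2 * xi1 * y1
    return np.linalg.solve(A, b)                  # (beta1*, beta2*, beta3*, beta4*)

# Under MAR (theta2 = 0) the LME is consistent, so beta* should recover beta itself:
print(least_false((-2, -2, -1, -1), theta0=-0.5, theta1=0.5, theta2=0.0))
```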
### 2.3. Numerical Investigation
A scalar N(0,1) covariate x is generated, and then the longitudinal means μ1=β1+β2x and μ2=β3+β4x are formed. This is followed by (Y1,Y2) from a bivariate normal distribution with mean (μ1,μ2). Missingness is generated from (4) and (5) for the MAR and MNAR models, respectively. In all of the following simulations, unless stated otherwise, the parameter values β=(-2,-2,-1,-1), σx=σ1=σ2=1, and ρ=0.5 are used. In the following, we show the effect of dropout on the limiting values β3∗ and β4∗.

As LME provides consistent estimates under MAR, the least false values β3∗ and β4∗ are not affected by changing the dropout probabilities under MAR. Therefore, only MNAR settings are considered. From a contour plot of β3∗ under MNAR (Figure 1), in order to minimise the bias in β3∗, θ1 should be around zero. For negative θ1, dropout is associated with large U, so Y1 and Y2 both tend to be low if dropout does not occur. Hence β3∗ is lower than it should be. The opposite happens for positive θ1.

Figure 1
Contour plot of β3∗ under MNAR.

Figure 2 shows a contour plot of β4∗ under MNAR. Here, negative bias is obtained as θ1 moves away from zero in either direction. Such an attenuation of a regression effect is common when there are errors in variables [20], and a similar effect seems to operate here.

Figure 2
Contour plot of β4∗ under MNAR.

Having obtained the least false values, we propose their use in sensitivity analyses. Before doing so, we examine a sensitivity procedure for local misspecification as proposed by Copas and Eguchi [2].
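As a usage example, a coarse sweep with `least_false` from the sketch above gives the flavour of the contour maps in Figures 1 and 2; the grid, the fixed θ2, and the resolution are illustrative choices of ours (the paper's exact plot axes are not reproduced here).

```python
import numpy as np

# Map beta3* over a small (theta0, theta1) grid at a fixed MNAR theta2.
theta0_grid = np.linspace(-1.0, 1.0, 5)
theta1_grid = np.linspace(-1.0, 1.0, 5)
beta3_star = np.empty((5, 5))
for i, t0 in enumerate(theta0_grid):
    for j, t1 in enumerate(theta1_grid):
        beta3_star[i, j] = least_false((-2, -2, -1, -1), t0, t1,
                                       theta2=0.5, n=20_000)[2]
# Per the paper, the bias in beta3* should be smallest near theta1 = 0.
print(np.round(beta3_star, 2))
```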
## 3. The Effect of Local Misspecification of the Dropout Model When Using Likelihood-Based Methods under the MAR Assumption
In the previous section, we investigated the consequences of misspecifying the missingness mechanism by deriving the so-called least false values, which are the values the parameter estimates converge to when the assumptions may be wrong. As an alternative, Copas and Eguchi [2] give a formula to estimate the bias under such misspecification using a likelihood approach. As the LME is a likelihood-based method, we can compare the Copas and Eguchi method with the LME least false estimates. The procedure will be applied by adding a tilt to the MAR dropout model to provide what Copas and Eguchi [2] call local misspecification.
### 3.1. Description of Copas and Eguchi Method
We use the notation of Copas and Eguchi [2], denoting complete data by $Z$ and incomplete data by $Y$. There are two types of model: the true model and the assumed model. The true model, also called the generating model, describes how the data are actually generated or simulated. The assumed model, also known as the fitting model, is what we fit to the data. The true model for complete data is denoted by $g_Z=g_Z(z;\psi)$ and the corresponding true model for incomplete data is $g_Y=g_Y(y;\psi)$, which can be derived from $g_Z$. Here $\psi$ is a generic (vector) parameter. The assumed or working model is a parametric model $f_Z=f_Z(z;\psi)$ which gives the distribution of $Z$, and its marginal density is $f_Y=f_Y(y;\psi)$. Thus
$$(19)\quad f_Y=\int_{(y)} f_Z\,dz,$$
where the notation $(y)$ means integration over all missing values in $Z$ that are consistent with the observed $Y$.

Following Copas and Eguchi [2], a method is provided to approximate the bias in the estimation of the parameters of the misspecified model. We consider MAR as the working model and MNAR as the true model; the misspecification is thus caused by assuming MAR when the truth is MNAR. Suppose there is a random sample of $n$ observations, and the true model is given by $g_Z$, defined by equation (16) in Copas and Eguchi [2] as a tilt model:
$$(20)\quad g_Z=g_Z(z;\psi,\varepsilon,u_Z)=f_Z(z;\psi)\exp\{\varepsilon u_Z(z;\psi)\}.$$
Thus, the misspecification is determined by the quantity $\varepsilon u_Z(z;\psi)$: $\varepsilon$, which is assumed to be small, measures the size of the misspecification, while $u_Z(z;\psi)$ determines its direction. We assume $u_Z(z;\psi)$ has zero mean and unit variance under the working model $f_Z$. The misspecification is local because $\varepsilon$ is small. Hence $g_Z$ is close to $f_Z$ and we can write
$$(21)\quad \frac{g_Z}{f_Z}=\exp\{\varepsilon u_Z(z;\psi)\}.$$
Now if the model actually used to fit the data is $f_Z(z;\psi)$, then the limiting value of the MLE $\hat\psi$ as $n\to\infty$ is given by equation (18) in Copas and Eguchi [2]:
$$(22)\quad \psi_{g_Z}=\arg_\psi\left\{E_g\left[s_Z(z;\psi)\right]=0\right\}=\psi+\varepsilon I_Z^{-1}E_{f_Z}\left[u_Z(z;\psi)\,s_Z(z;\psi)\right],$$
where $s_Z(\cdot;\psi)=\partial\log f_Z/\partial\psi$ and $I_Z=E[-\partial^2\log f_Z/\partial\psi\,\partial\psi^T]$ are the score and information matrix for the model $f_Z$, respectively.

However, $f_Y$ will be considered as the working model for the marginal data. Copas and Eguchi [2] show that if (20) is true and $\varepsilon$ is small, then a similar approximation holds for the marginal data $Y$, i.e.,
$$(23)\quad g_Y=g_Y(y;\psi,\varepsilon,u_Y)=f_Y(y;\psi)\exp\{\varepsilon u_Y(y;\psi)\},$$
where again $u_Y(y;\psi)$ has zero mean and unit variance. In this case, according to equation (19) in Copas and Eguchi [2], the limiting value is
$$(24)\quad \psi_{g_Y}\approx\psi+\varepsilon I_Y^{-1}E_f\left[u_Y s_Y\right]=\psi+I_Y^{-1}E_f\left[\varepsilon u_Y s_Y\right],$$
where $s_Y(\cdot;\psi)=\partial\log f_Y/\partial\psi$ and $I_Y=E[-\partial^2\log f_Y/\partial\psi\,\partial\psi^T]$ are the score and information matrix for the model $f_Y$, respectively. To calculate the bias $I_Y^{-1}E_f[\varepsilon u_Y s_Y]$, we need the tilt $\varepsilon u_Y$. In the next section, we determine this quantity under MAR and MNAR in our setting of two timepoints.
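As a sanity check on (24), here is a deliberately tiny toy example of ours (not the paper's): the working model is $N(\psi,1)$ and the tilt direction is $u(z)=z-\psi$, for which the tilted truth is exactly $N(\psi+\varepsilon,1)$ and the bias formula is exact rather than approximate.

```python
import numpy as np

rng = np.random.default_rng(3)

# Working model f: N(psi, 1). Tilt u(z) = z - psi has zero mean, unit variance under f,
# and g(z) proportional to f(z) exp(eps * u(z)) works out to exactly N(psi + eps, 1).
psi, eps = 1.0, 0.05
z = rng.normal(psi + eps, 1.0, 1_000_000)   # a large sample from the tilted truth g

# The MLE under the working model is the sample mean, so its limit is psi + bias.
psi_hat = z.mean()

# Formula (24): bias = eps * I^{-1} E_f[u s], with score s(z) = z - psi and I = 1,
# so E_f[u s] = Var_f(z) = 1 and the predicted limit is psi + eps.
print(psi_hat, psi + eps)                   # both close to 1.05
```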
### 3.2. Copas and Eguchi Method for Two-Timepoint Example
The bias expression (24) consists of the score, the information matrix, and the tilt. In order to calculate these components, the likelihood model in use must be defined. Under MAR, either of the following equivalent formulations can be selected:
$$(25)\quad L=\left[f(Y_1,Y_2)\,P(R=1\mid Y_1,Y_2)\right]^R\left[f(Y_1)\,P(R=0\mid Y_1)\right]^{1-R}$$
$$(26)\quad =\left[f(Y_2\mid Y_1)\,f(Y_1)\,P(R=1\mid Y_1,Y_2)\right]^R\left[f(Y_1)\,P(R=0\mid Y_1)\right]^{1-R}.$$
The conditional distribution of $Y_2$ given $Y_1$ is needed frequently in this section; for simplicity we use the subscript $21$ for conditional quantities. Since $f(Y_1,Y_2)$ is bivariate normal under the assumed model, $Y_2\mid Y_1\sim N(\mu_{21},\sigma_{21}^2)$, where $\mu_{21}=\mu_2+(\sigma_2/\sigma_1)\rho(Y_1-\mu_1)$ and $\sigma_{21}=\sigma_2\sqrt{1-\rho^2}$. Also, the complete data is $Z=(Y_1,Y_2,R)$ and the incomplete data is $Y=(Y_1,Y_2(R),R)$, where
$$(27)\quad Y_2(R)=\begin{cases}Y_2,&R=1\\ \text{undefined},&R=0.\end{cases}$$
Therefore, when $R=1$ we have $Y=Z$, but $Y$ differs from $Z$ when $R=0$.

In addition, the models $f_Z$, $f_Y$, $g_Z$, and $g_Y$ are defined as follows. MAR is assumed as the working (misspecified) model. Under MAR, $P(R=1\mid Y_1,Y_2)=P(R=1\mid Y_1)$; then from (25) the working model for complete data (taking $R=1$) is
$$(28)\quad f_Z=f(Y_1,Y_2)\,P(R=1\mid Y_1).$$
Similarly, from (26), the working model for incomplete data (taking $R=0$) is
$$(29)\quad f_Y=f(Y_1)\,P(R=0\mid Y_1).$$
Under MNAR, if the data are complete we always set $R=1$. Thus, from (25), the true model for complete data is
$$(30)\quad g_Z=f(Y_1,Y_2)\,P(R=1\mid Y_1,Y_2).$$
The true model for incomplete data, on the other hand, is the marginal density
$$(31)\quad g_Y=\int_{(y)}g_Z\,dz=\int_{Y_2(R)}f(Y_1,Y_2)\,P(R=1\mid Y_1,Y_2)\,dY_2(R).$$
Note that the integral is over the missing values $Y_2(R)$; referring to (27), the missing values $Y_2$ are undefined when $R=0$.

This means that in order to use Copas and Eguchi's ideas, we should convert the specific $g_Y$ in (31) into the general form of (23). To do this, we redefine the MNAR model in tilt form:
$$(32)\quad P(R=1\mid Y_1,Y_2)=\operatorname{expit}(\theta_0+\theta_1Y_1)\exp\{\varepsilon u_Y\}.$$
Here $\varepsilon=\theta_2\sigma_{21}$ and $u_Y=u_Y(y;\theta)=(Y_2-\mu_{21})/\sigma_{21}$. For small $\theta_2$ this is a good approximation to the logistic MNAR model. Calculation of the terms needed for the bias expression (24) is now possible and follows directly; details are in the Supplementary Materials available at the journal website.
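For completeness, here is a one-line check (a restatement of the step above, not extra material from the paper) that the tilt in (32) has the form required by (23):

$$\exp\{\varepsilon u_Y\}=\exp\left\{\theta_2\sigma_{21}\cdot\frac{Y_2-\mu_{21}}{\sigma_{21}}\right\}=\exp\{\theta_2(Y_2-\mu_{21})\},$$

and under the working model $Y_2\mid Y_1\sim N(\mu_{21},\sigma_{21}^2)$, so $u_Y=(Y_2-\mu_{21})/\sigma_{21}$ has zero mean and unit variance, as the tilt construction requires.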
## 4. Simulation Study
We use the same simulation setup as before. The limiting values β3∗ and β4∗ are compared across methods under the MAR and MNAR dropout models. Next, local model uncertainty is elaborated as proposed by Copas and Eguchi [2], and we illustrate how to apply it both when model misspecification is present and when the data are incomplete. We find that the Copas and Eguchi [2] method gives very similar results to the least false approach. Misspecification is handled by assuming MAR when the truth is actually MNAR.
### 4.1. Comparing the Copas and Eguchi Method with LME Least False Results
In this section, we examine how the parameter estimates are affected when a MAR model is fitted to data that are MNAR, and compare them with the values that the Copas and Eguchi method predicts. The sample size is 10000, and 10 simulations are used. We use large samples here, as our first task is to check the accuracy of the large-sample approximations underpinning the least false values. The aim is to show the variation in treatment effect estimates as θ2 varies. A grid of θ2 from -0.2 to 0.2 is selected. Figure 3 is produced with β=(-2,-2,-1,-1) and θ=(-0.5,0), which gives a dropout rate of around 40%. Here the blue dotted lines are simulation estimates using maximum likelihood, the red solid lines are Copas and Eguchi estimates, and the light blue lines are the LME least false estimates. These show that the least false, simulation, and Copas and Eguchi [2] results all match well. Therefore, we can use the least false results for bias correction as an alternative to Copas and Eguchi.

Figure 3
Comparison 1: β=(-2,-2,-1,-1), θ=(-0.5,0). The blue dotted lines are simulation estimates using maximum likelihood, the red solid lines are Copas and Eguchi estimates, and the light blue lines are the LME least false estimates.
### 4.2. CI Coverage for the Estimated β3 and β4
The Copas and Eguchi and LME least false values show how estimates are biased by assuming MAR when the data are MNAR. The misspecification parameter is θ2, with θ2=0 meaning no misspecification. If the value of θ2 were known, the parameter estimates could be adjusted to take the misspecification into account. This idea is illustrated in this section. For a range of true (generating) θ2, 1000 samples are simulated, each of size 1000; this is a realistic size for applications. In each case, β3 and β4 are estimated using maximum likelihood under a MAR assumption. Afterwards, the estimates are adjusted using either the estimated Copas and Eguchi bias or the bias arising through the least false calculations, in both cases taking an *assumed* θ2. Coverage of the resulting nominal 95% confidence intervals is then recorded. The estimated confidence interval width is not adjusted, just its location. Tables 1 and 2 give the results. Here we use θ2T for the true θ2, and θ2A denotes the assumed value used in adjusting the estimates. Also, (β3∗∗,β4∗∗) are used for the Copas and Eguchi adjustment method and (β3∗,β4∗) for the least false adjustment method.

Table 1
CI coverage in percent for the estimated β3 and β4 at assumed θ2=0. We use θ2T for the true θ2, θ2A for the assumed value in adjusting the estimates, (β3∗∗,β4∗∗) for the Copas and Eguchi adjustment method, and (β3∗,β4∗) for the least false adjustment method. Results based on 1000 samples of size 1000.
| θ2T | θ2A | β3∗∗ | β4∗∗ | β3∗ | β4∗ |
|-----|-----|------|------|-----|-----|
| -0.10 | 0.00 | 84.80 | 95.30 | 84.90 | 95.30 |
| -0.09 | 0.00 | 85.90 | 96.70 | 85.80 | 96.70 |
| -0.06 | 0.00 | 92.00 | 94.70 | 91.90 | 94.70 |
| -0.03 | 0.00 | 95.00 | 95.00 | 95.10 | 94.90 |
| 0.00 | 0.00 | 94.70 | 95.10 | 94.70 | 95.10 |
| 0.03 | 0.00 | 95.20 | 94.20 | 95.00 | 94.20 |
| 0.06 | 0.00 | 91.70 | 94.70 | 91.70 | 94.70 |
| 0.09 | 0.00 | 88.00 | 95.00 | 87.80 | 95.00 |
| 0.10 | 0.00 | 83.40 | 95.10 | 83.60 | 95.00 |

Table 2
CI coverage in percent for the estimated β3 and β4 at assumed θ2=-0.10. We use θ2T for the true θ2, θ2A for the assumed value in adjusting the estimates, (β3∗∗,β4∗∗) for the Copas and Eguchi adjustment method, and (β3∗,β4∗) for the least false adjustment method. Results based on 1000 samples of size 1000.
| θ2T | θ2A | β3∗∗ | β4∗∗ | β3∗ | β4∗ |
|-----|-----|------|------|-----|-----|
| -0.10 | -0.10 | 95.30 | 95.40 | 95.70 | 95.10 |
| -0.09 | -0.10 | 94.80 | 95.10 | 95.50 | 94.80 |
| -0.06 | -0.10 | 95.80 | 95.20 | 94.90 | 95.60 |
| -0.03 | -0.10 | 92.50 | 94.90 | 92.00 | 95.10 |
| 0.00 | -0.10 | 89.30 | 95.30 | 87.40 | 95.40 |
| 0.03 | -0.10 | 83.70 | 93.80 | 81.20 | 93.80 |
| 0.06 | -0.10 | 74.40 | 95.10 | 70.50 | 95.00 |
| 0.09 | -0.10 | 62.20 | 96.00 | 58.30 | 95.80 |
| 0.10 | -0.10 | 62.70 | 93.90 | 57.80 | 94.20 |

In Table 1, the assumed θ2 is zero, meaning no correction. Results at the correct value θ2T=0 are good; otherwise, the CI for β3 goes badly wrong. Note that there is no correction here, so the Copas and Eguchi and least false results should be the same; the small differences arise only because different calculations are involved. For example, the least false calculation needs an estimate of σx but the Copas and Eguchi one does not. The CI coverage for β4 is not much affected at any true θ2 in the range (-0.1,+0.1): for example, at θ2T=-0.1 it is about 95%, whereas there is undercoverage for β3 when θ2T deviates from zero, with coverage of about 85% at θ2T=-0.1. This indicates that β4 is less sensitive to the misspecification than β3 in this scenario.

In Table 2, the assumed value is θ2=-0.1, which means that dropout is associated with high Y2. Note that, in contrast to Table 1, there is a correction here, so the Copas and Eguchi and least false results will not be the same; for example, at θ2T=+0.1, the CI coverage for β3∗∗ is about 62.7%, while the CI coverage for β3∗ is about 57.8%. Both β3∗∗ and β3∗ show increasing undercoverage as θ2T moves further from the assumed value -0.1.
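The coverage experiment behind Tables 1 and 2 can be sketched in miniature as follows, reusing `simulate` from the Section 2.1 sketch and `Vinv`, `s1` from the least-false sketch. Variance components are treated as known, the replication count is well below the paper's 1000, and the scenario is Table 1's (assumed θ2 = 0, true θ2 = 0.1); this is an illustrative sketch, not the paper's exact code.

```python
import numpy as np

def fit_mar(x, Y1, Y2, R):
    """ML estimate of beta with variance components known: solve (7)-(8) directly.
    Vinv and s1 as defined in the least-false sketch above."""
    A = np.zeros((4, 4)); b = np.zeros(4)
    for xi, y1, y2, r in zip(x, Y1, Y2, R):
        Xi = np.array([[1.0, xi, 0.0, 0.0], [0.0, 0.0, 1.0, xi]])
        xi1 = Xi[0]; Yi = np.array([y1, y2])
        A += r * Xi.T @ Vinv @ Xi + (1 - r) / s1**2 * np.outer(xi1, xi1)
        b += r * Xi.T @ Vinv @ Yi + (1 - r) / s1**2 * xi1 * y1
    bhat = np.linalg.solve(A, b)
    se = np.sqrt(np.diag(np.linalg.inv(A)))       # model-based standard errors
    return bhat, se

beta = np.array([-2.0, -2.0, -1.0, -1.0])
theta0, theta1 = -0.5, 0.0
theta2_true, theta2_assumed = 0.10, 0.0           # Table 1 scenario: no correction

# Least-false bias under the assumed theta2, from one very large fit:
bstar, _ = fit_mar(*simulate(100_000, beta, theta0, theta1, theta2_assumed))
bias = bstar - beta

hits = np.zeros(4)
n_sim = 200                                       # the paper uses 1000 replications
for _ in range(n_sim):
    bhat, se = fit_mar(*simulate(1000, beta, theta0, theta1, theta2_true))
    adj = bhat - bias                             # shift the estimate, keep the CI width
    hits += (adj - 1.96 * se <= beta) & (beta <= adj + 1.96 * se)
print(hits / n_sim)    # expect undercoverage for beta3, ~95% for beta4 (cf. Table 1)
```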
### 4.3. Sensitivity Analysis
Of course, in practice θ2 is not known. For any given data set, a sensible sensitivity procedure is to plot bias-corrected estimates and confidence intervals for a range of assumed θ2 values. Here, a grid of assumed θ2 from -0.2 to 0.2 is used. We show that, for each limiting value calculated by the Copas and Eguchi method, the simulated values are within noise of the theoretical values for large sample sizes (n=10000). The noise is estimated from the simulations; that is, a confidence interval is obtained from the simulations and checked to contain the population values. We first consider the case where the correct MAR model is fitted, and then the case where MAR is assumed but the true mechanism is MNAR.

Figure 4 illustrates the case when MAR is the correct model (θ2=0): the unadjusted confidence intervals (red lines) include the true parameter values (β3=-1 and β4=-1), and in this case so do the adjusted ones (blue lines). The horizontal lines are at the true values. We note that β3∗∗ decreases as θ2 increases whereas β4∗∗ increases as θ2 increases. Note also that β4 has a wider CI than β3.

Figure 4
CI under MAR: β=(-2,-2,-1,-1), θ=(-0.5,-0.5), θ2T=0. The blue lines are the adjusted estimates, and the red lines are the unadjusted estimates. The horizontal dotted lines are at the true values.

Figure 5 has true θ2=0.1, so here MAR has been fitted to data that are really MNAR. The lines cross at θ2=0 because the same MAR model is fitted there. The important point is that better estimates of the true β's are obtained at the correct θ2. Also, as in Figure 4, β4 has a wider CI than β3.

Figure 5
CI under MNAR: β=(-2,-2,-1,-1), θ=(-0.5,-0.5), θ2T=0.1. The blue lines are the adjusted estimates, and the red lines are the unadjusted estimates. The horizontal lines are at the true values.

Note that, both under MAR and MNAR, β3 and β4 have opposite trends: β3 decreases as θ2 increases whereas β4 increases as θ2 increases.
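The sensitivity plot itself can be sketched by recomputing the bias over a grid of assumed θ2, again reusing `simulate` and `fit_mar` from the sketches above; the grid and sample sizes are illustrative choices of ours.

```python
import numpy as np

beta = np.array([-2.0, -2.0, -1.0, -1.0])
x, Y1, Y2, R = simulate(10_000, beta, theta0=-0.5, theta1=-0.5, theta2=0.1)  # truth: MNAR
bhat, se = fit_mar(x, Y1, Y2, R)                                             # one MAR fit

for t2 in np.linspace(-0.2, 0.2, 9):
    # Least false values at the assumed t2, from a large simulated sample:
    bstar, _ = fit_mar(*simulate(50_000, beta, -0.5, -0.5, t2))
    adj = bhat - (bstar - beta)                   # bias-corrected estimate, CI width kept
    print(f"theta2={t2:+.2f}  beta3={adj[2]:+.3f}+-{1.96*se[2]:.3f}  "
          f"beta4={adj[3]:+.3f}+-{1.96*se[3]:.3f}")
```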
## 5. Application: Sensitivity Analysis for Clinical Trial
In this section, the method is illustrated using a real data example. The data come from a clinical trial with two treatments and two measurement times, as introduced and analysed by Matthews et al. [12]. The covariates are only treatment type and time. The parameter vector is (β1,β2,β3,β4), ignoring any time interaction. There are 422 subjects, assigned to either treatment A or B. Treatment A corresponds to covariate value x=1 and treatment B to x=0. Then, at time 2, the mean of the group receiving treatment B is β3 and the mean of the group receiving treatment A is β3+β4. At time 1, all subjects provided a response, but 24.4% dropped out by time 2. Of the 212 subjects receiving treatment A, only 126 provided a response at time 2 and the other 86 dropped out, a missingness percentage of about 40%. The dropout reason is not known. For treatment B, there were 210 subjects, of which 193 continued to time 2 and 17 did not, giving around 8% missingness.

A sensitivity analysis (over a grid of θ2) using the Copas and Eguchi and least false methods is shown in Figure 6. The blue lines use the Copas and Eguchi method and the red lines use the least false method. The idea is to adjust the estimate to compensate for bias from a misspecified MAR fit: for example, if the least false value is known to underestimate a parameter under MAR, the corresponding difference is added back to the estimate. Dashes are the CIs, based on the MAR standard errors. The first plot shows confidence intervals for the treatment B mean as the assumed value of θ2 changes; the horizontal line is the estimate under MAR, at -0.74, which is also the LME estimate of β3. The second plot shows the confidence intervals for the mean of treatment A. The third plot shows the difference in means between treatments A and B, that is, the treatment effect β4, whose LME estimate is about -0.40. Note that β3+β4 is then about -1.15, consistent across the plots.

Figure 6
Clinical trial example: 95% CI for β3, β3+β4, and β4. The blue lines use the Copas and Eguchi method, the red lines use the least false method, and the horizontal line is at the MAR estimate.

The first thing to note is how close the least false and Copas and Eguchi estimates are: there is almost no difference over this range of θ2. We take θ2 from -1.5 to +1.5. The value of θ̂1 under MAR is -1.66, so this range of θ2 allows Y2 to have the same order of effect as Y1. Clearly, at large values of θ2 there is concern that the misspecification is not local, which is the assumption of Copas and Eguchi. However, the least false results apply to any misspecification, not necessarily local, and the fact that the Copas and Eguchi estimate is so close to the least false one suggests that it can work well even under quite large misspecification.

When θ2 is negative, the estimates get adjusted upwards, and the opposite is true for positive θ2. This makes sense: at negative θ2, large Y2 values have a low probability of staying in the trial. Hence the observed means are lower than they would be in the hypothetical no-dropout situation, so we adjust upwards.

The estimates are affected more at positive θ2 than at negative. At the largest θ2 shown, there would be a significant change in the estimated true mean. However, there is very little effect of misspecification on the difference between means (third subplot), as the adjustments essentially cancel.
## 6. Conclusion
We considered Linear Mixed Effect models (a maximum likelihood method) for handling missing data. By deriving the so-called least false values, we investigated the consequences of misspecifying the missingness mechanism. Closed form expressions were given to calculate the least false values β3∗ and β4∗, and knowledge of these values allowed us to conduct a sensitivity analysis, which was illustrated for the LME method. Copas and Eguchi [2] gave a formula to estimate the bias under such misspecification. We derived and explored the Copas and Eguchi approximation for the bias arising from misspecification of the working model, compared its results with those of the proposed least false method, and applied it to estimate the bias for the real data example. Moreover, we explained how to use a sensitivity analysis to see how the methods behave over a range of θ2. We found that the Copas and Eguchi method and the LME least false method match very well, giving very close results over the grid of θ2 considered. This suggests that the least false method can provide a credible alternative to Copas and Eguchi in sensitivity analysis; indeed, it might be preferred since it makes no assumption of local misspecification. Finally, we illustrated the results using example data from a clinical trial with two measurement times.
---
*Source: 1019303-2019-07-01.xml* | 1019303-2019-07-01_1019303-2019-07-01.md | 50,763 | An Alternative Sensitivity Approach for Longitudinal Analysis with Dropout | Amal Almohisen; Robin Henderson; Arwa M. Alshingiti | Journal of Probability and Statistics
(2019) | Mathematical Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2019/1019303 | 1019303-2019-07-01.xml | ---
## Abstract
In any longitudinal study, a dropout before the final timepoint can rarely be avoided. The chosen dropout model is commonly one of these types: Missing Completely at Random (MCAR), Missing at Random (MAR), Missing Not at Random (MNAR), and Shared Parameter (SP). In this paper we estimate the parameters of the longitudinal model for simulated data and real data using the Linear Mixed Effect (LME) method. We investigate the consequences of misspecifying the missingness mechanism by deriving the so-called least false values. These are the values the parameter estimates converge to, when the assumptions may be wrong. The knowledge of the least false values allows us to conduct a sensitivity analysis, which is illustrated. This method provides an alternative to a local misspecification sensitivity procedure, which has been developed for likelihood-based analysis. We compare the results obtained by the method proposed with the results found by using the local misspecification method. We apply the local misspecification and least false methods to estimate the bias and sensitivity of parameter estimates for a clinical trial example.
---
## Body
## 1. Introduction
Missing data are common in various settings, including surveys, clinical trials, and longitudinal studies. Methods for handling missing data strongly depend on the mechanism that generated the missing values as well as the distributional and modeling assumptions at various stages. This study focuses only on Missing at Random and Missing Not at Random dropout models, under a Linear Mixed Effect (LME) model.Much of the literature on missing data problems assumes the dropout model is only MAR and not MNAR, but this assumption is clearly limited [1]. The consequences of misspecifying the missingness mechanism are investigated by deriving the so-called least false values, which are the values the parameter estimates converge to when the assumptions may be wrong. Derivation and illustration of theoretical least false values for the LME method are made under Missing at Random (MAR) and Missing Not at Random (MNAR) dropout. The misspecified dropout model MAR is assumed in this study.Copas and Eguchi [2] gave a formula to estimate the bias under such misspecification using a likelihood approach. As the LME is a likelihood-based method, the estimates obtained through the Copas and Eguchi method can be compared with the LME least false estimates. The procedure will be applied by adding a tilt to the MAR dropout model to provide what Copas and Eguchi call local misspecification.The local model uncertainty is elaborated as proposed by Copas and Eguchi [2] and illustrated both when model misspecification is present and when the data is incomplete. Furthermore, we find that the Copas and Eguchi method gives very similar results to the least false method. Misspecification will be dealt with assuming MAR where actually the truth is MNAR. Beside Copas and Eguchi [2], many other authors have developed methods to assess the sensitivity of inference under the MAR assumption [3, 4]. Moreover, Lin et al. [5] extended the Copas and Eguchi method and assumed a doubly misspecified model while having only single misspecification. Also, there has been interest in the Copas and Eguchi method from a Bayes perspective [6–10]. Recently, [11] performed simulation based sensitivity analysis.In Section2, the LME method is presented and we show how to calculate the least false values. A description of the Copas and Eguchi method is provided in Section 3.1, followed by an example in Section 3.2. A simulation study is described in Section 4. The Copas and Eguchi bias estimate results are studied and examined with the least false values derived from the LME method, and we then show the coverage of nominal confidence intervals. A sensitivity analysis is conducted to assess how inference can depend on missing data. In Section 5, the methods are applied to data from a clinical trial with two treatments and two measurement times as introduced and analysed by Matthews et al. [12]. We compared the results obtained by the proposed method with the results found by using the Copas and Eguchi method.
## 2. Linear Mixed Effect (LME) Method
A statistical model containing fixed effects and random effects is called a mixed effect model. These models have been shown to be effective in many disciplines in the biological, physical, and social sciences. Usually a linear form is assumed.Reference [13] gave a definition of the response Y in the LME model which is of the form: (1)measuredresponse=covariateeffects+randomeffects+measurementerror.For example, a simplified version of the Liard and Ware [14] mixed model approach for longitudinal data would include a random effect in the intercept term in a model for responses. If Yij is the response at time j on subject i, the model is (2)Yij=μij+Ui+ϵijwhere μij is the marginal mean, which will usually be a linear function of covariates, ϵij is independent Gaussian noise, and Ui is a realisation of a zero mean scalar Gaussian random variable. Since Ui has zero mean, the marginal mean of Yij remains μij after integrating out Uij. However, since Ui is common to all j, we get dependence between observations on the same subject. For example, if Ui is positive, then all values would tend to be above the marginal mean and so on. In the context of longitudinal data, some reviews of linear mixed models can be found in [15, 16].
### 2.1. Assumptions
Suppose there aren individuals in a study and each provides longitudinal responses Y and dropout information R. Generally, we will assume a linear model for Y (in the absence of dropout) and logistic models for the probability ofcontinuing to the next timepoint t+1 given that a subject is still under observation at time t. At times, we refer to atrue orgenerating model as the way in which data are obtained and to anassumed orfitting model as that chosen by the analyst for estimation.For simplicity in this work, the study assumes that there are just two observations or treatment periods. The methods are of course more general.At time 1, there is a measurement provided for all subjects, denoted byYi1 for subject i. Then at time 2, some subjects are dropped out before measurement. Let Ri=1 indicate that there is a measurement at time 2 and Ri=0 otherwise. Let Yi=(Yi1,Yi2)T and assume E[Yi]=xiβ where β is a parameter vector of dimension p and xi is the design matrix associated with subject i, which is of dimension 2×p. The standard model assumes just one covariate and is(3)Y1i=β1G+β2Gxi+Ui+ϵ1iY2i=β3G+β4Gxi+Ui+ϵ2iYi=XiβG+Ui1+ϵi,where Yi=Y1iY2i, Xi=1xi00001xi, βG=(β1G,β2G,β3G,β4G)T, 1=11, ϵi=ϵ1iϵ2i, and xi~N(0,σx2), Ui~N(0,σU2), ϵ1i~N(0,σϵ12) and ϵ2i~N(0,σϵ22). Let σ12=σU2+σϵ12, σ22=σU2+σϵ22 and ρ=σU2/σ1σ2.Returning to the general case, the influence of missing data depends on the missingness mechanism, that is, the probability model for missingness. Knowing the reason for the missingness is obviously helpful to handle missing data. There are four generalmissingness mechanisms as introduced by Little and Rubin [17] and Wu and Carroll [18]. They are Missing Completely at Random (MCAR), Missing at Random (MAR), Missing Not at Random (MNAR), and Shared Parameter (SP).For simplicity in this investigation, the parameters are assumed to be common between timepoints. Let the dropout parameters beθ=(θ0,θ1). The MAR dropout logistic model is then(4)πiθ=PRi=1∣Y1i,Y2i=eθ0+θ1Y1i1+eθ0+θ1Y1i=expitθ0+θ1Y1i.The missingness is called Missing Not at Random, if it depends on unrecorded information, which predicts the missing values. An example is that a patient was unsatisfied with a particular treatment, and thus this patient is more likely to quit the study. If missingness is not at random, then some bias is expected in inferences.Let the dropout parameters now be(θ,θ2). The MNAR version for the two-timepoint example is the logistic model:(5)πiθ,θ2=PRi=1∣Y1i,Y2i=eθ0+θ1Y1i+θ2Y2i1+eθ0+θ1Y1i+θ2Y2i=expitθ0+θ1Y1i+θ2Y2i.
### 2.2. LME Least False
In this section, the Linear Mixed Effect (LME) method is investigated, which is based on a maximum likelihood estimating approach. The performance of the LME method under MAR and MNAR dropout is examined. Derivation and illustration of theoretical least false values are made. Assuming a Gaussian random intercept model, the score equation of current interest is [19](6)∑i=1nRiXiTV-1Yi-Xiβ^+1-Riσ12xi1Yi1-xi1Tβ^=0where Yi=(Yi1,Yi2), Xi is a 2×4 design matrix associated with subject i which is Xi=1xi00001xi, and we will use xi1T as notation for the first row of Xi; thus xi1T=(1,xi,0,0), β^T=(β^1,β^2,β^3,β^4), and V=σ12ρσ1σ2ρσ1σ2σ22. We can rearrange the terms in (6) to be(7)∑i=1nRiXiTV-1Xi+1-Riσ12xi1xi1Tβ^(8)=∑i=1nRiXiTV-1Yi+1-Riσ12xi1Yi1.These components are in detail(9)V-1=Kσ22-ρσ1σ2-ρσ1σ2σ12where K=1/σ12σ22(1-ρ2), and(10)XiTV-1Xi=Kσ22σ22xi-ρσ1σ2-ρσ1σ2xiσ22xiσ22xi2-ρσ1σ2xi-ρσ1σ2xi2-ρσ1σ2-ρσ1σ2xiσ12σ12xi-ρσ1σ2xi-ρσ1σ2xi2σ12xiσ12xi2.Also(11)xi1xi1T=1xi00xixi20000000000.Similarly for the right hand side of (8)(12)XiTV-1Yi=Kσ22Yi1-ρσ1σ2Yi2σ22Yi1xi-ρσ1σ2Yi2xiσ12Yi2-ρσ1σ2Yi1σ12Yi2xi-ρσ1σ2Yi1xi.Finally(13)xi1Yi1=Yi1xiYi100.We assume independent and identically distributed responses, with finite variance for the covariate and error distributions, and dropout probabilities bounded away from both zero and one. On dividing all sums by n, the weak law of large numbers applies and we can replace the sums with expectations as follows:(14)ERXTV-1X+1-Rσ12x1x1Tβ∗=ERXTV-1Y+1-Rσ12x1Y1.In the left hand side of (14), there will be two parts. First(15)ERXTV-1X(16)=Kσ22ERσ22ERx-ρσ1σ2ER-ρσ1σ2ERxσ22ERxσ22ERx2-ρσ1σ2ERx-ρσ1σ2ERx2-ρσ1σ2ER-ρσ1σ2ERxσ12ERσ12ERx-ρσ1σ2ERx-ρσ1σ2ERx2σ12ERxσ12ERx2,and second(17)1-Rσ12x1x1T=1σ121-EREx-ERx00Ex-ERxEx2-ERx20000000000.Similarly, the right hand side is(18)Kσ22ERY1-ρσ1σ2ERY2σ22ERY1x-ρσ1σ2ERY2xσ12ERY2-ρσ1σ2ERY1σ12ERY2x-ρσ1σ2ERY1x+1σ12EY1-ERY1EY1x-ERY1x00.Expressions for E[R], E[Rx], E[Rx2], E[RY1], E[RY2], E[RY1x], and E[RY2x] have been obtained under different dropout models. For illustration, we show calculation of E[R] under MAR in the Supplementary Materials available at the journal website (available here).Finally to find the least false valueβ∗, the inverse of the matrix has been considered in the left hand side of (14) and we multiply this inverse by the matrix in the right hand side, which will yield the array of the least false values β∗T=(β1∗,β2∗,β3∗,β4∗). In the following section, we present simulations regarding how the LME method performs under MAR and MNAR dropout model.
### 2.3. Numerical Investigation
A scalarN(0,1) variable x is generated, and then the longitudinal means are generated μ1=β1+β2x, μ2=β3+β4x. This was followed by (Y1,Y2) from a bivariate normal distribution with mean (μ1,μ2). Missingness was generated from (4) and (5) for the MAR and MNAR models, respectively. In all of the following simulations, unless it is stated otherwise, the parameters β=(-2,-2,-1,-1), σx=σ1=σ2=1, ρ=0.5 were followed. In the following, we show the effect of dropout on the limiting values β3∗ and β4∗.As LME provides consistent estimates under MAR, the least false valuesβ3∗ and β4∗ are not affected by changing the dropout probabilities under MAR. Therefore, only MNAR concentrations were considered. From a contour plot of β3∗ under MNAR (Figure 1), in order to minimise the bias in β3∗, θ1 should be chosen to be around zero. For negative θ1, the dropout is associated with large U, so Y1 and Y2 both tend to be low if dropout does not occur. Hence β3∗ is lower than it should be. The opposite happens for a positive θ1.Figure 1
Contour plot ofβ3∗ under MNAR.Figure2 shows a contour plot of β4∗ under MNAR. Here, negative bias is obtained as θ1 moves away from zero in either direction. Such an attenuation of regression effect is common when there are errors in variables [20]. It seems that a similar effect is obtained here.Figure 2
Contour plot ofβ4∗ under MNAR.Having obtained least false values, we propose their use in sensitivity analyses. Before doing so, a sensitivity procedure is investigated for local misspecification as proposed by Copas and Eguchi [2].
## 2.1. Assumptions
Suppose there aren individuals in a study and each provides longitudinal responses Y and dropout information R. Generally, we will assume a linear model for Y (in the absence of dropout) and logistic models for the probability ofcontinuing to the next timepoint t+1 given that a subject is still under observation at time t. At times, we refer to atrue orgenerating model as the way in which data are obtained and to anassumed orfitting model as that chosen by the analyst for estimation.For simplicity in this work, the study assumes that there are just two observations or treatment periods. The methods are of course more general.At time 1, there is a measurement provided for all subjects, denoted byYi1 for subject i. Then at time 2, some subjects are dropped out before measurement. Let Ri=1 indicate that there is a measurement at time 2 and Ri=0 otherwise. Let Yi=(Yi1,Yi2)T and assume E[Yi]=xiβ where β is a parameter vector of dimension p and xi is the design matrix associated with subject i, which is of dimension 2×p. The standard model assumes just one covariate and is(3)Y1i=β1G+β2Gxi+Ui+ϵ1iY2i=β3G+β4Gxi+Ui+ϵ2iYi=XiβG+Ui1+ϵi,where Yi=Y1iY2i, Xi=1xi00001xi, βG=(β1G,β2G,β3G,β4G)T, 1=11, ϵi=ϵ1iϵ2i, and xi~N(0,σx2), Ui~N(0,σU2), ϵ1i~N(0,σϵ12) and ϵ2i~N(0,σϵ22). Let σ12=σU2+σϵ12, σ22=σU2+σϵ22 and ρ=σU2/σ1σ2.Returning to the general case, the influence of missing data depends on the missingness mechanism, that is, the probability model for missingness. Knowing the reason for the missingness is obviously helpful to handle missing data. There are four generalmissingness mechanisms as introduced by Little and Rubin [17] and Wu and Carroll [18]. They are Missing Completely at Random (MCAR), Missing at Random (MAR), Missing Not at Random (MNAR), and Shared Parameter (SP).For simplicity in this investigation, the parameters are assumed to be common between timepoints. Let the dropout parameters beθ=(θ0,θ1). The MAR dropout logistic model is then(4)πiθ=PRi=1∣Y1i,Y2i=eθ0+θ1Y1i1+eθ0+θ1Y1i=expitθ0+θ1Y1i.The missingness is called Missing Not at Random, if it depends on unrecorded information, which predicts the missing values. An example is that a patient was unsatisfied with a particular treatment, and thus this patient is more likely to quit the study. If missingness is not at random, then some bias is expected in inferences.Let the dropout parameters now be(θ,θ2). The MNAR version for the two-timepoint example is the logistic model:(5)πiθ,θ2=PRi=1∣Y1i,Y2i=eθ0+θ1Y1i+θ2Y2i1+eθ0+θ1Y1i+θ2Y2i=expitθ0+θ1Y1i+θ2Y2i.
## 2.2. LME Least False
In this section, the Linear Mixed Effect (LME) method is investigated, which is based on a maximum likelihood estimating approach. The performance of the LME method under MAR and MNAR dropout is examined. Derivation and illustration of theoretical least false values are made. Assuming a Gaussian random intercept model, the score equation of current interest is [19](6)∑i=1nRiXiTV-1Yi-Xiβ^+1-Riσ12xi1Yi1-xi1Tβ^=0where Yi=(Yi1,Yi2), Xi is a 2×4 design matrix associated with subject i which is Xi=1xi00001xi, and we will use xi1T as notation for the first row of Xi; thus xi1T=(1,xi,0,0), β^T=(β^1,β^2,β^3,β^4), and V=σ12ρσ1σ2ρσ1σ2σ22. We can rearrange the terms in (6) to be(7)∑i=1nRiXiTV-1Xi+1-Riσ12xi1xi1Tβ^(8)=∑i=1nRiXiTV-1Yi+1-Riσ12xi1Yi1.These components are in detail(9)V-1=Kσ22-ρσ1σ2-ρσ1σ2σ12where K=1/σ12σ22(1-ρ2), and(10)XiTV-1Xi=Kσ22σ22xi-ρσ1σ2-ρσ1σ2xiσ22xiσ22xi2-ρσ1σ2xi-ρσ1σ2xi2-ρσ1σ2-ρσ1σ2xiσ12σ12xi-ρσ1σ2xi-ρσ1σ2xi2σ12xiσ12xi2.Also(11)xi1xi1T=1xi00xixi20000000000.Similarly for the right hand side of (8)(12)XiTV-1Yi=Kσ22Yi1-ρσ1σ2Yi2σ22Yi1xi-ρσ1σ2Yi2xiσ12Yi2-ρσ1σ2Yi1σ12Yi2xi-ρσ1σ2Yi1xi.Finally(13)xi1Yi1=Yi1xiYi100.We assume independent and identically distributed responses, with finite variance for the covariate and error distributions, and dropout probabilities bounded away from both zero and one. On dividing all sums by n, the weak law of large numbers applies and we can replace the sums with expectations as follows:(14)ERXTV-1X+1-Rσ12x1x1Tβ∗=ERXTV-1Y+1-Rσ12x1Y1.In the left hand side of (14), there will be two parts. First(15)ERXTV-1X(16)=Kσ22ERσ22ERx-ρσ1σ2ER-ρσ1σ2ERxσ22ERxσ22ERx2-ρσ1σ2ERx-ρσ1σ2ERx2-ρσ1σ2ER-ρσ1σ2ERxσ12ERσ12ERx-ρσ1σ2ERx-ρσ1σ2ERx2σ12ERxσ12ERx2,and second(17)1-Rσ12x1x1T=1σ121-EREx-ERx00Ex-ERxEx2-ERx20000000000.Similarly, the right hand side is(18)Kσ22ERY1-ρσ1σ2ERY2σ22ERY1x-ρσ1σ2ERY2xσ12ERY2-ρσ1σ2ERY1σ12ERY2x-ρσ1σ2ERY1x+1σ12EY1-ERY1EY1x-ERY1x00.Expressions for E[R], E[Rx], E[Rx2], E[RY1], E[RY2], E[RY1x], and E[RY2x] have been obtained under different dropout models. For illustration, we show calculation of E[R] under MAR in the Supplementary Materials available at the journal website (available here).Finally to find the least false valueβ∗, the inverse of the matrix has been considered in the left hand side of (14) and we multiply this inverse by the matrix in the right hand side, which will yield the array of the least false values β∗T=(β1∗,β2∗,β3∗,β4∗). In the following section, we present simulations regarding how the LME method performs under MAR and MNAR dropout model.
## 2.3. Numerical Investigation
A scalar N(0,1) variable x is generated, and then the longitudinal means are generated as μ1=β1+β2x and μ2=β3+β4x. This was followed by (Y1,Y2) drawn from a bivariate normal distribution with mean (μ1,μ2). Missingness was generated from (4) and (5) for the MAR and MNAR models, respectively. In all of the following simulations, unless stated otherwise, the parameters β=(-2,-2,-1,-1), σx=σ1=σ2=1, and ρ=0.5 were used. In the following, we show the effect of dropout on the limiting values β3∗ and β4∗. As LME provides consistent estimates under MAR, the least false values β3∗ and β4∗ are not affected by changing the dropout probabilities under MAR. Therefore, only MNAR configurations were considered. From a contour plot of β3∗ under MNAR (Figure 1), in order to minimise the bias in β3∗, θ1 should be chosen to be around zero. For negative θ1, dropout is associated with large U, so Y1 and Y2 both tend to be low if dropout does not occur. Hence β3∗ is lower than it should be. The opposite happens for a positive θ1. Figure 1
Contour plot of β3∗ under MNAR. Figure 2 shows a contour plot of β4∗ under MNAR. Here, negative bias is obtained as θ1 moves away from zero in either direction. Such an attenuation of the regression effect is common when there are errors in variables [20]. It seems that a similar effect is obtained here. Figure 2
Contour plot of β4∗ under MNAR. Having obtained least false values, we propose their use in sensitivity analyses. Before doing so, a sensitivity procedure for local misspecification, as proposed by Copas and Eguchi [2], is investigated.
## 3. The Effect of Local Misspecification of the Dropout Model When Using Likelihood-Based Methods under the MAR Assumption
In the previous section, we investigated the consequences of misspecifying the missingness mechanism by deriving the so-called least false values, which are the values to which the parameter estimates converge when the assumed model is wrong. As an alternative, Copas and Eguchi [2] give a formula to estimate the bias under such misspecification using a likelihood approach. As the LME is a likelihood-based method, we can compare the Copas and Eguchi method with the LME least false estimates. The procedure will be applied by adding a tilt to the MAR dropout model to produce what Copas and Eguchi [2] call local misspecification.
### 3.1. Description of Copas and Eguchi Method
We use the notation of Copas and Eguchi [2], denoting by $Z$ complete data and by $Y$ incomplete data. There are two types of model: the true model and the assumed model. The true model is also called the generating model, and it describes how the data are actually generated or simulated. The assumed model, also known as the fitting model, is what we fit to the data. The true model for complete data is denoted by $g_Z=g_Z(z;\psi)$ and the corresponding true model for incomplete data is $g_Y=g_Y(y;\psi)$, which can be derived from $g_Z$. Here $\psi$ is a generic (vector) parameter. The assumed or working model is a parametric model $f_Z=f_Z(z;\psi)$ which gives the distribution of $Z$, and its marginal density is $f_Y=f_Y(y;\psi)$. Thus

$$f_Y=\int_{(y)}f_Z\,dz, \tag{19}$$

where the notation $(y)$ means integration over all missing values in $Z$ that are consistent with the observed $Y$.

Following Copas and Eguchi [2], a method is provided to approximate the bias in the estimation of the parameters of the misspecified model. We consider MAR as the working model and MNAR as the true model. Thus, the misspecification is caused by assuming MAR when the truth is MNAR. Suppose there is a random sample of $n$ observations, and the true model is given by $g_Z$, which is defined by equation (16) in Copas and Eguchi [2] as a tilt model:

$$g_Z=g_Z(z;\psi,\varepsilon,u_Z)=f_Z(z;\psi)\exp\{\varepsilon u_Z(z;\psi)\}. \tag{20}$$

Thus, the misspecification is determined by the quantity $\varepsilon u_Z(z;\psi)$. Here $\varepsilon$, which is assumed to be small, measures the size of the misspecification, while $u_Z(z;\psi)$ determines its direction. We assume $u_Z(z;\psi)$ has zero mean and unit variance under the working model $f_Z$. The misspecification is local because $\varepsilon$ is small. Hence $g_Z$ is close to $f_Z$ and can be written as

$$\frac{g_Z}{f_Z}=\exp\{\varepsilon u_Z(z;\psi)\}. \tag{21}$$

Now if the model actually used to fit the data is $f_Z(z;\psi)$, then the limiting value of the MLE $\hat\psi$ as $n\to\infty$ is given by equation (18) in Copas and Eguchi [2]:

$$\psi_{g_Z}=\operatorname{arg}_\psi\left\{E_g[s_Z(z;\psi)]=0\right\}=\psi+\varepsilon I_Z^{-1}E_{f_Z}[u_Z(z;\psi)s_Z(z;\psi)], \tag{22}$$

where $s_Z(\cdot;\psi)=\partial\{\log f_Z\}/\partial\psi$ and $I_Z=E[-\partial^2\{\log f_Z\}/\partial\psi\,\partial\psi^T]$ are the score and information matrix for the model $f_Z$, respectively.

However, $f_Y$ will be considered as the working model for the marginal data. Copas and Eguchi [2] show that if (20) is true and $\varepsilon$ is small, then a similar approximation holds for the marginal data $Y$, i.e.,

$$g_Y=g_Y(y;\psi,\varepsilon,u_Y)=f_Y(y;\psi)\exp\{\varepsilon u_Y(y;\psi)\}, \tag{23}$$

where again $u_Y(y;\psi)$ has zero mean and unit variance. In this case, according to equation (19) in Copas and Eguchi [2], the limiting value is

$$\psi_{g_Y}\approx\psi+\varepsilon I_Y^{-1}E_f[u_Y s_Y]=\psi+I_Y^{-1}E_f[\varepsilon u_Y s_Y], \tag{24}$$

where $s_Y(\cdot;\psi)=\partial\{\log f_Y\}/\partial\psi$ and $I_Y=E[-\partial^2\{\log f_Y\}/\partial\psi\,\partial\psi^T]$ are the score and information matrix for the model $f_Y$, respectively. To calculate the bias $I_Y^{-1}E_f[\varepsilon u_Y s_Y]$, we need the tilt $\varepsilon u_Y$. In the next section, we determine how to calculate this quantity under MAR and MNAR in our setting of two timepoints.
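A toy scalar illustration (our own construction, not from the paper) makes formula (22) concrete: with working model $f_Z=N(\psi,1)$ and tilt direction $u_Z(z)=z-\psi$, the tilted model is (up to an $O(\varepsilon^2)$ normalisation) exactly $N(\psi+\varepsilon,1)$, so the predicted first-order bias $\varepsilon I_Z^{-1}E_{f_Z}[u_Z s_Z]=\varepsilon$ happens to be exact.

```python
import numpy as np

rng = np.random.default_rng(2)

# Working model f_Z = N(psi, 1); tilt u_Z(z) = z - psi has zero mean and
# unit variance under f_Z.  The tilted model f_Z * exp(eps * u_Z), once
# normalised, is N(psi + eps, 1), so the MLE under f_Z converges to
# psi + eps.  Formula (22) with s_Z = z - psi and I_Z = 1 gives
# psi + eps * E[(z - psi)^2] = psi + eps, matching exactly.
psi, eps = 1.0, 0.1
z = rng.normal(psi + eps, 1.0, 500_000)        # sample from the true model
print("limit of MLE (sample mean):", round(z.mean(), 4))   # ~1.1
print("Copas-Eguchi prediction   :", psi + eps)
```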
### 3.2. Copas and Eguchi Method for Two-Timepoint Example
The bias (24) consists of the score, the information matrix, and the tilt. In order to calculate these components, the likelihood model in use must be defined. Under MAR, either of the following equivalent formulations can be selected:

$$L=\left[f(Y_1,Y_2)P(R=1\mid Y_1,Y_2)\right]^R\left[f(Y_1)P(R=0\mid Y_1)\right]^{1-R} \tag{25}$$
$$=\left[f(Y_2\mid Y_1)f(Y_1)P(R=1\mid Y_1,Y_2)\right]^R\left[f(Y_1)P(R=0\mid Y_1)\right]^{1-R}. \tag{26}$$

The conditional distribution of $Y_2$ given $Y_1$ is needed frequently in this section; for simplicity, we use $Y_{2|1}$ to denote this quantity. Since $f(Y_1,Y_2)$ is bivariate normal in this assumed model, $Y_{2|1}\sim N(\mu_{2|1},\sigma_{2|1})$, where $\mu_{2|1}=\mu_2+(\sigma_2/\sigma_1)\rho(Y_1-\mu_1)$ and $\sigma_{2|1}=\sigma_2\sqrt{1-\rho^2}$. Also, the complete data are $Z=(Y_1,Y_2,R)$ and the incomplete data are $Y=(Y_1,Y_2(R),R)$, where

$$Y_2(R)=\begin{cases}Y_2,&R=1\\ \text{undefined},&R=0.\end{cases} \tag{27}$$

Therefore, at $R=1$, $Y=Z$, but $Y$ differs from $Z$ at $R=0$.

In addition, the models $f_Z$, $f_Y$, $g_Z$, and $g_Y$ are defined as follows. MAR is assumed as the working (misspecified) model. Under MAR, $P(R=1\mid Y_1,Y_2)=P(R=1\mid Y_1)$; then from (25) the working model for complete data, taking $R=1$, is

$$f_Z=f(Y_1,Y_2)P(R=1\mid Y_1). \tag{28}$$

Similarly, from (26), the working model for incomplete data, taking $R=0$, is

$$f_Y=f(Y_1)P(R=0\mid Y_1). \tag{29}$$

Under MNAR, if there are complete data, then we will always set $R=1$. Thus, from (25), the true model for complete data is

$$g_Z=f(Y_1,Y_2)P(R=1\mid Y_1,Y_2). \tag{30}$$

The true model for incomplete data, on the other hand, is the marginal density

$$g_Y=\int_{(y)}g_Z\,dz=\int_{Y_2(R)}f(Y_1,Y_2)P(R=1\mid Y_1,Y_2)\,dY_2(R). \tag{31}$$

Note that the integral is over the missing values $Y_2(R)$. Referring to (27), the missing values $Y_2$ are undefined in the case that $R=0$.

This means that, in order to use Copas and Eguchi's ideas, we should convert the specific $g_Y$ in (31) into the general form of (23). To do this, we redefine the MNAR model in tilt form:

$$P(R=1\mid Y_1,Y_2)=\operatorname{expit}(\theta_0+\theta_1 Y_1)\exp\{\varepsilon u_Y\}. \tag{32}$$

Here $\varepsilon=\theta_2\sigma_{2|1}$ and $u_Y=u_Y(y;\theta)=(Y_2-\mu_{2|1})/\sigma_{2|1}$. For small $\theta_2$ this is a good approximation to the logistic MNAR model. Calculation of the terms needed for the bias expression (24) is now possible and follows directly. Details are in the Supplementary Materials available at the journal website.
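The quality of the tilt representation (32) is easy to inspect numerically. The sketch below (our own, with illustrative parameter values) compares the exact logistic MNAR probability (5) with the tilt form for a small θ2.

```python
import numpy as np

def expit(z):
    return 1 / (1 + np.exp(-z))

# Compare the logistic MNAR dropout probability (5) with its tilt form (32)
# for a small theta2.  mu21 and sigma21 are the conditional mean and sd of
# Y2 given Y1 from the bivariate normal response model; the numbers here
# are illustrative choices, not the paper's.
theta0, theta1, theta2 = -0.5, -0.5, 0.05
sigma1 = sigma2 = 1.0
rho = 0.5
Y1 = 0.3
mu1, mu2 = -2.0, -1.0                       # response means at x = 0, say
mu21 = mu2 + (sigma2 / sigma1) * rho * (Y1 - mu1)
sigma21 = sigma2 * np.sqrt(1 - rho**2)

Y2 = np.linspace(mu21 - 3 * sigma21, mu21 + 3 * sigma21, 7)
logistic = expit(theta0 + theta1 * Y1 + theta2 * Y2)        # exact, eq. (5)
tilt = expit(theta0 + theta1 * Y1) * np.exp(theta2 * (Y2 - mu21))  # eq. (32)
print(np.round(logistic, 4))
print(np.round(tilt, 4))       # close to the exact values for small theta2
```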
## 4. Simulation Study
We use the same simulation setup as before. The limiting values β3∗ and β4∗ are compared using different methods under MAR and MNAR dropout models. Next, the local model uncertainty proposed by Copas and Eguchi [2] is elaborated, and we illustrate how to apply it both when model misspecification is present and when the data are incomplete. We find that the Copas and Eguchi [2] method gives very similar results to the least false values. Throughout, misspecification means assuming MAR when the truth is actually MNAR.
### 4.1. Comparing the Copas and Eguchi Method with LME Least False Results
In this section, we examine how the parameter estimates are affected when a MAR model is fitted to data that are MNAR, and compare them with the values that the Copas and Eguchi method predicts. The sample size is 10000, and 10 simulations are used. We used large samples here, as our first task is to check the accuracy of the large-sample approximations underpinning the least false values. The aim is to show the variation in treatment effect estimates as θ2 varies. A grid of θ2 from -0.2 to 0.2 is selected. Figure 3 is produced with β=(-2,-2,-1,-1) and θ=(-0.5,0), which gives a dropout rate of around 40%. Here the blue lines (dotted lines) are simulation estimates using maximum likelihood, the red lines (solid lines) are Copas and Eguchi estimates, and the light blue lines are the LME least false estimates. These show that the least false, simulation, and Copas and Eguchi [2] results all match well. Therefore, we can use the least false results for bias correction as an alternative to Copas and Eguchi. Figure 3
Comparison 1: β=(-2,-2,-1,-1), θ=(-0.5,0). The blue lines (dotted lines) are simulation estimates using maximum likelihood, the red lines (solid lines) are Copas and Eguchi estimates, and the light blue lines are the LME least false estimates.
### 4.2. CI Coverage for the Estimated β3 and β4
The Copas and Eguchi and LME least false values show how estimates are biased by assuming MAR when the data are MNAR. The misspecification parameter is θ2, with θ2=0 meaning no misspecification. If the value of θ2 were known, then the parameter estimates could be adjusted to take the misspecification into account. This idea is illustrated in this section. For a range of true (generating) θ2, 1000 samples are simulated, each of size 1000. This is a realistic number for applications. In each case, β3 and β4 are estimated using maximum likelihood under a MAR assumption. Afterwards, the estimates are adjusted using either the estimated Copas and Eguchi bias or the bias arising from the least false calculations, in both cases taking an assumed θ2. Coverage of the resulting nominal 95% confidence intervals is then recorded. The estimated confidence interval width is not adjusted, just its location. Tables 1 and 2 give the results. Here we use θ2T for the true θ2, and θ2A denotes the assumed value used in adjusting the estimates. Also, (β3∗∗,β4∗∗) are used for the Copas and Eguchi adjustment method and (β3∗,β4∗) for the least false adjustment method. Table 1
CI coverage in percent for the estimated β3 and β4 at assumed θ2=0. We use θ2T for the true θ2, θ2A for the assumed value in adjusting the estimates, (β3∗∗,β4∗∗) for the Copas and Eguchi adjustment method, and (β3∗,β4∗) for the least false adjustment method. Results based on 1000 samples of size 1000.
| θ2T | θ2A | β3∗∗ | β4∗∗ | β3∗ | β4∗ |
|---|---|---|---|---|---|
| -0.10 | 0.00 | 84.80 | 95.30 | 84.90 | 95.30 |
| -0.09 | 0.00 | 85.90 | 96.70 | 85.80 | 96.70 |
| -0.06 | 0.00 | 92.00 | 94.70 | 91.90 | 94.70 |
| -0.03 | 0.00 | 95.00 | 95.00 | 95.10 | 94.90 |
| 0.00 | 0.00 | 94.70 | 95.10 | 94.70 | 95.10 |
| 0.03 | 0.00 | 95.20 | 94.20 | 95.00 | 94.20 |
| 0.06 | 0.00 | 91.70 | 94.70 | 91.70 | 94.70 |
| 0.09 | 0.00 | 88.00 | 95.00 | 87.80 | 95.00 |
| 0.10 | 0.00 | 83.40 | 95.10 | 83.60 | 95.00 |

Table 2
CI coverage for the estimated β3 and β4 in percent at assumed θ2=-0.10. We use θ2T for the true θ2, θ2A for the assumed value in adjusting the estimates, (β3∗∗,β4∗∗) for the Copas and Eguchi adjustment method, and (β3∗,β4∗) for the least false adjustment method. Results based on 1000 samples of size 1000.
| θ2T | θ2A | β3∗∗ | β4∗∗ | β3∗ | β4∗ |
|---|---|---|---|---|---|
| -0.10 | -0.10 | 95.30 | 95.40 | 95.70 | 95.10 |
| -0.09 | -0.10 | 94.80 | 95.10 | 95.50 | 94.80 |
| -0.06 | -0.10 | 95.80 | 95.20 | 94.90 | 95.60 |
| -0.03 | -0.10 | 92.50 | 94.90 | 92.00 | 95.10 |
| 0.00 | -0.10 | 89.30 | 95.30 | 87.40 | 95.40 |
| 0.03 | -0.10 | 83.70 | 93.80 | 81.20 | 93.80 |
| 0.06 | -0.10 | 74.40 | 95.10 | 70.50 | 95.00 |
| 0.09 | -0.10 | 62.20 | 96.00 | 58.30 | 95.80 |
| 0.10 | -0.10 | 62.70 | 93.90 | 57.80 | 94.20 |

In Table 1, the assumed θ2 is zero, meaning no correction. Results at the correct value θ2T=0 are good; otherwise, the CI for β3 goes badly wrong. Note that there is no correction here, so the Copas and Eguchi and least false results should be the same; small differences arise only because of the different calculations involved. For example, the least false calculation needs an estimate of σx but the Copas and Eguchi one does not. The CI coverage for β4 is not much affected at any true θ2 in the range (-0.1,+0.1); for example, at θ2A=-0.1, the CI coverage for β4 is about 95%, whereas there is undercoverage for β3 when θ2T deviates from zero; for example, at θ2T=-0.1, the CI coverage for β3 is about 85%. This indicates that β4 is less sensitive to the misspecification than β3 in this scenario. In Table 2, the assumed value is θ2=-0.1, which means that dropout is associated with high Y2. Note that, in contrast to Table 1, there is a correction here, so the Copas and Eguchi and least false results will not be the same; for example, at θ2T=+0.1, the CI coverage for β3∗∗ is about 62.7%, but the CI coverage for β3∗ is about 57.8%. However, both estimates β3∗∗ and β3∗ show undercoverage as θ2T moves further from the assumed value -0.1.
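To make the adjustment procedure concrete, the sketch below (our own illustration) runs one replicate: it solves the finite-sample score equations (7)–(8), shifts the estimate, and checks CI coverage for β3. The bias value shown is a placeholder for what the least false or Copas and Eguchi calculation would give at the assumed θ2, variance parameters are treated as known, and the standard error comes from a simple bootstrap; all three are simplifying assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(n, beta, theta, theta2, rho=0.5):
    b1, b2, b3, b4 = beta
    x = rng.normal(size=n)
    z1, z2 = rng.normal(size=n), rng.normal(size=n)
    Y1 = b1 + b2 * x + z1
    Y2 = b3 + b4 * x + rho * z1 + np.sqrt(1 - rho**2) * z2
    R = rng.random(n) < 1 / (1 + np.exp(-(theta[0] + theta[1] * Y1 + theta2 * Y2)))
    return x, Y1, Y2, R

def beta_hat(x, Y1, Y2, R, rho=0.5):
    """Solve the sample version of (7)-(8); sigma1 = sigma2 = 1 assumed known."""
    Vinv = np.linalg.inv(np.array([[1.0, rho], [rho, 1.0]]))
    A, b = np.zeros((4, 4)), np.zeros(4)
    for xi, y1, y2, r in zip(x, Y1, Y2, R):
        Xi = np.array([[1, xi, 0, 0], [0, 0, 1, xi]], dtype=float)
        if r:                          # both timepoints observed
            A += Xi.T @ Vinv @ Xi
            b += Xi.T @ Vinv @ np.array([y1, y2])
        else:                          # only Y1 observed: (1-R)/s1^2 x1 terms
            x1 = np.array([1, xi, 0, 0], dtype=float)
            A += np.outer(x1, x1)
            b += x1 * y1
    return np.linalg.solve(A, b)

beta_true, theta = (-2, -2, -1, -1), (-0.5, 0.0)
x, Y1, Y2, R = simulate(1000, beta_true, theta, theta2=-0.1)
est = beta_hat(x, Y1, Y2, R)

boot = []
for _ in range(200):                   # bootstrap SE, a pragmatic stand-in
    idx = rng.integers(0, len(x), len(x))
    boot.append(beta_hat(x[idx], Y1[idx], Y2[idx], R[idx]))
se = np.std(boot, axis=0)

bias3 = 0.05     # placeholder: beta3*(theta2A) - beta3 from the Section 2.2 calculation
adj3 = est[2] - bias3
print("adjusted beta3:", round(adj3, 3),
      "CI covers true value:", abs(adj3 - beta_true[2]) < 1.96 * se[2])
```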
### 4.3. Sensitivity Analysis
Of course, in practice θ2 is not known. For any given data set, a sensible sensitivity procedure is to plot bias-corrected estimates and confidence intervals for a range of assumed θ2 values. Here, a grid of assumed θ2 from -0.2 to 0.2 is used. We show that, for each limiting value calculated by the Copas and Eguchi method, the simulated values are within noise of the theoretical values for large sample sizes (n=10000). The noise is estimated from the simulations; that is, a confidence interval is obtained from the simulations, giving reassurance that the population values are covered. We first consider a correctly specified MAR model and then assume MAR when the truth is MNAR. Figure 4 illustrates the case when MAR is the correct model (θ2=0): the unadjusted confidence intervals (red lines) include the true parameter values (β3=-1 and β4=-1), and in this case so do the adjusted ones (blue lines). The horizontal lines are at the true values. We note that β3∗∗ decreases as θ2 increases, whereas β4∗∗ increases as θ2 increases. Note also that β4 has a wider CI than β3. Figure 4
CI under MAR: β=(-2,-2,-1,-1), θ=(-0.5,-0.5), θ2T=0. The blue lines are the adjusted estimates, and the red lines are the unadjusted estimates. The horizontal dotted lines are at the true values. Figure 5 has the true θ2=0.1, so here MAR has been fitted to data that are really MNAR. The lines cross at θ2=0 because the same MAR model is fitted. The important point is that better estimates of the true β's are obtained at the correct θ2. Also, as in Figure 4, β4 has a wider CI than β3. Figure 5
CI under MNAR: β=(-2,-2,-1,-1), θ=(-0.5,-0.5), θ2T=0.1. The blue lines are the adjusted estimates, and the red lines are the unadjusted estimates. The horizontal lines are at the true values. Note that, both under MAR and MNAR, β3 and β4 have opposite trends: β3 decreases as θ2 increases, whereas β4 increases as θ2 increases.
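A sensitivity analysis of this kind reduces to a short loop. The sketch below assumes the `least_false` Monte Carlo helper from the Section 2.2 sketch is in scope, and the MAR estimates and standard error shown are placeholders for values that would be fitted from the data at hand.

```python
import numpy as np

beta_true = np.array([-2.0, -2.0, -1.0, -1.0])  # known only because data are simulated
est = np.array([-2.00, -2.00, -1.05, -0.98])    # placeholder MAR fit
se3 = 0.05                                      # placeholder SE for beta3

for t2 in np.linspace(-0.2, 0.2, 9):            # grid of assumed theta2
    bias = least_false(theta2=t2) - beta_true   # beta*(theta2) - beta
    adj3 = est[2] - bias[2]                     # location-shifted estimate
    lo, hi = adj3 - 1.96 * se3, adj3 + 1.96 * se3
    print(f"theta2={t2:+.2f}  beta3 adjusted={adj3:+.3f}  95% CI ({lo:+.3f}, {hi:+.3f})")
```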
## 5. Application: Sensitivity Analysis for Clinical Trial
In this section, the method is illustrated using a real data example. We consider data from a clinical trial with two treatments and two measurement times, as introduced and analysed by Mathews et al. [12]. The covariates are only treatment type and time. The parameter vector is (β1,β2,β3,β4), ignoring any time interaction. There are 422 subjects, assigned to either treatment A or B. Treatment A corresponds to the covariate value x=1 and treatment B to x=0. Then, at time 2, the mean of the group receiving treatment B is β3 and the mean of the group receiving treatment A is β3+β4. At time 1, all subjects provided a response, but 24.4% had dropped out by time 2. There are 212 subjects receiving treatment A, but only 126 provided a response at time 2, the other 86 having dropped out; hence the missingness percentage is about 40%. The dropout reason is not known. For treatment B, there are 210 subjects, of which 193 continued to time 2 and 17 did not, giving around 8% missingness. A sensitivity analysis approach (over a grid of θ2) using the Copas and Eguchi and LME methods is shown in Figure 6. The blue lines use the Copas and Eguchi method and the red lines use the least false method. The idea is to adjust the estimate to compensate for the bias from a misspecified MAR fit; for example, if the least false value under MAR is known to underestimate a parameter, the difference is added back to the estimate. Dashes are the CIs, based on the MAR standard errors. The first plot shows confidence intervals for the treatment B mean as the assumed value of θ2 changes; the horizontal line is the estimate under MAR. The second plot shows the confidence intervals for the mean of treatment A. The third plot shows the difference in means between treatments A and B, that is, the treatment effect β4. In the first plot, the horizontal line is at -0.74, which is the LME estimate for β3. Similarly, the LME estimate for β4 is about -0.40, and β3+β4 equals -1.15; these values are mutually consistent. Figure 6
Clinical trial example: 95% CI for β3, β3+β4, and β4. The blue lines use the Copas and Eguchi method, the red lines use the least false method, and the horizontal line is at the MAR estimate. The first thing to note is how close the least false and Copas and Eguchi estimates are: there is almost no difference over this range of θ2. We take θ2 from -1.5 to +1.5. The value of θ1 estimated under MAR is -1.66, meaning the range of θ2 allows Y2 to have the same order of effect as Y1. Clearly, at large values of θ2, there is concern that the misspecification is not local, which is the assumption of Copas and Eguchi. However, the least false results apply to any misspecification, not necessarily local, and the fact that the Copas and Eguchi estimate is so close to the least false one suggests that it can work well even under quite large misspecification. When θ2 is negative, the estimates are adjusted upwards, and the opposite is true for positive θ2. This makes sense: at negative θ2, large Y2 values have a low probability of staying in the trial. Hence the observed means are lower than they would be in the hypothetical no-dropout situation, so we adjust upwards. The estimates are affected more at positive θ2 than at negative θ2. At the very largest θ2 shown, there would be a significant change in the value of the estimated true mean. However, there is very little effect of misspecification on the difference between means (third subplot), as the adjustments essentially cancel.
## 6. Conclusion
We considered the Linear Mixed Effect model (maximum likelihood method) for handling missing data. Then, by deriving the so-called least false values, we investigated the consequences of misspecifying the missingness mechanism. Closed form expressions were given to calculate the least false values β3∗ and β4∗. Knowledge of these least false values allowed us to conduct a sensitivity analysis, which was illustrated for the LME method. Copas and Eguchi [2] gave a formula to estimate the bias under misspecification. We derived and explored the Copas and Eguchi approximation for the bias arising from misspecification of the working model. The results found using the Copas and Eguchi method were compared with the results obtained by the proposed least false method. We also applied the Copas and Eguchi method to estimate the bias for the real data example. Moreover, we explained how to use a sensitivity analysis to see how the methods work over a range of θ2. We found that the Copas and Eguchi method and the LME least false values match very well: both gave very close results over the grid of θ2 considered. This suggests that the least false method can provide a credible alternative to Copas and Eguchi in sensitivity analysis; in fact, it might be preferred, since there is no assumption of local misspecification. Finally, we illustrated the results using example data from a clinical trial with two measurement times.
---
*Source: 1019303-2019-07-01.xml* | 2019 |
# Using Response Surface Methodology to Optimize Edible Coating Formulations to Delay Ripening and Preserve Postharvest Quality of Tomatoes
**Authors:** Robin Tsague Donjio; Jean Aghofack Nguemezi; Mariette Anoumaa; Eugene Tafre Phounzong; Justine Odelonne Kenfack; Théophile Fonkou
**Journal:** Journal of Food Quality
(2023)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2023/1019310
---
## Abstract
Tomato is a nutrient-rich but highly perishable fruit. In order to delay the rapid ripening and degradation of fruits and reduce postharvest losses, response surface methodology (RSM) was used to optimize an edible coating formulation based on pineapple peel extract and Arabic gum, across twenty coating runs with pineapple peel extract concentrations of 0.5–0.83 kg/l and Arabic gum concentrations of 5–15% (w/v). Tomatoes were soaked for 10–30 min in the coating solutions. Five parameters, namely, ripening rate, chlorophyll a content, firmness, total flavonoid content, and titratable acidity of tomatoes, were evaluated after 8 days of storage at 24 ± 0.5°C and 82 ± 1.5% relative humidity. Results showed that the experimental data could be adequately fitted to a second-order polynomial model with coefficients of determination (R2) ranging from 0.775 to 0.976 for all the variables studied. The optimum concentrations were predicted as 0.70 kg/l pineapple peel extract and 17.04% Arabic gum, with an optimum soaking time of 18.72 min. Under these conditions, the predicted values of the response variables are as follows: ripening rate (RR) 40.75, chlorophyll a (Chl a) 8.11, firmness (Fir) 4.00, total flavonoid content (TFC) 43.51, and titratable acidity (TA) 0.30. It is concluded that RSM can be used to optimize pineapple peel extract and Arabic gum-based edible coating formulations to extend the shelf life and delay the ripening process of tomato fruit under ambient conditions.
---
## Body
## 1. Introduction
Tomato (Solanum lycopersicum L.) is one of the most consumed fruits in the world [1]. It is important in human nutrition and health due to its high nutrient content and significant amounts of bioactive substances such as lycopene, ascorbic acid, tocopherols, folic acid, and flavonoids [2–4]. Due to their high nutritive value and water content, postharvest tomatoes are susceptible to diseases. These fruits are also sensitive to low-temperature storage [5, 6]. This leads to the loss of the quality parameters of the fruits, such as color, texture, aroma, and appearance, responsible for their commercial interest [4, 7]. Previous studies reported an increase in tomato shelf life through modified atmosphere storage (relatively high CO2 and low O2) and controlled atmosphere storage [8], active cardboard packaging [9], and genetic engineering [10]. In order to meet the increasing demand for and consumption of minimally processed and additive-free foods, different means, such as edible coatings, have been used to extend the shelf life of tomatoes. Many edible coatings are made from waste agricultural resources through bio-production [11]. Edible coatings are thin layers of edible components such as hydrocolloids (polysaccharides and proteins), lipids (waxes and resins), and synthetic polymers, applied to the fruit's surface in addition to or as a replacement for the natural protective waxy coating [12]. They act as a physical barrier to carbon dioxide, oxygen, and moisture movement for the fruits [13]. The use of edible films and coatings containing synthetic antimicrobial agents and organic and vegetable materials has been shown to be useful in preserving the quality of tomatoes [13, 14]. Moreover, these materials have been used to incorporate functional ingredients such as antioxidants, antimicrobial agents, plant extracts, byproduct extracts, and nutraceuticals into fruits [15–19]. Pineapple (Ananas comosus L. Merr.) is a fruit rich in several nutrients and bioactive compounds, including vitamin C, calcium, and nonvolatile organic acids such as malate and citrate, and it has been used to preserve the postharvest quality of tomato and strawberry [20–23]. Luo et al. [24] reported that the polysaccharides contained in pineapple peels have some degree of antioxidant activity. Arabic gum (AG) has been used as a barrier to CO2 and O2 in some edible coating formulations to extend the shelf life of fruits such as guava, mango, and tomato [17, 25, 26]. Response surface methodology (RSM) has been used to study the properties of edible films and the main formulation factors that affect the shelf life and quality of some fruits [27, 28]. Therefore, the objective of this study was to use RSM to determine the optimum concentrations of aqueous pineapple peel extract and Arabic gum, as well as the optimal soaking time, for coating tomatoes in order to increase shelf life and reduce postharvest losses.
## 2. Materials and Methods
### 2.1. Biological Material
Healthy tomatoes at the mature green stage (Figure 1) were collected from a local farm in Dschang (Cameroon) and kept in the laboratory. The pineapples used as coating material were harvested from farmers' fields in Melong (Cameroon) at ripening stage four (with low sugar content) [29]. Arabic gum was harvested from Acacia senegal trees in Garoua, northern Cameroon [30]. Figure 1
Mature green tomato fruits.
### 2.2. Coating Preparation
Pineapples were peeled after washing with water. The peels were dried in the shade, crushed in a mill, and then ground to obtain a homogeneous paste. Different quantities of paste were weighed (Table 1) and macerated in a water/ethanol mixture (1/1, v/v); 230 μl/l of bleach was added to disinfect the medium. The macerate was transferred onto a sieve for filtration. In order to thicken the extract and form an adhesive, transparent film on the surface of the tomatoes, Arabic gum was added to the filtrate as the coating matrix according to the quantities shown in Table 1. The mixtures were macerated for 15 hours, allowing the pineapple peel extract to adhere to the Arabic gum. Table 1
Center composite design (CCD) and experimental data obtained for the response variables studied.
| Run | CPE (kg/l) | CGA (%) | Time (min) | RR (%) | Chl a (μg/g) | Fir (N) | TFC (μg/ml) | TA (%) |
|---|---|---|---|---|---|---|---|---|
| T1 | 0.83 | 5 | 30 | 46.67 | 7.02 | 3.91 | 59.64 | 0.275 |
| T2 | 0.83 | 15 | 10 | 53.33 | 5.47 | 4.00 | 74.58 | 0.352 |
| T3 | 0.83 | 15 | 30 | 40.00 | 7.80 | 4.00 | 78.79 | 0.291 |
| T4 | 0.51 | 5 | 30 | 46.67 | 5.57 | 3.77 | 86.04 | 0.293 |
| T5 | 0.83 | 5 | 10 | 46.67 | 6.15 | 3.93 | 76.47 | 0.293 |
| T6 | 0.51 | 5 | 10 | 70.00 | 6.97 | 3.98 | 61.11 | 0.298 |
| T7 | 0.51 | 15 | 10 | 66.67 | 8.37 | 3.93 | 37.45 | 0.283 |
| T8 (C) | 0.67 | 10 | 20 | 36.67 | 8.37 | 4.00 | 40.92 | 0.278 |
| T9 | 0.51 | 15 | 30 | 36.67 | 9.84 | 3.76 | 51.44 | 0.259 |
| T10 (C) | 0.67 | 10 | 20 | 35.26 | 8.20 | 3.99 | 40.89 | 0.276 |
| T11 (C) | 0.67 | 10 | 20 | 37.36 | 8.60 | 3.97 | 40.29 | 0.279 |
| T12 (C) | 0.67 | 10 | 20 | 35.46 | 8.82 | 4.00 | 40.79 | 0.272 |
| T13 | 0.67 | 0.955 | 20 | 36.67 | 3.65 | 3.86 | 49.54 | 0.299 |
| T14 | 0.67 | 10 | 1.91 | 63.33 | 6.55 | 4.00 | 64.90 | 0.345 |
| T15 (C) | 0.67 | 10 | 20 | 36.56 | 8.68 | 4.00 | 40.12 | 0.279 |
| T16 | 0.381 | 10 | 20 | 50.00 | 11.98 | 3.90 | 57.85 | 0.26 |
| T17 | 0.67 | 19.05 | 20 | 40.00 | 9.38 | 4.00 | 39.44 | 0.305 |
| T18 (C) | 0.67 | 10 | 20 | 35.76 | 8.48 | 3.99 | 40.61 | 0.271 |
| T19 | 0.67 | 10 | 38.09 | 33.33 | 8.30 | 3.75 | 67.06 | 0.300 |
| T20 | 0.959 | 10 | 20 | 46.67 | 7.63 | 4.00 | 72.69 | 0.292 |

CPE: concentration of water/ethanol pineapple peel extract; CGA: concentration of gum Arabic; C: center point; RR: ripening rate; Fir: firmness; TFC: total flavonoid content; TA: titratable acidity; Chl a: chlorophyll a.
### 2.3. Coating
Tomatoes were washed and soaked in the different coatings for 10, 20, or 30 minutes, depending on the experimental design, for coatings T1 to T20 (Table 1). Ten fruits were left without coating as the control. All the fruits were left on the bench at room temperature (24 ± 0.5°C) and 82 ± 1.5% RH.
### 2.4. Analytical Methods
Two physical parameters and three physiological parameters, namely, ripening rate (RR), firmness, total flavonoid content, chlorophyll a content, and titratable acidity (TA) of fruits, were evaluated after 8 days of storage.
#### 2.4.1. Ripening Rate
The ripening rate was evaluated by counting the number of red ripe fruits (Figure 2) on the 8th day after treatment, according to the ripening stages defined by the United Fresh Fruit and Vegetable Association in cooperation with the USDA [31]. Figure 2
Red ripe tomato fruits.
#### 2.4.2. Determination of Chlorophyll a Content
Quantitative analysis of chlorophyll a content in tomato pulp was performed using a Biochrom Libra S22 spectrophotometer. Chlorophyll a content was determined using the method described by Nagata and Yamashita [32]. Six (6) grams of tomato pulp was crushed and introduced into a test tube, and then 10 ml of acetone/hexane (4/6, v/v) was added. The mixture was stored at 4°C for 48 hours. Subsequently, chlorophyll a in the hexanolic extracts was detected by spectrophotometry at 663 and 645 nm. The chlorophyll a content was calculated using the following equation:

$$\text{Chlorophyll a}\ (\mu g/100\,\text{ml}) = 0.0999\,A_{663} - 0.0989\,A_{645}, \tag{1}$$

where $A_{663}$ and $A_{645}$ are the absorbances at 663 nm and 645 nm, respectively.
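Equation (1) translates directly into code; a one-line helper (our own, with illustrative absorbance values):

```python
def chlorophyll_a(a663: float, a645: float) -> float:
    """Chlorophyll a (ug/100 ml of extract) from eq. (1)."""
    return 0.0999 * a663 - 0.0989 * a645

# e.g. absorbances of 0.85 at 663 nm and 0.30 at 645 nm (illustrative):
print(chlorophyll_a(0.85, 0.30))   # ~0.0552
```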
#### 2.4.3. Firmness
Firmness is the force (N) required to press the fruit against the tip of a penetrometer. The epicarp was removed at the equatorial and top regions of the tomato fruits. The cylindrical tip of the penetrometer was pressed down gradually on the tomato notches, and the measurements were read on the scale of the penetrometer [23].
#### 2.4.4. Total Flavonoid Content
The concentration of total flavonoids was measured using the aluminum chloride colorimetric method [33] with some modifications. One milliliter (1 ml) of filtered tomato juice was added to a 10 ml Erlenmeyer flask containing 4 ml of distilled water. Then, 0.3 ml of 5% NaNO2 was added. After 5 min, 0.3 ml of 10% AlCl3 was added. Finally, 2 ml of 1 M NaOH was added after 6 minutes, and the volume was completed to 10 ml with distilled water. The solution was mixed thoroughly, and the absorbance was measured at 510 nm using a spectrophotometer. Flavonoid contents were determined against a catechin standard curve, in μg/ml.
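Converting the measured absorbance at 510 nm into μg/ml of catechin equivalents is a linear standard-curve calculation; a minimal sketch, with made-up calibration points (the study's own standards are not reported here):

```python
import numpy as np

# Hypothetical catechin calibration: concentration (ug/ml) vs absorbance at 510 nm
std_conc = np.array([0.0, 20.0, 40.0, 60.0, 80.0])
std_abs = np.array([0.00, 0.11, 0.21, 0.32, 0.43])
slope, intercept = np.polyfit(std_conc, std_abs, 1)   # A = slope*C + intercept

sample_abs = 0.26                                     # measured sample absorbance
tfc = (sample_abs - intercept) / slope                # ug/ml catechin equivalents
print(round(tfc, 1))
```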
#### 2.4.5. Titratable Acidity
Twenty milliliters of tomato juice was made up to 40 ml with distilled water. The mixture was then titrated to pH 8.1, and the titre value was used to calculate the titratable acidity following the method of Gharezi et al. [34]:

$$\%\ \text{acid} = \frac{\text{titre value} \times \text{normality} \times \text{milli-equivalent weight of acid}}{\text{volume of sample}} \times 100, \tag{2}$$

where the milli-equivalent weight of citric acid is 0.06404.
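Equation (2) in code form (our sketch; the NaOH normality of 0.1 N is an assumed illustrative value, not stated in the text):

```python
def titratable_acidity(titre_ml: float, normality: float = 0.1,
                       sample_ml: float = 20.0, meq_wt: float = 0.06404) -> float:
    """% acid (as citric acid) from eq. (2)."""
    return (titre_ml * normality * meq_wt / sample_ml) * 100

print(round(titratable_acidity(9.0), 3))   # a 9 ml titre gives ~0.288 % acid
```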
### 2.5. Experimental Design and Statistical Analysis
RSM was used to generate the experimental design, the statistical analysis, and the regression model with the help of Minitab software. The central composite rotatable design (CCRD) with a quadratic model [35] was employed, as in Nandane et al. [28]. Each independent variable had three (3) coded levels: −1.809, 0, and +1.809 (Table 2). Six replicates of the center point were included in random order according to a CCRD configuration for three factors divided into two blocks. The axial (α) points outside the low–high ranges were selected for rotatability of the design [36]. The center points for these designs were selected with ingredients at levels expected to yield satisfactory experimental results. Twenty (20) edible coating formulations with different concentrations of pineapple peel extract (0.5–0.83 kg/l) and Arabic gum (5–15%) and soaking times of 10–30 min were designed. The response functions (y) measured were the ripening rate, chlorophyll a content, firmness, total flavonoid content, and titratable acidity of tomatoes. The response values are related to the coded variables ($x_i$, $i = 1, 2, 3$) by a second-degree polynomial equation given as follows:

$$Y_i = a_0 + a_1X_1 + a_2X_2 + a_3X_3 + a_{12}X_1X_2 + a_{13}X_1X_3 + a_{23}X_2X_3 + a_{11}X_1^2 + a_{22}X_2^2 + a_{33}X_3^2. \tag{3}$$

Table 2
Level of independent variables used for the center composite design (CCD).
| Independent variable | Symbol | −α (−1.809) | Low | High | α (1.809) |
|---|---|---|---|---|---|
| CPE (kg/l) | X1 | 0.381 | 0.5 | 0.83 | 0.959 |
| CGA (%) | X2 | 0.955 | 5 | 15 | 19.045 |
| Time (min) | X3 | 1.909 | 10 | 30 | 38.091 |

CPE: concentration of pineapple peel extract in water/ethanol (1/1, v/v) solvent mixture; CGA: concentration of gum arabic.

The coefficients of the polynomial equation are $a_0$ (constant term), $a_1$, $a_2$, and $a_3$ (linear effects), $a_{12}$, $a_{13}$, and $a_{23}$ (interaction effects), and $a_{11}$, $a_{22}$, and $a_{33}$ (quadratic effects). The regression analysis was performed and regression tables were generated; the effects and regression coefficients of the individual linear, quadratic, and interaction terms were determined. The significance of all terms in the polynomial equation was assessed statistically by computing the F-value and comparing response variables at standard significance levels of 0.1, 0.05, 0.01, and 0.001, because tomato fruits are perishable agricultural products with large variations in quality attributes between individual fruits [27]. The adequacy of the model was determined using regression coefficient (R2) analysis. Using Minitab software, numerical and graphical optimization procedures were applied to determine the optimum levels of the independent variables.
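The second-order fit itself is ordinary least squares on an expanded model matrix. The study used Minitab; the sketch below (our own) reproduces the same normal-equations fit in Python on the ripening-rate column of Table 1 (values as transcribed above), and the printed coefficients should approximately match the actual-factor equation (4) reported in Section 3.1.

```python
import numpy as np

# columns: CPE (kg/l), CGA (%), time (min), ripening rate (%) -- Table 1
runs = np.array([
    [0.83, 5, 30, 46.67], [0.83, 15, 10, 53.33], [0.83, 15, 30, 40.00],
    [0.51, 5, 30, 46.67], [0.83, 5, 10, 46.67], [0.51, 5, 10, 70.00],
    [0.51, 15, 10, 66.67], [0.67, 10, 20, 36.67], [0.51, 15, 30, 36.67],
    [0.67, 10, 20, 35.26], [0.67, 10, 20, 37.36], [0.67, 10, 20, 35.46],
    [0.67, 0.955, 20, 36.67], [0.67, 10, 1.91, 63.33], [0.67, 10, 20, 36.56],
    [0.381, 10, 20, 50.00], [0.67, 19.045, 20, 40.00], [0.67, 10, 20, 35.76],
    [0.67, 10, 38.09, 33.33], [0.959, 10, 20, 46.67]])
x1, x2, x3, y = runs.T

# model matrix for eq. (3): intercept, linear, interaction and quadratic terms
M = np.column_stack([np.ones_like(x1), x1, x2, x3,
                     x1 * x2, x1 * x3, x2 * x3, x1**2, x2**2, x3**2])
coef, *_ = np.linalg.lstsq(M, y, rcond=None)
for name, c in zip(["a0", "a1", "a2", "a3", "a12", "a13", "a23",
                    "a11", "a22", "a33"], coef):
    print(f"{name:>3} = {c: .4f}")
```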
## 3. Results and Discussion
### 3.1. Effect of Edible Coating on Ripening Rate of Tomato Fruits
As shown in Figure 3, the ripening rate (RR) of the coated tomatoes decreased with the pineapple peel extract concentration, while the optimum predicted time of treatment was 20 min; the CGA was fixed at 10%. Ali et al. [17] observed that fruit coated with 10% Arabic gum showed delayed ripening through a slowing of the rates of respiration and ethylene production. The regression coefficient table of the RSM analysis with ripening rate as the response variable is shown in Table 3. The model F-value of 69.25 obtained for the effect on the ripening rate (%) of treated tomato fruit implies that this model is significant. Values of "Prob > F" less than 0.1 indicate significant model terms (Table 3). In this case, a1, a3, a13, a11, a22, and a33 are significant model terms. Thus, the ripening rate is affected by the linear effects of pineapple peel extract concentration and time, by the interaction between time of treatment and concentration of pineapple peel extract, and by the quadratic effects of the three factors. The lack-of-fit p value of <0.001 implies that the lack of fit is significant relative to the pure error. Thus, the independent variables had a significant effect on the ripening rate. Observations from the RSM analysis suggested that the ripening rate was negatively related to the concentration of the pineapple peel extract used (Figure 3): as the concentration of the pineapple peel extract in the solution increased, there was a relative decrease in the ripening rate of the fruit. This suggests that the calcium and antioxidant compounds in the pineapple extract may have induced the delay of ripening [37–40]. The final equation in terms of actual factors for the ripening rate (evaluated numerically in the sketch after Table 3) is as follows:

$$\text{RR}\ (\%) = 220.7 - 349.5\,\text{CPE} - 1.86\,\text{CGA} - 4.329\,\text{Time} + 186.0\,\text{CPE}^2 + 0.0683\,\text{CGA}^2 + 0.0476\,\text{Time}^2 + 2.08\,\text{CPE}\cdot\text{CGA} + 3.13\,\text{CPE}\cdot\text{Time} - 0.0500\,\text{CGA}\cdot\text{Time}. \tag{4}$$

Figure 3
Response surface for the ripening rate of coated tomatoes as a function of the concentration of pineapple peel extract and time of treatment. Table 3
Regression coefficients, R2, R2 (adj), and probability values for the five dependent variables.
| Regression coefficients | RR | Chl a | Firmness | TFC | TA |
|---|---|---|---|---|---|
| Constant | 220.7ᶜ | 8.22ᶜ | 4.043ᶜ | 255.1ᵇ | 0.374ᶜ |
| CPE | −349.5ᶜ | −15.1ᶜ | 0.090ᶜ | −469.3ᵇ | −0.0195ᶜ |
| CGA | −1.86 | 1.185ᶜ | −0.0039ᶜ | −11.27ᵇ | −0.0151ᵃ |
| Time | −4.329ᶜ | −0.041ᶜ | −0.011ᶜ | −1.00 | −0.0025 |
| CPE∗CPE | 4.76ᶜ | 10.20 | −0.562ᵇ | 348.0ᶜ | −0.0284 |
| CGA∗CGA | 1.71 | −0.0298ᶜ | −0.0008ᶜ | 0.102 | 0.00029ᶜ |
| Time∗Time | 4.76ᶜ | −0.0046ᵇ | −0.0004ᶜ | 0.091ᶜ | 0.00013ᶜ |
| CPE∗CGA | 1.66 | −0.872ᵇ | 0.0342ᵇ | 11.80ᶜ | 0.0194ᶜ |
| CPE∗Time | 5.00ᵇ | 0.244 | 0.0282ᶜ | −4.03ᵇ | −0.00392ᶜ |
| CGA∗Time | −2.50 | 0.0108 | 0.0001 | 0.025 | −0.00015ᶜ |
| R2 | 0.899 | 0.881 | 0.950 | 0.901 | 0.976 |
| R2 (adj) | 0.809 | 0.775 | 0.905 | 0.812 | 0.955 |
| Model F-value | 69.25 | 28.23 | 9.41 | 866.44 | 3.06 |
| Lack of fit (p value) | <0.001 | 0.001 | 0.014 | <0.001 | 0.122 |

ᶜ Significant at the 0.01 level. ᵇ Significant at the 0.05 level. ᵃ Significant at the 0.1 level. RR: ripening rate; Chl a: chlorophyll a; TFC: total flavonoid content; TA: titratable acidity.
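As a quick plausibility check (not part of the original analysis), equation (4) can be evaluated directly in code; at the center point of the design (CPE = 0.67 kg/l, CGA = 10%, time = 20 min) the published coefficients give a predicted ripening rate of about 36.6%, consistent with the center-point runs of roughly 35–37% reported in Table 1.

```python
def ripening_rate(cpe, cga, time):
    """Equation (4): fitted ripening rate (%) in actual factor units,
    with CPE in kg/l, CGA in %, and soaking time in min."""
    return (220.7 - 349.5 * cpe - 1.86 * cga - 4.329 * time
            + 186.0 * cpe ** 2 + 0.0683 * cga ** 2 + 0.0476 * time ** 2
            + 2.08 * cpe * cga + 3.13 * cpe * time - 0.0500 * cga * time)

# Center point of the CCRD: the model prediction should sit close to
# the observed center-point responses after 8 days of storage.
print(round(ripening_rate(0.67, 10.0, 20.0), 2))  # -> 36.6
```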
### 3.2. Effect of Edible Coating on Chlorophyll a in Tomatoes
Chlorophyll a is an appropriate biomarker for evaluating the ripening-retarding effects of edible coatings, because it is part of the ripening process: chlorophyll a and chlorophyll b are converted into chlorophyllide a and then pheophorbide a before complete degradation into nongreen products [11, 37]. As shown in Figure 4, the concentration of chlorophyll a gradually increased with the Arabic gum concentration and with the time of treatment. The regression coefficients from the RSM analysis with chlorophyll a as the response variable are shown in Table 3. The model F-value (28.23) implies that the model is significant. In this case, the chlorophyll a value was influenced by the concentration of Arabic gum and by the time of treatment, but only the coefficient of the linear term for time of treatment was significant (p<0.1). The evolution of the ripening rate was confirmed by the alteration in chlorophyll a levels (Figure 3). The treatments applied inhibited chlorophyll a breakdown at different rates, thereby delaying the ripening process. The final equation in terms of actual factors for chlorophyll a is as follows:

$$\text{Chlorophyll a}\,(\mu\text{g/g}) = 8.22 - 15.1\,\text{CPE} + 1.185\,\text{CGA} - 0.041\,\text{Time} + 10.20\,\text{CPE}^2 - 0.02980\,\text{CGA}^2 - 0.00466\,\text{Time}^2 - 0.872\,\text{CPE}\cdot\text{CGA} + 0.244\,\text{CPE}\cdot\text{Time} + 0.01085\,\text{CGA}\cdot\text{Time} \tag{5}$$

Figure 4
Response surface for chlorophyll a of coated tomatoes as a function of Arabic gum concentration and time of treatment.
### 3.3. Effect of Edible Coating on the Firmness of Tomatoes
As shown in Figure 5, the firmness of the fruits increased with the Arabic gum concentration in the coating solution. Firmness was also affected by the pineapple peel extract concentration and the time of treatment. The regression coefficients from the RSM analysis with firmness as the response variable are shown in Table 3. The model F-value (9.41) obtained for the firmness of treated tomato fruit implies that the model is significant. Ali et al. [41] showed that coating tomato fruit with Arabic gum at 10% resulted in a significant delay in the change of firmness. Low levels of respiratory gas (O2, CO2) exchange limit pectin esterase and polygalacturonase activities and allow retention of firmness. Calcium ions, acting as a firming agent in edible coatings, could improve the rigidity of the cell wall of coated fruits [37, 39]. The final equation in terms of actual factors for firmness is given as follows:

$$\text{Firmness}\,(\text{N}) = 4.043 + 0.090\,\text{CPE} - 0.0039\,\text{CGA} - 0.01105\,\text{Time} - 0.562\,\text{CPE}^2 - 0.000849\,\text{CGA}^2 - 0.000380\,\text{Time}^2 + 0.0342\,\text{CPE}\cdot\text{CGA} + 0.02823\,\text{CPE}\cdot\text{Time} + 0.000147\,\text{CGA}\cdot\text{Time} \tag{6}$$

Figure 5
Response surface for firmness of coated tomatoes as a function of pineapple peel extract concentration and time of treatment.
### 3.4. Effect of Edible Coating on Total Flavonoid Content of Tomatoes
Figure 6 shows that the total flavonoid content (TFC) increased with the pineapple peel extract concentration in the coating solution and decreased with the time of treatment. The TFC value was affected by the interaction effect of pineapple peel extract and Arabic gum, the quadratic effect of pineapple peel extract, and the quadratic effect of time of treatment in the coating formulation. The regression coefficients from the RSM analysis with flavonoid content as the response variable are shown in Table 3. The model F-value of 866.44 implies that the model is significant. Flavonoid compounds are secondary metabolites in plants with antioxidant capacities that can be produced during the abiotic stress imposed by edible coatings on tomatoes [13]. The final equation in terms of actual factors for the flavonoid content is as follows:

$$\text{TFC}\,(\mu\text{g/ml}) = 255.1 - 469.3\,\text{CPE} - 11.27\,\text{CGA} - 1.00\,\text{Time} + 348.0\,\text{CPE}^2 + 0.1024\,\text{CGA}^2 + 0.0913\,\text{Time}^2 + 11.80\,\text{CPE}\cdot\text{CGA} - 4.03\,\text{CPE}\cdot\text{Time} + 0.0252\,\text{CGA}\cdot\text{Time} \tag{7}$$

Figure 6
Response surface for total flavonoid content of coated tomatoes as a function of pineapple peel extract concentration and Arabic gum concentration.
### 3.5. Effect of Edible Coating on Titratable Acidity of Tomatoes
The titratable acidity (TA) values of coated fruit during storage were maintained with increasing Arabic gum concentration and decreased with increasing pineapple peel extract concentration (Figure 7), and the linear term was significant (p<0.1). The TA value was positively related to the Arabic gum concentration. The regression coefficients from the RSM analysis with titratable acidity as the response variable are shown in Table 3. The model F-value of 3.06 implies that the model is not significant. A similar trend was observed by Ali et al. [41], who reported that an Arabic gum coating delayed ripening of tomato by providing a semipermeable film around the fruit. Since organic acids, such as malic or citric acid, are primary substrates for respiration, a reduction in acidity is expected in highly respiring fruit, as reported by El-Anany et al. [42]. The final equation in terms of actual factors for TA is as follows:

$$\text{TA}\,(\%) = 0.3740 - 0.0195\,\text{CPE} - 0.01517\,\text{CGA} - 0.002492\,\text{Time} - 0.0284\,\text{CPE}^2 + 0.000289\,\text{CGA}^2 + 0.000134\,\text{Time}^2 + 0.01940\,\text{CPE}\cdot\text{CGA} - 0.00392\,\text{CPE}\cdot\text{Time} - 0.000155\,\text{CGA}\cdot\text{Time} \tag{8}$$

Figure 7
Response surface for titratable acidity of tomatoes as a function of pineapple peel extract concentration and Arabic gum concentration.
## 4. Conclusion
Increasing the concentrations of pineapple peel extract and Arabic gum improved the thickness of the edible coating and had important effects on fruit quality. The ripening rate was correlated with the alteration in the level of chlorophyll a, which decreased as the tomato fruits ripened. The effect of the coating thickness was reflected in the correlation between the production of secondary metabolites, such as flavonoid compounds, and the increasing concentrations of pineapple peel extract and Arabic gum. The optimum CPE concentration, CGA concentration, and time of treatment were predicted to be 0.70 kg/l, 17.04%, and 18.72 min, respectively, with predicted response values of RR 40.75%, chlorophyll a 8.106 μg/g, firmness 4.00 N, TFC 43.51 μg/ml, and TA 0.302%. An edible coating formulation with pineapple peel extract and Arabic gum can be used to extend the shelf life and delay the ripening process of tomatoes under ambient conditions. The RSM method is an effective way to study the effect of edible coatings on the postharvest ripening of tomato fruits.
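As a closing sketch (again outside the original Minitab workflow), the five fitted models of equations (4)–(8) can be evaluated at the predicted optimum; with the rounded coefficients published in the text, this reproduces the reported predictions to within rounding error (e.g., RR ≈ 40.8% vs. 40.75%, TA ≈ 0.303% vs. 0.302%).

```python
# Coefficients of equations (4)-(8), ordered as
# (a0, aCPE, aCGA, aTime, aCPE^2, aCGA^2, aTime^2,
#  aCPE*CGA, aCPE*Time, aCGA*Time).
MODELS = {
    "RR (%)":       (220.7, -349.5, -1.86, -4.329, 186.0, 0.0683,
                     0.0476, 2.08, 3.13, -0.0500),
    "Chl a (ug/g)": (8.22, -15.1, 1.185, -0.041, 10.20, -0.02980,
                     -0.00466, -0.872, 0.244, 0.01085),
    "Firmness (N)": (4.043, 0.090, -0.0039, -0.01105, -0.562, -0.000849,
                     -0.000380, 0.0342, 0.02823, 0.000147),
    "TFC (ug/ml)":  (255.1, -469.3, -11.27, -1.00, 348.0, 0.1024,
                     0.0913, 11.80, -4.03, 0.0252),
    "TA (%)":       (0.3740, -0.0195, -0.01517, -0.002492, -0.0284,
                     0.000289, 0.000134, 0.01940, -0.00392, -0.000155),
}

def predict(c, cpe, cga, t):
    """Evaluate one second-order model in actual factor units."""
    a0, a1, a2, a3, a11, a22, a33, a12, a13, a23 = c
    return (a0 + a1 * cpe + a2 * cga + a3 * t
            + a11 * cpe ** 2 + a22 * cga ** 2 + a33 * t ** 2
            + a12 * cpe * cga + a13 * cpe * t + a23 * cga * t)

# Predicted optimum: CPE = 0.70 kg/l, CGA = 17.04%, time = 18.72 min
for name, c in MODELS.items():
    print(f"{name}: {predict(c, 0.70, 17.04, 18.72):.3f}")
```

The small residual differences (e.g., chlorophyll a evaluates to about 8.04 μg/g vs. the reported 8.106 μg/g) come from rounding of the published coefficients.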
---
*Source: 1019310-2023-02-15.xml*
# SAROTUP: Scanner and Reporter of Target-Unrelated Peptides
**Authors:** Jian Huang; Beibei Ru; Shiyong Li; Hao Lin; Feng-Biao Guo
**Journal:** Journal of Biomedicine and Biotechnology
(2010)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2010/101932
---
## Abstract
As epitope mimics, mimotopes have been widely utilized in the study of epitope prediction and the development of new diagnostics, therapeutics, and vaccines. Screening the random peptide libraries constructed with phage display or any other surface display technologies provides an efficient and convenient approach to acquire mimotopes. However, target-unrelated peptides creep into mimotopes from time to time through binding to contaminants or other components of the screening system. In this study, we present SAROTUP, a free web tool for scanning, reporting and excluding possible target-unrelated peptides from real mimotopes. Preliminary tests show that SAROTUP is efficient and capable of improving the accuracy of mimotope-based epitope mapping. It is also helpful for the development of mimotope-based diagnostics, therapeutics, and vaccines.
---
## Body
## 1. Introduction
In 1985, Smith pioneered phage display technology, an in vitro methodology and system for presenting, selecting, and evolving proteins and peptides displayed on the surface of phage virions [1]. Since then, phage display has developed rapidly and become an increasingly popular tool for both basic research, such as the exploration of protein-protein interaction networks and sites [2–4], and applied research, such as the development of new diagnostics, therapeutics, and vaccines [5–10]. Usually, the protein used to screen the phage display library is termed the target, and the genuine partner binding to the target is called the template. A peptide mimicking the binding site on the template and binding to the target is defined as a mimotope, a term first introduced by Geysen et al. [11]. One of the most frequently used types of target is the monoclonal antibody. In this situation, the template is the corresponding antigen inducing the antibody, and the mimotope is a mimic of the genuine epitope. In fact, the original definition of mimotope given by Geysen et al. goes: “A mimotope is defined as a molecule able to bind to the antigen combining site of an antibody molecule, not necessarily identical with the epitope inducing the antibody, but an acceptable mimic of the essential features of the epitope [11].” Mimotopes and the corresponding epitope are considered to have similar physicochemical properties and spatial organization. The mimicry between mimotopes and the genuine epitope makes mimotopes a reasonable basis for epitope mapping, network inference, and the development of new diagnostics, therapeutics, and vaccines.

Powered by phage display technology, mimotopes can be acquired in a relatively cheap, efficient, and convenient way, that is, by screening phage-displayed random peptide libraries with a given target. However, not all phages selected out are target-specific, because the target itself is only one component of the screening system [12]. From time to time, phages reacting with contaminants in the target sample or with other components of the screening system, such as the solid phase (e.g., plastic plates) and the capturing molecule (e.g., streptavidin, secondary antibody), rather than binding to the actual target, are recovered along with the target-specific binders (displaying mimotopes) during the rounds of panning. Peptides displayed on these phages are called target-unrelated peptides (TUP), a term coined recently by Menendez and Scott in a review [12].

The results from phage display technology might be a mixture of target-unrelated peptides and mimotopes, and it can be difficult to discriminate TUP from mimotopes, since the binding assays used to confirm the affinity of peptides for the target often employ the same components as the initial panning experiment [12]. Therefore, target-unrelated peptides might be taken into a study as mimotopes if the researchers are not careful enough. Undoubtedly, this makes the conclusions of such a study dubious. Several such examples have been discussed in references [12, 13]. Obviously, target-unrelated peptides are not appropriate candidates for the development of new diagnostics, therapeutics, and vaccines. For mimotope-based epitope mapping, target-unrelated peptides are the main source of noise: if TUP are included in the mapping, the input data are improper and the result might be misleading [14].
There are now quite a few programs for mimotope-based epitope mapping; none of them, however, has a procedure to scan, report, and exclude target-unrelated peptides [15–23].

In this study, we describe a web server named SAROTUP, an acronym for "Scanner And Reporter Of Target-Unrelated Peptides". SAROTUP was coded in Perl as a CGI program and can be freely accessed and used to scan peptides acquired from phage display experiments. It is capable of finding, reporting, and precluding possible target-unrelated peptides, which is very helpful for the development of mimotope-based diagnostics, therapeutics, and vaccines. The power and efficiency of SAROTUP were also demonstrated by preliminary tests in the present study.
## 2. Materials and Methods
### 2.1. Compilation of TUP Motifs
Recently, Menendez and Scott reviewed a collection of target-unrelated peptides recovered in the screening of phage-displayed random peptide libraries with antibodies [12]. They divided their collection into several categories according to the component of the screening system to which the target-unrelated peptides bind, and derived one or more TUP motifs for each category. Very recently, Brammer et al. reported a completely new type of target-unrelated peptide [13]. In the review of Menendez and Scott, target-unrelated selection is due to binding to contaminants or to components other than the target; in the report of Brammer et al., however, target-unrelated selection is due to a coincident point mutation in the phage library [12, 13]. We compiled a set of 23 TUP motifs from these two references [12, 13], including 12 motifs specific for the capturing agents, 5 motifs specific for the constant region of the antibody, 3 motifs specific for the screening solid phase, 2 motifs specific for contaminants in the target sample, and 1 motif for a mutation in the phage library (Table 1). All motifs are presented as patterns in Prosite format [24].

Table 1
Known patterns of target-unrelated peptides.
| TUP category | TUP pattern | Mechanism in brief |
|---|---|---|
| Capturing agents | H-P-[QM], G-D-[WF]-x-F, W-x-W-L, E-P-D-W-[FY], D-V-E-x-W-[LIV] | Binding to streptavidin |
| Capturing agents | W-x-P-P-F-[RK] | Binding to biotin |
| Capturing agents | W-[TS]-[LI]-x(2)-H-[RK] | Binding to Protein A |
| Capturing agents | R-T-[LI]-[TS]-K-P, [LFW]-x-F-Q, W-I-S-x(2)-D-W, Q-[LV]-[LV]-Q, RTYK | Binding to secondary antibody |
| Constant region of antibody (the target) | S-S-[IL], GELVW, G-[LI]-T-D-[WY], [RHK]-P-S-P, P-S-P-[RK] | Binding to the Fc fragment |
| Screening solid phase | W-x(2)-W, WHWRLPS, F-H-x(2)-W | Binding to plastic |
| Contaminants in the target sample | F-H-E-x-W-P-[ST] | Binding to contaminant bovine serum albumin |
| Contaminants in the target sample | QSYP | Binding to contaminant bovine IgG |
| Phage mutation | HAIYPRH | Growing faster than other phages |
### 2.2. Implementation of SAROTUP
SAROTUP was implemented as a free online service, powered by Apache and Perl. Three pages were designed and integrated into a tabbed web interface styled with cascading style sheets (CSS). The core program of SAROTUP is sar.pl, a CGI script coded in Perl. In this script, the 23 TUP motifs are converted to regular expressions, which are then used to match each input peptide sequence.
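The motif-to-regex conversion lends itself to a compact illustration. The sketch below shows one plausible way to turn the Prosite-style patterns of Table 1 into Perl regular expressions and match them against peptides; it handles only the pattern syntax actually used in Table 1 (dashes, x, x(n), and bracketed residue sets) and is our reconstruction, not the actual sar.pl code.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Convert a Prosite-style pattern (e.g., "W-x(2)-W") into a Perl regex:
# dashes separate positions, "x" matches any residue, "x(n)" repeats it,
# and bracketed sets such as [QM] are already valid character classes.
sub prosite_to_regex {
    my ($pattern) = @_;
    (my $re = $pattern) =~ s/-//g;    # drop position separators
    $re =~ s/x\((\d+)\)/.{$1}/g;      # x(2) -> any two residues
    $re =~ s/x/./g;                   # x    -> any single residue
    return qr/$re/;
}

# Three plastic-binding motifs from Table 1
my @motifs  = ('W-x(2)-W', 'F-H-x(2)-W', 'WHWRLPS');
my @regexes = map { prosite_to_regex($_) } @motifs;

for my $pep (qw(NWPRWWEEFVDKHSS QYNLSSRALK)) {
    my ($hit) = grep { $pep =~ $regexes[$_] } 0 .. $#regexes;
    print "$pep\t",
        (defined $hit ? "possible TUP ($motifs[$hit])" : "no known motif"),
        "\n";
}
```

Run on these two peptides, the sketch flags NWPRWWEEFVDKHSS (which contains the plastic-binding motif W-x(2)-W) and passes QYNLSSRALK.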
### 2.3. Construction of Test Data Sets
We constructed two test data sets from [12, 13, 15–23, 25, 26]. The first data set contains 8 cases; 6 of them are sourced from the test cases used by extant programs for mimotope-based epitope mapping [15–23], and the other 2 are case studies published recently [25, 26]. As shown in Table 2, the target of each case in the first data set is a monoclonal antibody, and the structure of the corresponding antigen-antibody complex has been resolved; the structural epitope derived from the complex serves as the gold standard for evaluation. For each case, there are one or more panels of peptides recovered by phage display; these peptides have been used in mimotope-based epitope mapping by other researchers. We scanned each panel of peptides with SAROTUP. If target-unrelated peptides were found, a new panel excluding the TUPs was produced. The old and the new panels of peptides were then used to predict the epitope using Mapitope or PepSurf [15, 21, 22]. Finally, the results were compared to show whether SAROTUP could improve the performance of mimotope-based epitope mapping.

Table 2
A summary of the first test data set for SAROTUP.
| Target | Template | Complex | Peptides | Source |
|---|---|---|---|---|
| 17b | HIV gp120 envelope glycoprotein (gp120) | 1GC1 | 11 | [15] |
| trastuzumab | human receptor tyrosine-protein kinase erbB-2 (HER2) | 1N8Z | 5 | [20] |
| 82D6A3 | human von Willebrand factor (vWF) | 2ADF | 5 | [19] |
| 13b5 | HIV-1 capsid protein p24 | 1E6J | 14 | [15] |
| BO2C11 | human coagulation factor VIII | 1IQD | 27 | [19] |
| cetuximab | human epidermal growth factor receptor | 1YY9 | 4 | [20] |
| 80R | SARS-coronavirus spike protein S1 | 2GHW | 42 + 18 | [26] |
| b12 | HIV gp120 envelope glycoprotein (gp120) | 2NY7 | 2 + 32 + 19 | [25] |

The second data set is composed of 100 peptides in raw sequence format and has two groups: the first group has 77 sequences compiled from the first data set and bearing no known TUP motifs; the second group has 23 sequences sourced from [12, 13] and bearing various TUP motifs. The mixture of the two groups forms the second data set, which is used as the sample input and can be used to evaluate the efficiency of SAROTUP.
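The scan-and-exclude step described above is simple to sketch. The following Perl fragment partitions a peptide panel into reported TUP candidates and a cleaned panel written to disk; the two hard-coded regexes stand in for the full 23-motif set, and the file name clean_panel.txt is our choice for illustration, not SAROTUP's.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @panel   = qw(NLRSTSFFELWAKWP QYNLSSRALK QFDLSTRRLK NWPRWWEEFVDKHSS);
my @regexes = (qr/W.{2}W/, qr/FH.{2}W/);    # plastic-binding motifs (Table 1)

# Partition the panel: anything matching a known TUP motif is reported,
# everything else goes into the new TUP-free panel.
my (@tup, @clean);
for my $pep (@panel) {
    if (grep { $pep =~ $_ } @regexes) { push @tup,   $pep }
    else                              { push @clean, $pep }
}

print "Possible TUP: @tup\n";                  # reported in a result table
open my $fh, '>', 'clean_panel.txt' or die "open: $!";
print {$fh} "$_\n" for @clean;                 # downloadable cleaned panel
close $fh;
```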
## 3. Results and Discussion
### 3.1. Web Interface of SAROTUP
As a free online service, the web interface of SAROTUP has been implemented as a tabbed web page. The left tab is the default page, providing a brief introduction to the web service. The right tab is a more detailed help page. Clicking the middle tab displays a web form. The upper section of the form is for basic input (Figure 1). Users can either paste a set of peptide sequences into the text box or upload a sequence file to the SAROTUP server for scanning. As shown in Figure 1, a panel of peptides in raw sequence format taken from the b12 test case was pasted into the text box. Besides raw sequences, SAROTUP also supports peptides in FASTA format. However, only the standard IUPAC one-letter amino acid codes are accepted at present.

Figure 1
Snapshot of the upper section of SAROTUP.

The lower section of the form offers a series of options (Figure 2). It includes three drop-down lists for the screening target, the screened library, and the screening solid phase, respectively. It also has two groups of check boxes, for the capturing reagents and for contaminants in the target sample or screening system. By default, SAROTUP scans each peptide against all 23 known TUP motifs; however, users can customize the scan according to their experiment in this section.

Figure 2
Snapshot of the lower section of SAROTUP.

After users submit their request, the scanning results of SAROTUP are displayed on the middle tabbed page. If any target-unrelated peptides are found, they are reported in a table. At the same time, a new panel of peptides excluding the target-unrelated peptides is produced and can be downloaded from a hyperlink created by the SAROTUP server (Figure 3). The file containing the new panel of peptides is stored on the server for a month and then automatically deleted.

Figure 3
Snapshot of the SAROTUP result page. Target-unrelated peptides in the b12 test case are reported in the table. The new panel of peptides excluding the target-unrelated peptides can be downloaded from the hyperlink.

We have tested SAROTUP in Internet Explorer (version 6.0), Mozilla Firefox (version 3.5.2), and Google Chrome (version 3.0). Although SAROTUP looks slightly different in different browsers, it works normally in all browsers tested.
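The input handling described above reduces to a few lines of parsing and validation. The Perl sketch below accepts peptides pasted as raw one-per-line sequences or as single-line FASTA records, and rejects any sequence containing characters outside the 20 standard one-letter residue codes; it is an illustration under those assumptions, not SAROTUP's actual form handler.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Pasted input mixing a FASTA record with raw sequences; the last entry
# contains non-standard characters and should be rejected.
my $input = ">pep1\nNWPRWWEEFVDKHSS\nQYNLSSRALK\nVWQRWQKSB1\n";

my @peptides;
for my $line (split /\r?\n/, $input) {
    next if $line =~ /^\s*$/;    # skip blank lines
    next if $line =~ /^>/;       # skip FASTA headers (single-line records)
    $line =~ s/\s+//g;
    if ($line =~ /^[ACDEFGHIKLMNPQRSTVWY]+$/i) {
        push @peptides, uc $line;
    } else {
        warn "Rejected non-standard sequence: $line\n";
    }
}
print "Accepted: @peptides\n";
```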
### 3.2. Power of SAROTUP
As shown in Table 2, the first test data set has 11 panels of peptides acquired from phage display libraries screened with 8 targets. Of the 11 panels of peptides SAROTUP scanned, 3 contained target-unrelated peptides, from the cetuximab, 80R, and b12 test cases, respectively (Table 3). This result suggests that it is not rare for target-unrelated peptides to sneak into biopanning results and then be taken as mimotopes in subsequent studies. In all, 7 target-unrelated peptides were found: 4 of them bind to plastic, and the other 3 bind to the Fc fragment (Table 3).

Table 3
Target-unrelated peptides in the first test data set.
| Target | Target-unrelated peptide | Mechanism |
|---|---|---|
| cetuximab | VWQRWQKSYV | Binding to plastic |
| 80R | CESSLCLMYSLGPPA | Binding to the Fc fragment |
| 80R | YSTPSSILDTHPLYK | Binding to the Fc fragment |
| b12 | NLRSTSFFELWAKWP | Binding to plastic |
| b12 | NWPRWWEEFVDKHSS | Binding to plastic |
| b12 | NWPRWEEFVDKHSS | Binding to plastic |
| b12 | ICFPFNTRYCIFAMMVSSLVF | Binding to the Fc fragment |

For the above 3 cases, the genuine epitopes recognized by the cetuximab, 80R, and b12 monoclonal antibodies were compiled according to the CED records [27] and PDBsum entries [28]. Mapitope or PepSurf [15, 21, 22] was used to perform mimotope-based epitope prediction with or without the SAROTUP procedure. For the Mapitope and PepSurf algorithms, the library type was set to "random", the stop codon modification was set to "none", and all other options were left at their defaults. The cluster with the best score was taken as the predicted epitope. In the cetuximab case, PepSurf was used because there are only four peptides in the panel (three after TUP exclusion), statistically too few for Mapitope. In the 80R and b12 cases, Mapitope was used because many peptides in these cases exceed the length limit of PepSurf, that is, 14 amino acids. If a predicted residue is identical with a residue in the true epitope, it is counted as a true positive (Table 4).

Table 4
Mimotope-based epitope prediction with or without the SAROTUP procedure.
| Target | Prediction without SAROTUP procedure | Genuine epitope | Prediction with SAROTUP procedure |
|---|---|---|---|
| cetuximab | N134, E136, S137, I138, Q139, W140, R141, Q164, K185, L186, T187, K188 | P349, R353, L382, Q384, Q408, H409, Q411, F412, V417, S418, I438, S440, G441, K443, K465, I467, S468, N469, G471, N473 | K375, I401, R403, R405, T406, K407, Q408, H409, G410, Q411, F412, D436 |
| 80R | H445, V458, P459, F460, S461, P462, D463, G464, K465, P466, C467, T468, P469, P470, A471, L472, N473, C474, Y475 | R426, S432, Y436, K439, Y440, Y442, P469, P470, A471, L472, C474, Y475, W476, L478, N479, D480, G482, Y484, T485, T486, T487, G488, Y491, Q492 | L443, R444, H445, I455, S456, N457, V458, P459, F460, S461, P462, D463, G464, K465, P466, C467, T468, P469, P470, A471, L472, N473, C474, Y475 |
| b12 | I108, C109, S110, L111, D113, Q114, S115, L116, K117, P118, C119, V120, P206, K207, V208, S209, F210, E211, P212, I213, P214, I251, R252, P253, I424, N425, M426, W427, C428, K429, V430 | N280, A281, S365, G366, G367, D368, P369, I371, V372, T373, Y384, N386, P417, R419, V430, G431, K432, T455, R456, G472, G473, D474, M475 | W95, T232, F233, N234, T236, S257, L260, N262, G263, S264, L265, A266, E267, E268, E269, V270, V271, T290, S291, S364, S365, G366, G367, D368, P369, E370, I371, V372, T373, T450, S481 |

As shown in Table 4, the number of true positives improved from zero to four in the cetuximab case with the SAROTUP procedure. In the b12 case, the number of true positives increased from one to eight. SAROTUP did not improve the number of true positives in the 80R case when the parameters were the same as in the cetuximab and b12 cases. However, when the distance parameter was adjusted from the default (9 Å) to 10 Å, SAROTUP did increase the number of true positive residues from eight to eleven. These results indicate that (1) mimotope-based epitope prediction is interfered with if target-unrelated peptides are taken as mimotopes and that (2) SAROTUP can improve the performance of mimotope-based epitope mapping by cleaning the input data.

We also scanned the second data set to evaluate the efficiency of SAROTUP. The second data set has 100 peptides, varying from 6 to 22 residues in length. Suppose that manually matching one pattern against one peptide takes 10 seconds; checking all 23 patterns against all 100 peptides then requires 2,300 comparisons, or 23,000 seconds, so it would take a researcher more than 6 hours to look through the second data set for target-unrelated peptides, even at a constant pace. In contrast, it took SAROTUP only one second to complete this work, producing both a table of target-unrelated peptides and a new panel of peptides excluding the TUPs. It is true that some target-unrelated peptides can be identified through control and binding-competition experiments; however, running SAROTUP first will save considerable labor, money, and time for researchers in this area.
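The timing claim is easy to probe with a rough benchmark. The Perl sketch below times a regex scan over a synthetic 100-peptide set matching the second data set in size and length range; the three regexes stand in for the full 23-motif set, and absolute timings are of course machine-dependent.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

# Build a synthetic set of 100 random peptides, 6-22 residues long,
# mirroring the size and length range of the second data set.
my @aa = split //, 'ACDEFGHIKLMNPQRSTVWY';
my @peptides;
for (1 .. 100) {
    my $len = 6 + int rand 17;    # lengths 6..22
    push @peptides, join '', map { $aa[ int rand @aa ] } 1 .. $len;
}

# Three stand-ins for the 23 TUP motifs (plastic binders from Table 1)
my @regexes = (qr/W.{2}W/, qr/FH.{2}W/, qr/WHWRLPS/);

my $t0   = [gettimeofday];
my $hits = grep { my $p = $_; grep { $p =~ $_ } @regexes } @peptides;
printf "Scanned %d peptides in %.4f s; %d possible TUPs\n",
    scalar @peptides, tv_interval($t0), $hits;
```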
### 3.3. Extending SAROTUP
Although the targets of all the tests described above were monoclonal antibodies, SAROTUP can be customized and used to scan results from phage display experiments with other targets, such as enzymes and receptors, because their screening systems are similar. For the same reason, we expect that SAROTUP will extend to other similar in vitro evolution techniques, such as ribosome display [29–31], yeast display [32], and bacterial display [33–35].

Furthermore, SAROTUP will benefit not only mimotope-based epitope mapping but also the development of new diagnostics, therapeutics, and vaccines. Target-unrelated peptides are not appropriate candidates for mimotope-based diagnostics, therapeutics, and vaccines, since they mimic components or contaminants of the screening system rather than the target. It is therefore reasonable to find and exclude possible target-unrelated peptides from the candidate list for new diagnostics, therapeutics, and vaccines. Take cetuximab as an example. Riemer et al. screened a phage-displayed random peptide library with cetuximab and obtained four different peptides: QFDLSTRRLK, QYNLSSRALK, VWQRWQKSYV, and MWDRFSRWYK [36]. As described above, we scanned these four "mimotopes" with SAROTUP, and the result suggested that the peptide VWQRWQKSYV might be a TUP. Indeed, the dot blot analysis of Riemer et al. showed that QYNLSSRALK bound cetuximab with high affinity, whereas VWQRWQKSYV was less reactive [36]. Trying to develop a mimotope vaccine, Riemer et al. synthesized two vaccine constructs, with the peptides QYNLSSRALK and VWQRWQKSYV, respectively. After immunizing mice with these constructs, they found that both cetuximab and the antibodies induced by the QYNLSSRALK vaccine construct inhibited the growth of A431 cancer cells significantly. The inhibition by the antibodies induced by the VWQRWQKSYV vaccine construct, however, was not statistically significant when compared with the inhibition caused by the isotype control antibody [36].
### 3.4. Cautions in Using SAROTUP
SAROTUP must be used with caution, since at present it is a tool based only on pattern matching. There are many target-unrelated peptides bearing no known motifs [12]. As these TUPs are not yet embedded in SAROTUP, a true TUP may go undetected. To reduce this kind of false negative, we are constructing a database of target-unrelated peptides and mimotopes; besides the motif-based search, a database-based search can find known TUPs that lack known motifs.

It is also possible that a peptide SAROTUP predicts to be target-unrelated is actually target-specific. To decrease this kind of false positive, users should customize the scan according to their experiment in the advanced-options section. For example, the user should select "antibody without Fc fragment" as the target if a Fab was used in biopanning; this prevents SAROTUP from reporting peptides bearing Fc-binding motifs as TUPs. As described above, SAROTUP will in the future also provide an exact-match tool based on database search. In that setting, a match would mean that different research groups have isolated the same peptide with a variety of targets; such a peptide can hardly be a true target binder. Thus, the false positive rate of SAROTUP can be decreased further when this new feature becomes available.

Finally, we must point out that the controlled experiment is still the gold standard for distinguishing TUPs from specific mimotopes. The report of SAROTUP should be verified by experiment.
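In code, this kind of customization amounts to filtering the motif set by category before scanning. The Perl sketch below illustrates the idea; the category labels and motif subsets are taken from Table 1, but the option handling is invented for illustration and is not SAROTUP's actual CGI code.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Motifs grouped by the screening-system component they bind (Table 1).
my %motifs_by_category = (
    'Fc fragment' => [qr/SS[IL]/, qr/GELVW/],      # constant-region binders
    'Solid phase' => [qr/W.{2}W/, qr/WHWRLPS/],    # plastic binders
);

# Hypothetical user option: "antibody without Fc fragment" was selected,
# so Fc-binding motifs must not be treated as TUP evidence.
my $target_is_fab = 1;

my @active;
for my $cat (sort keys %motifs_by_category) {
    next if $target_is_fab and $cat eq 'Fc fragment';
    push @active, @{ $motifs_by_category{$cat} };
}

# YSTPSSILDTHPLYK bears only an Fc-binding motif (S-S-[IL]) and now passes;
# VWQRWQKSYV still matches the plastic-binding motif W-x(2)-W.
for my $pep (qw(YSTPSSILDTHPLYK VWQRWQKSYV)) {
    my $is_tup = grep { $pep =~ $_ } @active;
    print "$pep: ", ($is_tup ? 'possible TUP' : 'passed'), "\n";
}
```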
## 4. Conclusions
SAROTUP, a web application for scanning, reporting, and excluding target-unrelated peptides, has been coded in Perl. It helps researchers predict epitopes more accurately from mimotopes and is also useful in the development of diagnostics, therapeutics, and vaccines. To our knowledge, SAROTUP is the first web tool for TUP detection and data cleaning. The community can conveniently access SAROTUP through http://immunet.cn/sarotup/.
TargetPrediction without SAROTUP procedureGenuine epitopePrediction with SAROTUP procedurecetuximabN134, E136, S137, I138, Q139, W140, R141, Q164, K185, L186, T187, K188P349, R353, L382, Q384, Q408, H409, Q411, F412, V417, S418, I438, S440, G441, K443, K465, I467, S468, N469, G471, N473K375, I401, R403, R405, T406, K407,Q408, H409, G410, Q411, F412, D43680RH445, V458, P459, F460, S461, P462, D463, G464, K465, P466, C467, T468,P469, P470, A471, L472, N473, C474, Y475R426, S432, Y436, K439, Y440, Y442, P469, P470, A471, L472, C474, Y475, W476, L478, N479, D480, G482, Y484, T485, T486, T487, G488,Y491, Q492L443, R444, H445, I455, S456, N457, V458, P459, F460, S461, P462, D463, G464, K465, P466, C467, T468,P469, P470, A471, L472, N473, C474, Y475b12I108, C109, S110, L111, D113, Q114, S115, L116, K117, P118, C119, V120, P206, K207, V208, S209, F210, E211, P212, I213, P214, I251, R252, P253, I424, N425, M426, W427, C428, K429,V430N280, A281, S365, G366, G367, D368, P369, I371, V372, T373, Y384, N386, P417, R419, V430, G431, K432, T455, R456, G472, G473, D474, M475W95, T232, F233, N234, T236, S257, L260, N262, G263, S264, L265, A266, E267, E268, E269, V270, V271, T290, S291, S364,S365, G366, G367, D368, P369, E370, I371, V372, T373, T450, S481As shown in Table4, the number of true positives improved from zero to four in the cetuximab case with SAROTUP procedure. When it came to the b12 case, the number of true positives increased from one to eight. SAROTUP did not improve the number of true positives in the 80R case when the parameters are same to the cetuximab and b12 cases. However, when the distance parameter was adjusted from default (i.e., 9 Å) to 10 Å, SAROTUP did increase the number of true positive residues from eight to eleven. These results indicate: (1) epitope prediction based on mimotope will be interfered if target-unrelated peptides are taken as mimotopes; (2) SAROTUP can improve the performance of mimotope based epitope mapping through cleaning the input data.We also scanned the second data set to evaluate the efficiency of SAROTUP. The second data set has 100 peptides, varying from 6 to 22 residues long. Suppose that matching each pattern to each peptide manually costs 10 seconds, then it would take a researcher more than 6 hours (23,000 seconds) to look through the second data set for target-unrelated peptides, even if he is as prompt during the whole period. However, it took only one second for SAROTUP to complete this work. Besides, a table of target-unrelated peptides and a new panel of peptides excluding TUP was produced at the same time by SAROTUP. It is true that some target-unrelated peptides can be identified through control and binding competition experiments. However, using SAROTUP first will certainly save a lot of labor, money, and time for researchers in this area.
## 3.3. Extending of SAROTUP
Although the target of all tests described previously were monoclonal antibodies, SAROTUP can be customized and used in scanning the results from phage display technology using other targets such as enzymes and receptors. This is because their screening systems are similar. For the same reason, we can also expect that SAROTUP will extend its use to other similar in vitro evolution techniques, such as ribosome display [29–31], yeast display [32], and bacterial display [33–35].Furthermore, SAROTUP will not only benefit the mimotope-based epitope mapping, but also the development of new diagnostics, therapeutics, and vaccines. Target-unrelated peptides are not appropriate candidates for mimotope based diagnostics, therapeutics, and vaccines, since they are mimics to components or contaminants of the screening system rather than target. Therefore, it is reasonable to find and exclude possible target-unrelated peptides from the candidate list of new diagnostics, therapeutics, and vaccines. Take the cetuximab as an example. Riemer et al. screened a phage-displayed random peptides library with the cetuximab and got four different peptides, that is, QFDLSTRRLK, QYNLSSRALK, VWQRWQKSYV, and MWDRFSRWYK [36]. As described previously, we scanned the four “mimotopes” with SAROTUP and the result suggested that the peptide VWQRWQKSYV might be a TUP. Indeed, the dot blot analysis of Riemer et al. showed that QYNLSSRALK bound the cetuximab with high affinity but VWQRWQKSYV was less reactive with the cetuximab [36]. Trying to develop a mimotope vaccine, Riemer et al. synthesized two-vaccine constructs with the peptide QYNLSSRALK and VWQRWQKSYV, respectively. After immunization mice with these constructs, they found that either the cetuximab or the antibodies induced by the QYNLSSRALK vaccine construct inhibited the growth of A431 cancer cells significantly. The inhibition of the antibodies induced by the VWQRWQKSYV vaccine construct however, was not statistically significant when compared with the inhibition caused by the isotype control antibody [36].
## 3.4. Cautions in Using SAROTUP
SAROTUP must be used with caution since it is a tool only based on pattern matching at present. There are a lot of target-unrelated peptides bearing no known motifs [12]. As these TUPs are not embedded in SAROTUP at present, it is possible that a true TUP cannot be detected by SAROTUP. To reduce this kind of false negatives, we are constructing a database for target-unrelated peptides and mimotopes. Besides the motif-based search, the database-based search can find out the known TUP without known motifs.It is also possible that a SAROTUP predicted target unrelated peptide is actually target-specific. To decrease this kind of false positives, the users should customize the scan according to their experiment at the section of advance options. For example, the user should select “antibody without Fc fragment” as the target if Fab was used in biopanning; this will prevent SAROTUP from reporting peptides bearing the Fc-binding motifs as TUP. As described above, SAROTUP in future will also provide an exact match tool based on database search. In this way, a match might mean that different research groups have isolated the same peptide with a variety of targets. It is obvious that this peptide can hardly be a true target binder. Thus, the false positive rate of SAROTUP can be decreased further when its new feature become available.At last, we must point out that the controlled experiment is still the gold standard to distinguish TUPs from the specific mimotopes. The report of SAROTUP should be verified with experiment.
## 4. Conclusions
SAROTUP, a web application for scanning, reporting, and excluding target-unrelated peptides, has been coded in Perl. It helps researchers predict epitopes more accurately based on mimotopes. It is also useful in the development of diagnostics, therapeutics, and vaccines. To our knowledge, SAROTUP is the first web tool for TUP detection and data cleaning. It is very convenient for the community to access SAROTUP through http://immunet.cn/sarotup/.
---
*Source: 101932-2010-03-21.xml*
# Effects of Treated Cow Dung Addition on the Strength of Carbon-Bearing Iron Ore Pellets
**Authors:** Qing-min Meng; Jia-xin Li; Tie-jun Chun; Xiao-feng He; Ru-fei Wei; Ping Wang; Hong-ming Long
**Journal:** Advances in Materials Science and Engineering
(2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1019438
---
## Abstract
It is of particular interest to use biomass as an alternative source of fuel in direct-reduction ironmaking to ease the current reliance on fossil fuel energy. The influence of cow dung addition on the strength of carbon-bearing iron ore pellets composed of cow dung, iron ore, anthracite, and bentonite was investigated: the quality of green and dry pellets was evaluated based on FTIR analysis, and the mechanism of strength variation of the reduced pellets was investigated by analysing the phase composition and microstructure using XRD and SEM. The results show that cow dung addition decreased the green pellet strength due to expansion of the amorphous region of the cellulose in the cow dung; however, the dry pellet strength increased substantially. In the process of reduction roasting, it was found that cow dung addition can promote aggregation of iron crystals and increase the density of the pellets, resulting in increased strength of the reduction roasted pellets, while excessive cow dung addition resulted in lower strength.
---
## Body
## 1. Introduction
With the gradual depletion of raw materials for the blast furnace, such as coke and quality iron ore, direct-reduction and smelting reduction technologies using gas, liquid fuels, and noncoking coal as energy sources were developed as cleaner, more environmentally friendly alternatives, which have been applied widely around the world [1–3]. However, the fuel sources of noncoking coal ironmaking technology have not fundamentally changed, and there is a gap in product quality and energy consumption compared to the blast furnace. The use of biomass as an alternative fuel source, to further reduce the consumption of fossil fuels and the emission of carbon in the steel-making industry, has become a hot topic among scholars [4–8]. Strezov [9] investigated the mechanisms of iron ore reduction with biomass wood waste. The results showed that the iron ore was successfully reduced to predominantly metallic iron when up to 30 wt% of biomass was introduced into the mixture; reduction commenced at approximately 943 K and was almost complete at 1473 K. Wei et al. [10] studied the characteristics and kinetics of iron oxide reduction by carbon in biomass composites. The results showed that iron oxide can be reduced by biomass very rapidly, and the degree of metallisation and reduction increased with temperature.

Iron oxide reduction by carbon in biomass can be divided into two stages, namely, reduction by volatile matter followed by reduction by nonvolatile carbon. The reduction times of the two stages both decrease with increasing temperature. Liu et al. [11] researched the reduction of carbon-bearing pellets, using reducing agents prepared from the carbonization products of rice husk, peanut shells, and wood chips. The results showed that the carbon-bearing pellets could be reduced rapidly between 1473 K and 1573 K in about 15 to 20 minutes, while a higher carbon content and an appropriate volatile content in the biological carbon were beneficial to pellet reduction. Han et al. [12] studied the effect of biomass on the reduction of carbon-bearing pellets using charcoal, bamboo charcoal, and straw as reductants. The results showed that the biomass reductants had little effect on the metallisation rate, but certain biomass reductants had a substantial influence on the strength and volumetric shrinkage of the pellets. The compressive strength of pellets with straw was relatively high, while the strength of pellets with charcoal or bamboo charcoal was low.

Biomass includes all animals, plants, and microorganisms, including organic waste residues. Thus, maximising the use of biomass could potentially relieve the global energy crisis. The large amount of animal dung produced as a by-product of the agricultural industry is causing an increasingly serious environmental problem [13], with the stock of cow dung topping the list. Researchers have made extensive studies regarding the issue of cow dung utilisation. Cow dung reclamation technology, in areas such as energy, composting, and animal feed, has achieved considerable economic and social benefits [14–17]. In addition, researchers are also exploring applications of cow dung in areas such as new preparation methods for biomass carbon materials [18–20] and solid waste disposal [21, 22]. Similar to other plant biomass, cellulose, hemicellulose, and lignin are the main chemical constituents of dry cow dung.
The similarity of organic components between cow dung and plant biomass makes it feasible to use cow dung as an alternative reducing agent in iron ore reduction, and this has been demonstrated in various studies. Rath et al. [23] used cow dung as a reductant in the reduction roasting of an iron ore slime containing 56.2% Fe. A concentrate of ~64% Fe, with a recovery of ~66 wt%, was obtained from the reduced product after being subjected to low intensity magnetic separation. Under similar conditions, a concentrate of ~66% Fe, with a recovery of only 35 wt%, was obtained using conventional charcoal as the reductant (93.5% fixed carbon and 1.2% volatile matter), which demonstrated that cow dung was the better reductant.

The key purpose of this study is to investigate the effect of cow dung addition on the strength of carbon-bearing iron ore pellets and its mechanism. The influence of cow dung addition on the quality of green and dry pellets is evaluated based on FTIR analysis, and the mechanism of strength variation of reduction roasted pellets is investigated by analysing the phase composition and microstructure using XRD and SEM, to provide a benchmark for further utilisation of cow dung in direct-reduction ironmaking.
## 2. Materials and Methods
### 2.1. Materials
Carbon-bearing iron ore pellets were prepared by pressing a mixture consisting of iron ore, anthracite, bentonite, and different proportions of cow dung. The chemical composition of the iron ore used in this study is shown in Table 1, while the proximate analyses of the anthracite and the cow dung used as the reductant in this study are given in Table 2. The cow dung was obtained from the Mengniu Modern Animal Husbandry (Group), Maanshan Co. First, all of the raw materials were dried at 383 K for 24 h, individually ground in a ball mill to a passing size of 74 μm for the iron ore and 200 μm for the reducing agents. Mixtures of the ground materials were prepared according to the experimental plan given in Table 3. A little water was added, and 20 mm diameter pellets (molar ratio of Cfix/Oiron oxide = 1) were prepared by pressing at 10 MPa.

Table 1
Chemical composition of iron ore (wt%).

| TFe | FeO | SiO2 | Al2O3 | CaO | MgO | P | S |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 61.4 | 23.5 | 3.3 | 1.1 | 1.0 | 0.2 | 0.028 | 0.328 |

Table 2
Proximate analysis of reducing agent (wt%).

| Type of reducing agent | Fixed carbon | Ash | Volatile matter | P | S |
| --- | --- | --- | --- | --- | --- |
| Anthracite | 78.8 | 13.4 | 7.9 | 0.024 | 0.580 |
| Cow dung | 7.7 | 24.9 | 67.4 | 0.001 | 0.270 |

Table 3
Raw material ratio of the carbon-bearing iron ore pellets.

| Sample number | Iron ore | Anthracite | Cow dung | Bentonite | Cfix/Oiron oxide |
| --- | --- | --- | --- | --- | --- |
| 1 | 82.5% | 17.5% | 0.0% | 1.6% | 1.0 |
| 2 | 79.4% | 16.5% | 4.1% | 1.6% | 1.0 |
| 3 | 76.7% | 15.6% | 7.8% | 1.6% | 1.0 |
| 4 | 74.2% | 14.7% | 11.1% | 1.6% | 1.0 |
| 5 | 72.1% | 14.0% | 14.0% | 1.6% | 1.0 |
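The nominal molar ratio Cfix/Oiron oxide = 1 in Table 3 can be roughly checked from Tables 1 and 2. The sketch below is our own back-of-the-envelope calculation, assuming that all iron not present as FeO is present as Fe2O3, that only the oxygen bound to iron is reducible, and that the bentonite contributes nothing; the small deviation from 1.0 mainly reflects rounding of the tabulated percentages.

```python
# Rough check of the C_fix/O_(iron oxide) = 1 dosing in Table 3, using
# the assays in Tables 1 and 2. Assumptions (ours, not stated in the
# paper): iron not present as FeO is present as Fe2O3; only oxygen
# bound to iron is reducible; bentonite is ignored.
M_Fe, M_O, M_C = 55.845, 15.999, 12.011
M_FeO = M_Fe + M_O

TFe, FeO = 61.4, 23.5                        # wt% in iron ore (Table 1)
fc = {"anthracite": 78.8, "cow dung": 7.7}   # fixed carbon, wt% (Table 2)

# Oxygen bound to iron, grams per 100 g of ore
Fe_in_FeO = FeO * M_Fe / M_FeO
O_in_FeO = FeO - Fe_in_FeO
Fe_as_Fe2O3 = TFe - Fe_in_FeO
O_in_Fe2O3 = Fe_as_Fe2O3 * (3 * M_O) / (2 * M_Fe)
O_total = O_in_FeO + O_in_Fe2O3              # ~23.8 g per 100 g ore

def c_to_o_ratio(ore, anthracite, dung):
    """Molar ratio of fixed carbon to reducible oxygen for one mixture."""
    mol_C = (anthracite * fc["anthracite"] + dung * fc["cow dung"]) / 100 / M_C
    mol_O = ore * O_total / 100 / M_O
    return mol_C / mol_O

# Sample 2 of Table 3: 79.4% ore, 16.5% anthracite, 4.1% cow dung
print(round(c_to_o_ratio(79.4, 16.5, 4.1), 2))  # ~0.94, close to the nominal 1.0
```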
### 2.2. Characterization Techniques
Characterization studies were undertaken on the raw materials and some reduction roasted products. The FTIR spectra were obtained with a Nicolet 8700 spectrophotometer by adding 32 scans at a resolution of 4 cm−1, using KBr wafers containing about 0.5 g of sample, which had been dried at 393 K for 24 h before spectral analysis. XRD was carried out with a D8 Advance X-ray powder diffractometer using Cu-Kα radiation. The voltage and current of the machine were set at 60 kV and 80 mA, respectively, scanning from 20° to 80° using a step size of 0.02°. A scanning electron microscope (NOVA Nano SEM430, USA) equipped with EDS detector was used for the microstructure studies.
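For reference, the reported 2θ range and step size fix the number of measurement points in each XRD scan; the one-line check below is ours, not part of the original work.

```python
# Number of measurement points implied by the reported XRD scan settings.
two_theta_start, two_theta_end, step = 20.0, 80.0, 0.02  # degrees
n_points = int(round((two_theta_end - two_theta_start) / step)) + 1
print(n_points)  # 3001 points per scan
```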
### 2.3. Strength Tests and Reduction Roasting
The compressive strength of green, dry, and reduction roasted pellets was measured with an automatic compressive strength tester; each sample was tested 20 times under the same conditions and the results were averaged. A schematic of the reduction roasting experimental apparatus is shown in Figure 1. The furnace has a working temperature range of 298–1573 K, with ±1 K temperature control accuracy, and produces a 150 mm hot zone. The carbon-bearing pellets were placed in a Ni-Cr alloy basket which was hung over the hot zone, and the pellets were heated to 1523 K at a rate of 20 K/min. High-purity nitrogen gas was supplied at a constant flow rate of 1 L/min in the reaction tube. At the end of the experiment, the basket was quickly moved to the top of the furnace, with high-purity nitrogen gas continuing to be supplied until the pellets cooled to room temperature.

Figure 1
A schematic of the experimental apparatus for reduction roasting.
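The stated ramp rate also fixes the duration of the heating stage. Assuming the ramp starts from ambient temperature (298 K, the lower end of the furnace's working range), the time to reach the roasting temperature is about an hour, as the short sketch below shows.

```python
# Heating-stage duration implied by the reported schedule, assuming the
# ramp starts at ambient temperature (298 K).
start_K, target_K, rate_K_per_min = 298.0, 1523.0, 20.0
ramp_minutes = (target_K - start_K) / rate_K_per_min
print(f"Ramp to {target_K:.0f} K takes about {ramp_minutes:.0f} min")  # ~61 min
```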
## 3. Results and Discussion
### 3.1. Effects of Treated Cow Dung Addition on the Cold Strength of Carbon-Bearing Pellets
The cold strength test results of green and dry pellets with different proportions of cow dung addition are shown in Figure 2. The bar charts show that the strength of green and dry pellets varies with cow dung addition, but there is no clear correlation between the average compressive strength and the ratio of cow dung to anthracite added. The average strength of the green pellets containing cow dung decreased by 8–16% relative to pellets with no dung. For example, the strength of green pellets with no dung was 10.1 N/pellet, while green pellets with a dung-to-anthracite ratio of 1 : 4 (containing 4.1% cow dung) had a strength of 8.5 N/pellet and green pellets with a dung-to-anthracite ratio of 3 : 4 (containing 11.1% cow dung) had a strength of 8.6 N/pellet. In contrast, the average strength of dry pellets containing cow dung was between 33.5% and 56.6% higher than that of dry pellets with no cow dung. For example, the average strength of dry pellets with no cow dung was 18.2 N/pellet, while pellets with a dung-to-anthracite ratio of 1 : 4 had an average strength of 27.7 N/pellet.

Figure 2
Cold strength of carbon-bearing pellets with different proportions of cow dung. (a) Green pellet; (b) dry pellet.
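The percentage changes quoted above can be verified from the example strengths read from Figure 2; the sketch below simply recomputes them.

```python
# Recomputing the quoted percentage changes from the example strengths
# given in the text (all values in N/pellet).
green = {"no dung": 10.1, "4.1% dung": 8.5, "11.1% dung": 8.6}
dry = {"no dung": 18.2, "4.1% dung": 27.7}

def pct_change(new: float, base: float) -> float:
    return 100.0 * (new - base) / base

print(round(pct_change(green["4.1% dung"], green["no dung"]), 1))  # -15.8, within the 8-16% drop
print(round(pct_change(dry["4.1% dung"], dry["no dung"]), 1))      # 52.2, within the 33.5-56.6% rise
```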
The FTIR spectral analyses of different pellet samples are provided in Figure 3, which shows that the spectrum of the iron ore-anthracite sample (Figure 3(a)) aligns closely with the spectrum of the iron ore-anthracite-bentonite sample (Figure 3(b)). The main absorption peaks appear at 3413 cm−1 (stretching vibration centre of hydroxy-OH), 1619 cm−1 (antisymmetric vibration peak of carboxyl-COOH), and 1032 cm−1 (Si-O bond stretching vibration of silicate impurities). This demonstrates that bentonite is not chemically adsorbed with the raw materials such as iron ore and anthracite, and it can be assumed that the strength of these green pellets with no cow dung was maintained by alternative strength mechanisms such as capillary and viscous forces.

Figure 3
FTIR spectra of carbon-bearing pellets, comparing different raw materials.

A SEM backscattered image of a cross-sectioned dry pellet sample with no added cow dung is shown in Figure 4(a). The bentonite, iron concentrate, and finely pulverised coal particles have formed a gel, which has surrounded or coated the particles. The gel has a bridging effect and increases the bridge fluid viscosity and surface tension, strengthening the capillary and viscous forces that bind the particles in the green pellets [24]. In the drying process, the bentonite gel draws the solid particles closer together, increasing the area of particle contact, strengthening the intermolecular forces, and improving the strength of the dry pellet [25].

Figure 4
SEM backscattered images of different carbon-bearing pellets after drying. (a) Dry pellet sample without cow dung; (b) dry pellet sample with 14.0% cow dung.

The FTIR spectra of a pellet sample containing anthracite, iron ore, and cow dung are shown in Figure 3(c), while those of a pellet sample containing anthracite, iron ore, cow dung, and bentonite are shown in Figure 3(d). The main absorption peaks in Figure 3(c) appear at 3423 cm−1, 1631 cm−1, 1423 cm−1, and 1032 cm−1, while the main peaks in Figure 3(d) appear at 3420 cm−1, 1633 cm−1, and 1032 cm−1. The stretching vibration peak of hydroxy-OH and the antisymmetric peak of carboxyl-COOH have shifted and increased in intensity. Near 1423 cm−1, the lignin double bond or the hydroxy of carboxylic acid has generated an in-plane bending vibration peak, which suggests that chemical adsorption has probably occurred between the cellulose, hemicellulose, and free hydroxy of lignin and the iron ore.

A SEM backscattered image of a cross-sectioned dry pellet sample containing 14.0% cow dung is shown in Figure 4(b). It can be seen in the figure that the bentonite, iron ore, and finely pulverised coal have formed a gel, infilling or coating the particles, similar to that observed in the sample that did not contain cow dung (Figure 4(a)). The cellulose and hemicellulose in the particles of cow dung have a rope-shaped arrangement that reinforces the structure of the dry pellets and results in greater strength compared with the dry pellets that did not contain any cow dung.
### 3.2. Effects of Treated Cow Dung Addition on the Strength of Carbon-Bearing Pellets after Reduction Roasting
The average strength of pellets that contained different proportions of cow dung after reduction roasting at 1523 K is given in Figure 5. The bar chart shows that the strength of the roasted sample with no cow dung was 2473 N/pellet. The pellets with cow dung additive had a higher strength, but the strength decreased with increasing cow dung addition. The pellets containing 4.1% cow dung had a strength of 3106 N/pellet after reduction roasting, which was the highest strength obtained (an increase of 25.6%).

Figure 5
Strength of different carbon-bearing pellets after reduction roasting at 1523 K.

The strength of reduction roasted pellets is controlled by the rate of formation of intergrown iron crystals, their abundance, and their physical structure, while the bentonite binder has little effect on pellet strength after roasting [26]. When a mixture of anthracite and cow dung is used as a reducing agent in carbon-bearing pellets, the volatile matter in the cow dung cracks at about 773 K and produces H2, CO, CO2, CH4, and other gases [27]. Reduction reactions may occur directly with H2, CO, and CH4, while CO (a strong reducing agent) is formed by the Boudouard reaction between C (originating from the anthracite and cow dung additives) and CO2. Devolatilisation generates pores within the carbon-bearing pellets, which increases their permeability to the reducing gases, promoting the rate of reduction and the formation of intergrown iron crystals.

The sizes of the carbon-bearing pellets before and after reduction roasting are compared in Figure 6, which shows that the diameter of the sample with no cow dung contracted by 19%, the sample with 7.8% cow dung contracted by 24%, and the sample containing 14.0% cow dung contracted by 28%. Thus, it can be concluded that pellet shrinkage increases with increasing cow dung content. The shrinkage of reduction roasted pellets is mainly the result of aggregation of intergrown iron crystals, but porosity and the amount of low melting-point slag present may also influence the extent of pellet shrinkage [28]. However, Figures 5 and 6 show that pellet shrinkage does not clearly correlate with higher pellet strength.

Figure 6
Size of the carbon-bearing pellets before and after reduction roasting.
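The reported diameter contractions translate into substantially larger volumetric shrinkages if the pellets are assumed to shrink isotropically (an assumption of ours; the paper reports only diameter changes):

```python
# Diameter contraction converted to volumetric shrinkage, assuming
# isotropic shrinkage (the paper reports only diameter changes).
diameter_shrinkage = {"0% dung": 0.19, "7.8% dung": 0.24, "14.0% dung": 0.28}

for sample, s in diameter_shrinkage.items():
    volume_loss = 1.0 - (1.0 - s) ** 3
    print(f"{sample}: about {100 * volume_loss:.0f}% volume reduction")
# 0% dung: ~47%, 7.8% dung: ~56%, 14.0% dung: ~63%
```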
The metallisation degree of the carbon-bearing pellets after reduction is shown in Table 4. It can be seen that the degree of metallisation of the reduced samples gradually decreases as the amount of cow dung increases beyond 4.1%. The pellets containing 4.1% cow dung had a degree of metallisation of 88.6% after reduction roasting, which was the highest value obtained.

Table 4
Degree of metallization of carbon-bearing pellets after reduction roasting (%).

| Sample number | Total Fe | Metal Fe | Rm |
| --- | --- | --- | --- |
| 1 | 81.9 | 71.0 | 86.7 |
| 2 | 79.6 | 70.5 | 88.6 |
| 3 | 78.0 | 67.6 | 86.6 |
| 4 | 76.4 | 62.6 | 82.0 |
| 5 | 74.9 | 60.1 | 80.3 |
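The tabulated Rm values are consistent with the usual definition of metallisation degree, Rm = (metallic Fe / total Fe) × 100; the sketch below recomputes them from the Total Fe and Metal Fe columns and reproduces the table to within 0.1 percentage point.

```python
# Recomputing R_m from Table 4 using R_m = 100 * metallic Fe / total Fe.
samples = {1: (81.9, 71.0), 2: (79.6, 70.5), 3: (78.0, 67.6),
           4: (76.4, 62.6), 5: (74.9, 60.1)}  # (total Fe, metallic Fe), wt%

for n, (total_fe, metal_fe) in samples.items():
    r_m = 100.0 * metal_fe / total_fe
    print(f"Sample {n}: R_m = {r_m:.1f}%")
# Sample 2 peaks at 88.6%, in agreement with the text; the other values
# match the table to within 0.1 percentage point.
```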
### 3.3. Phase Analysis of Carbon-Bearing Pellets after Reduction Roasting
Figure 7 shows the XRD spectra of different pellet samples after reduction roasting at temperatures of 873, 1073, 1273, and 1523 K. It can be seen in Figure 7(a) that, after roasting at 873 K, the sample containing no cow dung additive was mainly composed of magnetite (Fe3O4) and small amounts of hematite (α-Fe2O3) and maghemite (γ-Fe2O3). In comparison, the peak intensity of α-Fe2O3 is smaller, while the peak intensity of Fe3O4 is larger, in the sample that originally contained 7.8% cow dung. When this sample (7.8% cow dung) was reduction roasted at 1073 K, diffraction peaks for wüstite (FeO) appeared, while the diffraction peaks of Fe2O3 were no longer detected (Figure 7(b)). After reduction roasting at 1273 K, diffraction peaks for metallic Fe were present, and their intensity was stronger in the sample with 7.8% cow dung compared to the sample with no cow dung (Figure 7(c)). Following reduction roasting at 1523 K, the XRD spectrum of the sample initially containing 7.8% cow dung consisted mainly of Fe diffraction peaks (Figure 7(d)). The results in Figure 7 demonstrate that cow dung addition and reduction temperature affect the extent of reduction of the iron oxides in the pellets. Taking into account the degree of metallisation in Table 4, it can be concluded that the extent of reduction at 1523 K decreases when the amount of cow dung additive exceeds about 4–8%.

Figure 7
XRD of different carbon-bearing pellets after reduction at different temperatures. (a) 873 K; (b) 1073 K; (c) 1273 K; (d) 1523 K.
SEM backscattered images of different carbon-bearing pellet samples reduction roasted at 1273 and 1523 K are shown in Figure 8. Figure 8(a) shows that the edges of iron ore particles were blurred and boundaries between some of the ore and reducing agent particles were hard to distinguish in the sample that had no cow dung addition after roasting at 1273 K. There were few small particles of reducing agent observed, and the structure of the pellet was relatively loose. For the sample with 7.8% cow dung (Figure 8(b)), whole iron ore particles were hard to distinguish, while some metallic iron had formed on the surfaces of some iron ore and reducing agent particles. The microstructures of the roasted samples that contained 7.8% and 14.0% cow dung (Figures 8(b) and 8(c)) were relatively similar. After reduction roasting at 1523 K, the discrete particles of iron ore, reducing agent, and other raw materials disappeared (Figures 8(d)–8(f)), and the reduction product was predominantly metallic iron with a minor amount of iron oxide. In the sample with no added cow dung (Figure 8(d)), the metallic iron phase is connected by a number of fine grains and there are a considerable number of pores or voids in the pellets. The metallic iron in the samples with 7.8% and 14.0% cow dung is flaky, but the sample with 14.0% cow dung has more voids, and the solid solution formed by the metallic iron and incompletely reduced iron oxide is more abundant.

Figure 8
SEM backscattered images of different carbon-bearing pellets after reduction at 1273 and 1523 K. (a) 1273 K, cow dung 0%; (b) 1273 K, cow dung 7.8%; (c) 1273 K, cow dung 14.0%; (d) 1523 K, cow dung 0%; (e) 1523 K, cow dung 7.8%; (f) 1523 K, cow dung 14.0%.
Figures 7 and 8 demonstrate the effect of cow dung addition and the influence of temperature on the microstructure and phase composition of reduction roasted pellets. After reduction roasting at 1523 K, the intergrown iron crystals in pellets that contained cow dung are strongly clustered compared with the samples containing no cow dung. The higher ash content of the cow dung compared to the anthracite can improve the quaternary basicity (mass ratio of (CaO+MgO)/(SiO2+Al2O3)) of the carbon-bearing pellets, promoting the reduction of iron oxides and facilitating the aggregation of intergrown iron crystals [29]. The larger amount of low melting-point amorphous slag produced can infill the pores, making the structure of the pellets stronger and more compact after reduction. However, an excessive addition of cow dung (more than about 4–8%) will lead to more voids caused by hydrocarbon cracking and volatilisation, which will decrease the extent of metallisation during the reduction of iron oxides, resulting in decreased pore filling by amorphous slag phases as well as decreased roasted pellet strength, compared with pellets that contained about 4% cow dung.
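The quaternary basicity defined above is a simple mass ratio. As a point of reference, the sketch below evaluates it for the iron ore of Table 1 alone; the ash analyses of the reducing agents are not reported, so the pellet-level value cannot be reproduced here.

```python
# Quaternary basicity, (CaO + MgO) / (SiO2 + Al2O3) by mass, evaluated
# for the iron ore of Table 1 only (reducing-agent ash analyses are
# not reported in the paper).
def quaternary_basicity(cao: float, mgo: float, sio2: float, al2o3: float) -> float:
    return (cao + mgo) / (sio2 + al2o3)

print(round(quaternary_basicity(1.0, 0.2, 3.3, 1.1), 2))  # ~0.27 for the ore alone
```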
## 4. Conclusions
The following conclusions can be drawn from this study.

(1) The addition of cow dung affects the strength of carbon-bearing pellets. The green pellet strength decreased by about 8–16% and the dry pellet strength increased by about 34–57% after adding cow dung. However, there was no obvious correlation between the quantity of cow dung added and the change in cold pellet strength. Compared with reduction roasted pellets containing no cow dung, roasted pellets containing cow dung had greater strength (about 16–26% stronger), but the strength decreased as the proportion of cow dung increased.

(2) The lower strength of green pellets containing cow dung was found to be due to expansion of the amorphous region of the cellulose contained in the cow dung. The greater strength of dry pellets containing cow dung was found to be the result of chemical adsorption among cellulose, hemicellulose, and the free hydroxy in lignin and the iron concentrate. The rope-shaped arrangement of cellulose and hemicellulose also reinforces the pellet structure.

(3) In the process of reduction roasting of carbon-bearing pellets, cow dung addition is beneficial for the aggregation of intergrown iron crystals and may help to increase the density of the physical structure of the pellets; thus the strength of the reduction roasted pellets is also improved. However, excessive addition of more than about 4–8% cow dung will result in lower pellet density and decreased strength compared with pellets containing about 4% cow dung.
---
*Source: 1019438-2017-11-02.xml* | 1019438-2017-11-02_1019438-2017-11-02.md | 38,195 | Effects of Treated Cow Dung Addition on the Strength of Carbon-Bearing Iron Ore Pellets | Qing-min Meng; Jia-xin Li; Tie-jun Chun; Xiao-feng He; Ru-fei Wei; Ping Wang; Hong-ming Long | Advances in Materials Science and Engineering
(2017) | Engineering & Technology | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2017/1019438 | 1019438-2017-11-02.xml | ---
## Abstract
It is of particular interest to use biomass as an alternative source of fuel in direct-reduction ironmaking to ease the current reliance on fossil fuel energy. The influence of cow dung addition on the strength of carbon-bearing iron ore pellets composed of cow dung, iron ore, anthracite, and bentonite was investigated, the quality of green and dry pellet was evaluated based on FTIR analysis, and the mechanism of strength variation of the reduced pellets was investigated by analysing the phase composition and microstructure using XRD and SEM. The results show that cow dung addition decreased the green pellet strength due to expansion of the amorphous region of the cellulose in the cow dung; however, the dry pellet strength increased substantially. In the process of reduction roasting, it was found that cow dung addition can promote aggregation of iron crystals and increase the density of the pellets, resulting in increased strength of the reduction roasted pellets, while excessive cow dung addition resulted in lower strength.
---
## Body
## 1. Introduction
With the gradual depletion of raw materials for the blast furnace, such as coke and quality iron ore, direct-reduction and smelting reduction technologies using gas, liquid fuels, and noncoking coal as energy sources were developed as cleaner, more environment friendly alternatives, which have been applied widely around the world [1–3]. However, the fuel sources of noncoking coal iron-making technology have not fundamentally changed, and there is a gap in product quality and energy consumption compared to the blast furnace. The use of biomass as an alternative fuel source, to further reduce the consumption of fossil fuels and emission of carbon in the steel-making industry, has become a hot topic among scholars [4–8]. Sterol [9] investigated the mechanisms of iron ore reduction with biomass wood waste. The results showed that the iron ore was successfully reduced to predominantly metallic iron when up to 30 wt% of biomass was introduced into the mixture and reduction commenced at approximately 943 K and was almost completed at 1473 K. Wei et al. [10] studied the characteristics and kinetics of iron oxide reduction by carbon in biomass composites. The result showed that iron oxide can be reduced by biomass very rapidly, and the degree of metallisation and reduction increased with temperature.Iron oxide reduction by carbon in biomass can be divided into two stages, namely, reduction by volatile matter followed by reduction by nonvolatile carbon. The reduction times of the two stages both decrease with increasing temperature. Liu et al. [11] researched the reduction of carbon-bearing pellets, using the reducing agents prepared from carbonization products of rice husk, peanut shells, and wood chips. The result showed that the carbon-bearing pellets could be reduced rapidly between 1473 K and 1573 K in about 15 to 20 minutes, while the higher carbon content and appropriate volatile content in biological carbon were beneficial to the pellet reduction. Han et al. [12] studied the effect of biomass on the reduction of carbon-bearing pellets using charcoal, bamboo charcoal, and straw as reductant. The results showed that the biomass reductants had little effect on metallisation rate, but certain biomass reductants had a substantial influence on the strength and volumetric shrinkage of the pellets. The compressive strength of pellets with straw was relatively higher, while the strength of pellets with charcoal or bamboo charcoal was low.Biomass includes all animals, plants, and microorganisms, including organic waste residues. Thus, maximising the use of biomass could potentially relieve the global energy crisis. The large amount of animal dung produced as a by-product of the Agricultural Industry is causing an increasingly serious environmental problem [13], with the stock of cow dung topping the list. Researchers have made extensive studies regarding the issue of cow dung utilisation. Cow dung reclamation technology, such as energy, composting, and animal feed, has achieved considerable economic and social benefits [14–17]. Other than this, researchers are also exploring the applications of cow dung in areas such as new preparation methods of biomass carbon materials [18–20] and solid waste disposal [21, 22].Similar to other plant biomass, cellulose, hemicellulose, and lignin are the main chemical constituents of dry cow dung. 
The similarity of organic components between cow dung and plant biomass makes it feasible to use cow dung as an alternative reducing agent in iron ore reduction, and this has been demonstrated in various studies. Rath et al. [23] used cow dung as a reductant in the reduction roasting of an iron ore slime containing 56.2% Fe. A concentrate of ~64% Fe, with a recovery of ~66 wt%, was obtained from the reduced product after being subjected to low intensity magnetic separation. Under similar conditions, a concentrate of ~66% Fe, with a recovery of only 35 wt%, was obtained after using conventional charcoal as the reductant (93.5% fixed carbon and 1.2% volatile matter), which demonstrated that cow dung was a better reductant.The key purpose of this study is to investigate the effect of cow dung addition on the strength of carbon-bearing iron ore pellets and its mechanism. The influence of cow dung addition on the quality of green and dry pellets is evaluated based on FTIR analysis, and the mechanism of strength variation of reduction roasted pellets was investigated by analysing the phase composition and microstructure using XRD and SEM, to provide a benchmark for further utilisation of cow dung in direct-reduction ironmaking.
## 2. Materials and Methods
### 2.1. Materials
Carbon-bearing iron ore pellets were prepared by pressing a mixture consisting of iron ore, anthracite, bentonite, and different proportions of cow dung. The chemical composition of the iron ore used in this study is shown in Table1, while the proximate analyses of the anthracite and the cow dung used as the reductant in this study are given in Table 2. The cow dung was obtained from the Mengniu Modern Animal Husbandry (Group), Maanshan Co. First, all of the raw materials were dried at 383 K for 24 h, individually ground in a ball mill to a passing size of 74 μm for the iron ore and 200 μm for the reducing agents. Mixtures of the ground materials were prepared according to the experimental plan given in Table 3. A little water was added, and 20 mm diameter pellets (molar ratio of Cfix/Oiron oxide = 1) were prepared by pressing at 10 MPa.Table 1
Chemical composition of iron ore (wt%).
TFe
FeO
SiO2
Al2O3
CaO
MgO
P
S
61.4
23.5
3.3
1.1
1.0
0.2
0.028
0.328Table 2
Proximate analysis of reducing agent (wt%).
Type of reducing agent
Fixed carbon
Ash
Volatile matter
P
S
Anthracite
78.8
13.4
7.9
0.024
0.580
Cow dung
7.7
24.9
67.4
0.001
0.270Table 3
Raw material ratio of the carbon-bearing iron ore pellets.
Sample number
Iron ore
Anthracite
Cow dung
Bentonite
C
fix
/
O
iron oxide
1
82.5%
17.5%
0.0%
1.6%
1.0
2
79.4%
16.5%
4.1%
1.6%
1.0
3
76.7%
15.6%
7.8%
1.6%
1.0
4
74.2%
14.7%
11.1%
1.6%
1.0
5
72.1%
14.0%
14.0%
1.6%
1.0
### 2.2. Characterization Techniques
Characterization studies were undertaken on the raw materials and some reduction roasted products. The FTIR spectra were obtained with a Nicolet 8700 spectrophotometer by adding 32 scans at a resolution of 4 cm−1, using KBr wafers containing about 0.5 g of sample, which had been dried at 393 K for 24 h before spectral analysis. XRD was carried out with a D8 Advance X-ray powder diffractometer using Cu-Kα radiation. The voltage and current of the machine were set at 60 kV and 80 mA, respectively, scanning from 20° to 80° using a step size of 0.02°. A scanning electron microscope (NOVA Nano SEM430, USA) equipped with EDS detector was used for the microstructure studies.
### 2.3. Strength Tests and Reduction Roasting
The compressive strength of green, dry, and reduction roasted pellets was measured with an automatic compressive strength tester; each sample was tested 20 times under the same conditions and the results were averaged. A schematic of the reduction roasting experimental apparatus is shown in Figure 1. The furnace has a working temperature range of 298–1573 K, with ±1 K temperature control accuracy, and produces a 150 mm hot zone. The carbon-bearing pellets were placed in a Ni-Cr alloy basket suspended in the hot zone, and the pellets were heated to 1523 K at a rate of 20 K/min. High-purity nitrogen gas was supplied to the reaction tube at a constant flow rate of 1 l/min. At the end of the experiment, the basket was quickly moved to the top of the furnace, with high-purity nitrogen still flowing, until the pellets cooled to room temperature.

Figure 1
A schematic of the reduction roasting experimental apparatus.
## 3. Results and Discussion
### 3.1. Effects of Treated Cow Dung Addition on the Cold Strength of Carbon-Bearing Pellets
The cold strength test results for green and dry pellets with different proportions of cow dung are shown in Figure 2. The bar charts show that the strength of green and dry pellets varies with cow dung addition, but there is no clear correlation between the average compressive strength and the ratio of cow dung to anthracite. The average strength of green pellets containing cow dung was 8–16% lower than that of pellets with no dung. For example, the strength of green pellets with no dung was 10.1 N/pellet, while green pellets with a dung-to-anthracite ratio of 1 : 4 (containing 4.1% cow dung) reached 8.5 N/pellet and green pellets with a ratio of 3 : 4 (containing 11.1% cow dung) reached 8.6 N/pellet. In contrast, the average strength of dry pellets containing cow dung was 33.5–56.6% higher than that of dry pellets with no cow dung. For example, the average strength of dry pellets with no cow dung was 18.2 N/pellet, while pellets with a dung-to-anthracite ratio of 1 : 4 had an average strength of 27.7 N/pellet.

Figure 2
Cold strength of carbon-bearing pellets with different proportions of cow dung. (a) Green pellet; (b) dry pellet.
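As a quick arithmetic check (ours, not the authors'), the percentage changes quoted above can be reproduced from the reported strength values:

```python
# Quick consistency check (ours) of the percentage changes quoted above,
# using the strength values reported for Figure 2.
green_no_dung, green_1_to_4 = 10.1, 8.5   # N/pellet
dry_no_dung, dry_1_to_4 = 18.2, 27.7      # N/pellet

print(f"green: {(green_1_to_4 - green_no_dung) / green_no_dung:+.1%}")  # -15.8%, within the quoted 8-16% drop
print(f"dry:   {(dry_1_to_4 - dry_no_dung) / dry_no_dung:+.1%}")        # +52.2%, within the quoted 33.5-56.6% rise
```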
The FTIR spectra of different pellet samples are provided in Figure 3, which shows that the spectrum of the iron ore-anthracite sample (Figure 3(a)) aligns closely with that of the iron ore-anthracite-bentonite sample (Figure 3(b)). The main absorption peaks appear at 3413 cm−1 (stretching vibration of hydroxyl -OH), 1619 cm−1 (antisymmetric vibration peak of carboxyl -COOH), and 1032 cm−1 (Si-O stretching vibration of silicate impurities). This demonstrates that bentonite is not chemically adsorbed onto the raw materials such as iron ore and anthracite, and it can be assumed that the strength of these green pellets with no cow dung was maintained by other mechanisms such as capillary and viscous forces.

Figure 3
FTIR spectra of carbon-bearing pellets, comparing different raw materials.

An SEM backscattered image of a cross-sectioned dry pellet sample with no added cow dung is shown in Figure 4(a). The bentonite, iron concentrate, and finely pulverised coal particles have formed a gel, which surrounds or coats the particles. The gel has a bridging effect and increases the bridge fluid viscosity and surface tension, strengthening the capillary and viscous forces that bind the particles in the green pellets [24]. During drying, the bentonite gel draws the solid particles closer together, increasing the contact area between particles, strengthening the intermolecular forces, and improving the strength of the dry pellet [25].

Figure 4
SEM backscattered images of different carbon-bearing pellets after drying. (a) Dry pellet sample without cow dung; (b) dry pellet sample with 14.0% cow dung.
The FTIR spectrum of a pellet sample containing anthracite, iron ore, and cow dung is shown in Figure 3(c), while that of a pellet sample containing anthracite, iron ore, cow dung, and bentonite is shown in Figure 3(d). The main absorption peaks in Figure 3(c) appear at 3423 cm−1, 1631 cm−1, 1423 cm−1, and 1032 cm−1, while the main peaks in Figure 3(d) appear at 3420 cm−1, 1633 cm−1, and 1032 cm−1. The stretching vibration peak of hydroxyl -OH and the antisymmetric peak of carboxyl -COOH have shifted and increased in intensity. Near 1423 cm−1, the lignin double bond or the hydroxyl of carboxylic acid has generated an in-plane bending vibration peak, which suggests that chemical adsorption has probably occurred between the iron ore and the cellulose, hemicellulose, and free hydroxyl groups of lignin.

An SEM backscattered image of a cross-sectioned dry pellet sample containing 14.0% cow dung is shown in Figure 4(b). The bentonite, iron ore, and finely pulverised coal have again formed a gel that infills or coats the particles, similar to that observed in the sample without cow dung (Figure 4(a)). The cellulose and hemicellulose in the cow dung particles have a rope-shaped arrangement that reinforces the structure of the dry pellets and results in greater strength compared with the dry pellets that did not contain any cow dung.
### 3.2. Effects of Treated Cow Dung Addition on the Strength of Carbon-Bearing Pellets after Reduction Roasting
The average strength of pellets containing different proportions of cow dung after reduction roasting at 1523 K is given in Figure 5. The bar chart shows that the strength of the roasted sample with no cow dung was 2473 N/pellet. The pellets with cow dung additive had a higher strength, but the strength decreased with increasing cow dung addition. The pellets containing 4.1% cow dung had a strength of 3106 N/pellet after reduction roasting, the highest strength obtained (an increase of 25.6%).

Figure 5
Strength of different carbon-bearing pellets after reduction roasting at 1523 K.

The strength of reduction roasted pellets is controlled by the rate of formation of intergrown iron crystals, their abundance, and physical structure, while the bentonite binder has little effect on pellet strength after roasting [26]. When a mixture of anthracite and cow dung is used as the reducing agent in carbon-bearing pellets, the volatile matter in the cow dung cracks at about 773 K, producing H2, CO, CO2, CH4, and other gases [27]. Reduction reactions may occur directly between the iron oxides and H2, CO, and CH4, while CO (a strong reducing agent) is also formed by the Boudouard reaction between C (originating from the anthracite and cow dung additives) and CO2. Devolatilisation generates pores within the carbon-bearing pellets, which increases their permeability to the reducing gases, promoting the rate of reduction and the formation of intergrown iron crystals.

The size of the carbon-bearing pellets before and after reduction roasting is compared in Figure 6, which shows that the diameter of the sample with no cow dung contracted by 19%, the sample with 7.8% cow dung contracted by 24%, and the sample containing 14.0% cow dung contracted by 28%. Thus, pellet shrinkage increases with increasing cow dung content. The shrinkage of reduction roasted pellets is mainly the result of aggregation of intergrown iron crystals, but porosity and the amount of low melting-point slag present may also influence the extent of shrinkage [28]. However, Figures 5 and 6 show that pellet shrinkage does not clearly correlate with higher pellet strength.

Figure 6
Size of the carbon-bearing pellets before and after reduction roasting.

The degree of metallisation of the carbon-bearing pellets after reduction is shown in Table 4. The degree of metallisation of the reduced samples generally decreases as the amount of cow dung increases. The pellets containing 4.1% cow dung had a degree of metallisation of 88.6% after reduction roasting, the highest obtained.

Table 4
Degree of metallisation of carbon-bearing pellets after reduction roasting (%).

| Sample number | Total Fe | Metal Fe | Rm |
|---|---|---|---|
| 1 | 81.9 | 71.0 | 86.7 |
| 2 | 79.6 | 70.5 | 88.6 |
| 3 | 78.0 | 67.6 | 86.6 |
| 4 | 76.4 | 62.6 | 82.0 |
| 5 | 74.9 | 60.1 | 80.3 |
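The tabulated Rm values are consistent with the usual definition of the degree of metallisation as the ratio of metallic to total iron; for sample 2, for example,

$$R_{\mathrm{m}}=\frac{w_{\mathrm{MFe}}}{w_{\mathrm{TFe}}}\times 100\% = \frac{70.5}{79.6}\times 100\% \approx 88.6\%.$$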
### 3.3. Phase Analysis of Carbon-Bearing Pellets after Reduction Roasting
Figure 7 shows the XRD spectra of different pellet samples after reduction roasting at 873, 1073, 1273, and 1523 K. It can be seen in Figure 7(a) that, after roasting at 873 K, the sample containing no cow dung additive was mainly composed of magnetite (Fe3O4) with a small amount of hematite (α-Fe2O3) and maghemite (γ-Fe2O3). In comparison, in the sample that originally contained 7.8% cow dung the α-Fe2O3 peak intensity is smaller and the Fe3O4 peak intensity larger. When this sample (7.8% cow dung) was reduction roasted at 1073 K, diffraction peaks for wüstite (FeO) appeared, while the diffraction peaks of Fe2O3 were no longer detected (Figure 7(b)). After reduction roasting at 1273 K, diffraction peaks for metallic Fe were present, and their intensity was stronger in the sample with 7.8% cow dung than in the sample with no cow dung (Figure 7(c)). Following reduction roasting at 1523 K, the XRD spectrum of the sample initially containing 7.8% cow dung consisted mainly of Fe diffraction peaks (Figure 7(d)). The results in Figure 7 demonstrate that cow dung addition and reduction temperature affect the extent of reduction of the iron oxides in the pellets. Taking into account the degree of metallisation in Table 4, it can be concluded that the extent of reduction at 1523 K decreases when the amount of cow dung additive exceeds about 4–8%.

Figure 7
XRD of different carbon-bearing pellets after reduction at different temperatures. (a) 873 K; (b) 1073 K; (c) 1273 K; (d) 1523 K.
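For reference, the phase sequence observed in Figure 7 corresponds to the standard stepwise gaseous reduction of iron oxides, with CO continuously regenerated by the Boudouard reaction discussed in Section 3.2 (a textbook scheme, not restated in the paper):

$$3\,\mathrm{Fe_2O_3}+\mathrm{CO}\rightarrow 2\,\mathrm{Fe_3O_4}+\mathrm{CO_2}$$
$$\mathrm{Fe_3O_4}+\mathrm{CO}\rightarrow 3\,\mathrm{FeO}+\mathrm{CO_2}$$
$$\mathrm{FeO}+\mathrm{CO}\rightarrow \mathrm{Fe}+\mathrm{CO_2}$$
$$\mathrm{C}+\mathrm{CO_2}\rightarrow 2\,\mathrm{CO}\qquad\text{(Boudouard)}$$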
SEM backscattered images of different carbon-bearing pellet samples reduction roasted at 1273 and 1523 K are shown in Figure 8. Figure 8(a) shows that, in the sample with no cow dung roasted at 1273 K, the edges of iron ore particles were blurred and the boundaries between some of the ore and reducing agent particles were hard to distinguish. Few small particles of reducing agent were observed, and the structure of the pellet was relatively loose. In the sample with 7.8% cow dung (Figure 8(b)), whole iron ore particles were hard to distinguish, while some metallic iron had formed on the surfaces of some iron ore and reducing agent particles. The microstructures of the roasted samples containing 7.8% and 14.0% cow dung (Figures 8(b) and 8(c)) were broadly similar. After reduction roasting at 1523 K, the discrete particles of iron ore, reducing agent, and other raw materials disappeared (Figures 8(d)–8(f)), and the reduction product was predominantly metallic iron with a minor amount of iron oxide. In the sample with no added cow dung (Figure 8(d)), the metallic iron phase is connected by a number of fine grains and there are a considerable number of pores or voids in the pellets. The metallic iron in the samples with 7.8% and 14.0% cow dung is flaky, but the sample with 14.0% cow dung has more voids and a more abundant solid solution formed by metallic iron and incompletely reduced iron oxide.

Figure 8
SEM backscattered images of different carbon-bearing pellets after reduction at 1273 and 1523 K. (a) 1273 K, cow dung 0%; (b) 1273 K, cow dung 7.8%; (c) 1273 K, cow dung 14.0%; (d) 1523 K, cow dung 0%; (e) 1523 K, cow dung 7.8%; (f) 1523 K, cow dung 14.0%.
Figures 7 and 8 demonstrate the effect of cow dung addition and temperature on the microstructure and phase composition of reduction roasted pellets. After reduction roasting at 1523 K, the intergrown iron crystals in pellets that contained cow dung are strongly clustered compared with the samples containing no cow dung. The higher ash content of the cow dung relative to the anthracite can improve the quaternary basicity (mass ratio of (CaO+MgO)/(SiO2+Al2O3)) of the carbon-bearing pellets, promoting the reduction of iron oxides and facilitating the aggregation of intergrown iron crystals [29]. The larger amount of low melting-point amorphous slag produced can infill the pores, making the structure of the pellets stronger and more compact after reduction. However, an excessive addition of cow dung (more than about 4–8%) will lead to more voids caused by hydrocarbon cracking and volatilisation, which decreases the extent of metallisation during the reduction of the iron oxides, resulting in less pore filling by amorphous slag phases and lower roasted pellet strength compared with pellets that contained about 4% cow dung.
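For the iron ore alone, the quaternary basicity defined above evaluates from Table 1 as

$$B_{4}=\frac{w_{\mathrm{CaO}}+w_{\mathrm{MgO}}}{w_{\mathrm{SiO_2}}+w_{\mathrm{Al_2O_3}}}=\frac{1.0+0.2}{3.3+1.1}\approx 0.27.$$

Note that the ash compositions of the reductants are not tabulated here, so the overall basicity of a given pellet mix cannot be computed from the data given; the expression above only illustrates the definition on the ore itself.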
## 4. Conclusions
The following conclusions can be drawn from this study.

(1) The addition of cow dung affects the strength of carbon-bearing pellets. The green pellet strength decreased by about 8–16% and the dry strength increased by about 34–57% after adding cow dung, although there was no obvious correlation between the quantity of cow dung added and the change in cold pellet strength. Compared with reduction roasted pellets containing no cow dung, roasted pellets containing cow dung had greater strength (about 16–26% stronger), but the strength decreased as the proportion of cow dung increased.

(2) The lower strength of green pellets containing cow dung was attributed to expansion of the amorphous region of the cellulose contained in the cow dung. The greater strength of dry pellets containing cow dung was attributed to chemical adsorption among the cellulose, hemicellulose, and free hydroxyl groups of lignin and the iron concentrate. The rope-shaped arrangement of cellulose and hemicellulose also reinforces the pellet structure.

(3) During reduction roasting of carbon-bearing pellets, cow dung addition promotes the aggregation of intergrown iron crystals and may help to densify the physical structure of the pellets, so the strength of the reduction roasted pellets is also improved. However, excessive addition of more than about 4–8% cow dung results in lower pellet density and decreased strength compared with pellets containing about 4% cow dung.
---
*Source: 1019438-2017-11-02.xml*
# Experimental Characterization of Dielectric Properties in Fluid Saturated Artificial Shales
**Authors:** Roman Beloborodov; Marina Pervukhina; Tongcheng Han; Matthew Josh
**Journal:** Geofluids
(2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1019461
---
## Abstract
High dielectric contrast between water and hydrocarbons provides a useful method for distinguishing between producible layers of reservoir rocks and surrounding media. Dielectric response at high frequencies is related to the moisture content of rocks. Correlations between the dielectric permittivity and specific surface area can be used for the estimation of elastic and geomechanical properties of rocks. Knowledge of dielectric loss-factor and relaxation frequency in shales is critical for the design of techniques for effective hydrocarbon extraction and production from unconventional reservoirs. Although applicability of dielectric measurements is intriguing, the data interpretation is very challenging due to many factors influencing the dielectric response. For instance, dielectric permittivity is determined by mineralogical composition of solid fraction, volumetric content and composition of saturating fluid, rock microstructure and geometrical features of its solid components and pore space, temperature, and pressure. In this experimental study, we investigate the frequency dependent dielectric properties of artificial shale rocks prepared from silt-clay mixtures via mechanical compaction. Samples are prepared with various clay contents and pore fluids of different salinity and cation compositions. Measurements of dielectric properties are conducted in two orientations to investigate the dielectric anisotropy as the samples acquire strongly oriented microstructures during the compaction process.
---
## Body
## 1. Introduction
Dielectric permittivity of a material is a measure of its frequency dependent electrical polarizability [1] in an applied external electric field. In the presence of an electric field, electrons, ions, and polar molecules all contribute to the frequency dependent dielectric properties. In composite materials, the build-up of charge at dissimilar conductivity boundaries and the creation of exchangeable cations may completely dominate the dielectric behaviour. For example, a rock consists of liquid, solid, and gas components, each defined by a specific chemical and/or mineral composition. Although the dielectric constants of most individual rock components rarely exceed 80, the bulk dielectric permittivity of a rock sample may reach values many orders of magnitude higher in the radio frequency range (kilohertz to megahertz), implying that not only the constituents of a rock but also their geometry, interaction, spatial arrangement, and interfaces make a significant contribution to polarization [2].

In clay bearing rocks, the presence of a connate brine results in the so-called surface effects: electric double layer polarization (i.e., Stern layer polarization) and Maxwell-Wagner space-charge polarization. They occur due to the presence of polarizable bound water at the clay-water interface and weakly bound ions on the surfaces of mineral particles, which can slowly move under the influence of an external field and contribute to both the conduction and the electric polarization [3]. These effects become prominent at frequencies below the MHz range and complicate the interpretation of data. Figure 1 illustrates the various polarization effects with the corresponding frequency bands of their occurrence.

Figure 1
Schematics of dielectric polarization/conduction mechanisms, modified from Josh [11].

Dielectric measurements in petrophysics and the petroleum industry are mostly used to determine the water content. Water molecules have a fixed electrical dipole moment and rotate quickly to align with an external electric field [4], whereas hydrocarbons have nonpolar molecules and as a result have much lower permittivity than water. Therefore, estimating moisture content from the dielectric permittivity at high frequencies (>1 GHz) is often used in borehole petrophysics. However, dielectric logging at frequencies of 1 GHz and above remains challenging in field measurements due to the decreased depth of electric field penetration. Instead, currently available commercial dielectric tools operate at multiple spot frequencies [5] to facilitate the determination of water content and CEC: the dielectric response at frequencies below 50 MHz is more affected by the surface effects described above [6]. At lower frequencies, the clay content, the geometry and spatial orientation of mineral rock constituents, and the salinity of the pore fluid also affect the measured dielectric permittivity [7, 8].

Some recent studies suggest alternative methods for oil recovery in unconventional reservoirs that utilize dielectric heating [9]. This technology allows increasing the economic efficiency of exploration and reducing the ecological impact on the environment. It relies heavily on knowledge of the dielectric behaviour in the kilohertz-megahertz range of frequencies, where the depth of electric field penetration is sufficient for field applications and the dielectric loss-factor exhibits peak values [10]. However, the interplay of several physical effects results in a wide dispersion of dielectric relaxation peaks at frequencies below gigahertz. Identifying the frequencies of these attenuation peaks, as well as estimating the range of dielectric dispersion for specific types of rock, would be useful for tuning the frequency of dielectric heating tools to ensure energy efficient and productive heating.

Another purpose of dielectric measurements is the estimation of cation exchange capacity (CEC), a property that is directly attributed to mineral composition and specifically to clay content. Josh [11] showed a strong correlation between the dielectric response at ~30 MHz and the CEC. In turn, this property is linearly proportional to the specific surface area (SSA) of a rock, which determines the character and number of interparticle contacts (coordination number) for a given mineral composition and porosity. Therefore, it is expected that the dielectric constant should correlate with both static and dynamic elastic properties of a rock.

It is hard to segregate and independently study the different factors influencing the dielectric properties of natural shales due to the complex composition of their mineral and fluid components. There are many experimental and theoretical studies dedicated to the frequency dependent dielectric properties of rocks in relation to the geometry of their components (e.g., aspect ratios and orientation of mineral grains and pores), water saturation, and organic content (e.g., [2, 12, 13]). However, there are very few studies taking into account the clay content, microstructure, and the chemical composition of the pore fluid (e.g., [4, 7]).
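Throughout the paper, the measured quantities are the real and imaginary parts of the complex relative permittivity; in the standard convention (not restated by the authors),

$$\varepsilon^{*}(\omega)=\varepsilon'(\omega)-j\,\varepsilon''(\omega),\qquad \tan\delta(\omega)=\frac{\varepsilon''(\omega)}{\varepsilon'(\omega)},$$

where $\varepsilon'$ quantifies the polarizability, $\varepsilon''$ is the loss-factor, and $\tan\delta$ is the loss tangent.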
The anisotropy of dielectric properties of shales is crucial for the interpretation of dielectric well-logs, especially in deviated wells, and may affect the efficiency of production via dielectric heating in unconventional shale reservoirs [14, 15]. Again, there is little to no evidence on how the anisotropy of dielectric properties of shales depends on the aforementioned parameters. The least known parameter in terms of its effect on the dielectric response is the microstructure of the clay fraction in shales. Different geological settings may significantly affect the microstructure of a sediment and consequently its properties [16]. For example, Dong and Wang [17] showed that kaolinite sediments saturated with water solutions of different pH exhibit significantly different dielectric spectra due to the different mechanisms of microstructure formation.

In this work, we prepare artificial shale samples by laboratory mechanical compaction of simple water-based mixtures of kaolinite clay and quartz powder. Our goal is to use these models of natural shales to independently investigate the influence of clay content, microstructure, and the chemical composition of the pore fluid on their frequency dependent dielectric properties and anisotropy.
## 2. Methodology
### 2.1. Sample Preparation
Artificial shale samples were prepared from mineral mixtures via mechanical compaction. In order to simplify the modelled rock and reduce uncertainties related to multiphase composition, we chose simple mineral components and pore fluid chemicals. Quartz and kaolinite were used for the silt and clay fractions as they are common minerals in natural shales. Compared with minerals from the smectite group, kaolinite is a nonswelling clay, which makes it one of the easiest clay minerals to work with. The crushed quartz powder consists of silt-sized grains.

To prepare mixtures with different types of initial clay microstructure, we utilized a physicochemical approach. The presence of electrolyte in the pore fluid leads to aggregation of kaolinite clay particles by shrinking the diffuse layer of water around the particles, with a consequent loss of their stability as a colloidal system [19]. Samples with aggregated clay microstructure were prepared by adding a brine solution so that clay platelets and their ultramicroaggregates (basic associations of a few axially aligned individual clay platelets) combine together (Figure 2) and form thick conforming coats on the surfaces of quartz grains (Figure 3).

Figure 2
Aggregates of the kaolinite particles.

Figure 3
Kaolinite clay coating (in yellow) on the surface of the quartz grain (in red).

It is important to note that untreated kaolinite powder initially resides in an aggregated state. Thus, samples with dispersed clay microstructure were prepared by boiling the mixtures with 25 ml of a dispersant (4% sodium pyrophosphate tetrabasic), which separates existing clay aggregates into individual clay platelets and their smaller associations. This replaces the exchangeable cations on the surfaces of the clay particles with Na+ ions, whose hydrate envelopes repel and separate the clay particles from each other.

A summary of all the prepared mixtures is given in Table 1. Samples were named using a specific nomenclature: the first letter stands for either aggregated (A) or dispersed (D) clay microstructure, the following numbers indicate the weight ratio of quartz to kaolinite constituents in percentage, and the last number shows the salinity of the pore fluid in g/l.

Table 1
Specification of the samples.

| Name | Type of clay microstructure | Clay content (%) | Porosity (%) | Salinity (g/l) | Salt |
|---|---|---|---|---|---|
| D0100_0 | Dispersed | 100 | 21 | 0 | – |
| D4555_0 | Dispersed | 55 | 10 | 0 | – |
| C4 | Dispersed | 75 | 28 | 0 | – |
| C5 | Dispersed | 60 | 23 | 0 | – |
| A1090_0 | Aggregated | 90 | 13 | 0 | – |
| A0100_10 | Aggregated | 100 | 10 | 10 | NaCl |
| A0100_34 | Aggregated | 100 | 26 | 34 | NaCl |
| A0100_75 | Aggregated | 100 | 18 | 75 | KCl |
| A2575_75 | Aggregated | 75 | 16 | 75 | KCl |
| A4060_75 | Aggregated | 60 | 14 | 75 | KCl |
| C2 | Aggregated | 75 | 23 | 75 | KCl |
| C3 | Aggregated | 60 | 24 | 75 | KCl |
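The naming scheme described above is mechanical enough to parse programmatically. A small illustrative helper (ours, not from the paper; the samples named C2–C5 do not follow the scheme):

```python
import re

def parse_sample_name(name):
    """Parse the Table 1 nomenclature, e.g. 'A2575_75'.

    First letter: (A)ggregated or (D)ispersed clay microstructure.
    Digits before '_': quartz and kaolinite weight percentages (sum to 100).
    Digits after '_': pore-fluid salinity in g/l.
    """
    m = re.fullmatch(r"([AD])(\d+)_(\d+)", name)
    if m is None:
        raise ValueError(f"{name!r} does not follow the A/D nomenclature")
    digits = m.group(2)
    # Split the fused quartz/kaolinite percentages where they sum to 100.
    for i in range(1, len(digits)):
        quartz, clay = int(digits[:i]), int(digits[i:])
        if quartz + clay == 100:
            break
    else:
        raise ValueError(f"cannot split ratio digits {digits!r}")
    return {
        "microstructure": "aggregated" if m.group(1) == "A" else "dispersed",
        "quartz_wt_pct": quartz,
        "kaolinite_wt_pct": clay,
        "salinity_g_per_l": int(m.group(3)),
    }

# Examples from Table 1:
print(parse_sample_name("A2575_75"))  # aggregated, 25% quartz, 75% kaolinite, 75 g/l
print(parse_sample_name("D0100_0"))   # dispersed, 0% quartz, 100% kaolinite, fresh water
```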
### 2.2. Mechanical Compaction and Parallel Plate Dielectric Measurements
Samples were compacted in a high-pressure oedometer by applying uniaxial stress. Plastic pistons transmit the stress from an actuator that is manually operated with a hydraulic pump. The cell was designed to safely sustain 80 MPa of vertical stress, which corresponds to ~3 km depth in a sedimentary basin. The oedometer was made of PEEK plastic, which is strong enough to maintain zero lateral strain. Compacted shales were gently ejected from the oedometer, covered with a thick layer of wax, and preserved in a low-temperature humid atmosphere to prevent desiccation. Further details on the mechanical compaction methodology can be found in Beloborodov et al. [16].

A subsample was cut from each of the compacted samples and its dielectric properties were measured in a parallel plate dielectric rig (Figure 4). The electrodes of the rig are made of brass and can be changed to match the sample size. For this study, small electrodes of 1 cm diameter were used due to the relatively small size of the subsamples (~2 cm in diameter). To ensure proper coupling with the electrodes of the measurement cell and to maintain a parallel electric field normal to the faces of the subsample disc, each disc was polished on a diamond surface grinder to achieve parallel faces with 30 μm tolerance; the thickness of a sample should not exceed one fifth of its diameter. The subsample discs were each placed in the parallel plate measurement cell, where they were measured with an impedance analyser. Two methods of running this device were employed. Bare coupling was used to measure the effective conductivity of the compacted samples: the disc subsample is simply placed between the parallel plate electrodes so that they are in direct contact with the bare faces of the subsample. For shales, this method provides a good conductive coupling between the rock surface and the electrodes. Coupling with an insulating film was also used, to block the current flow and enhance the relative contribution of polarization effects so that the frequency dispersion of the real and imaginary parts of the relative dielectric permittivity can be determined accurately.

Figure 4
Principal scheme of the parallel plate measurement setup, modified from Josh [18].

Twelve artificial shale samples were prepared via laboratory mechanical compaction. Dielectric analysis of the samples was conducted in two directions, normal (Figure 5(a)) and parallel to the bedding plane (Figure 5(b)), to investigate the dielectric anisotropy.

Figure 5
Parallel plate measurement schematics. Black lining illustrates the bedding of the sample.
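For orientation, here is a minimal sketch (ours, with illustrative numbers) of how the parallel plate geometry converts impedance analyser readings into the reported quantities, assuming the instrument reports a parallel-equivalent capacitance C and conductance G:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def relative_permittivity(capacitance, thickness, diameter):
    """Real relative permittivity of a disc from its parallel-plate
    capacitance, via C = EPS0 * eps_r * A / d."""
    area = math.pi * (diameter / 2.0) ** 2
    return capacitance * thickness / (EPS0 * area)

def effective_conductivity(conductance, thickness, diameter):
    """Effective conductivity from the measured conductance, via G = sigma * A / d."""
    area = math.pi * (diameter / 2.0) ** 2
    return conductance * thickness / area

# Illustrative reading: a 2 mm thick disc under the 1 cm electrodes
# showing 3 pF of capacitance corresponds to eps_r of roughly 8.6.
print(relative_permittivity(3e-12, 2e-3, 1e-2))
```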
## 3. Results and Discussion
Dielectric properties are presented in Figures 6–13 for different parameters, namely, the salinity of the pore fluid, its cation composition, the clay content, and the type of initial clay microstructure. Figure 6 illustrates the positive linear trends of real dielectric permittivity versus porosity for the samples prepared with brine and with fresh water, separately. The dielectric response at frequencies above 1 GHz is usually attributed to the dipole polarization of water molecules and can be used to estimate the amount of water in the pore space. The linear correlation decreases with decreasing frequency, from R² = 0.95 at 100 MHz to R² = 0.70 at 10 MHz, due to contributions from other polarization mechanisms occurring at lower frequencies (e.g., surface-charge polarization). Although it is believed that brine salinity has no effect on the dielectric response of saturated rock samples at high frequencies, our experimental results show clearly separated trends for the two groups of artificial shale samples prepared with brine and with fresh water. This difference is attributable to the changes occurring in the electric double layer of clays in the presence of electrolyte. In the samples saturated with highly concentrated brine, the positive Na and K ions compensate the free charges on the surfaces of the clay particles. Therefore, the diffuse layer of weakly bound water around the clay particles shrinks significantly [20, 21] and leaves more free water molecules that are easily polarizable within the sample. On the other hand, the samples prepared with fresh water exhibit thick hydrate envelopes around the clay particles, hindering a large fraction of the water dipoles from polarizing in the presence of an electric field. In our experimental results this effect produces one order of magnitude difference between the brine saturated and fresh water samples over a wide range of porosity at megahertz frequencies. It is also important to note that the type of active cation in the brine saturated samples seems to have a negligible effect on dielectric permittivity in the dipole polarization frequency range, as all the samples prepared with different concentrations and compositions of brine follow the same linear trends. Using the linear trends described above, it is possible to compare different samples at the same porosity. These trends are used below to investigate the effects of clay content, salinity, and microstructure.

Figure 6
Dielectric response of brine saturated and fresh water samples at 100, 30, and 10 MHz.

Figure 7
Surface plot of conductivity, measured parallel (in green) and normal (in red) to the bedding, versus porosity and frequency in six brine saturated artificial shale samples.

Figure 8
Dielectric anisotropy versus frequency in brine saturated and fresh water samples.

Figure 9
Effect of clay content on frequency dependent conductivity in samples measured parallel and normal to the bedding plane.

Figure 10
Effect of microstructure on conductivity of artificial shales measured parallel and normal to the bedding plane.

Figure 11
Dielectric response of the three brine-saturated artificial shale samples with clay contents of 60, 75, and 100 per cent.

Figure 12
Frequency dependent dielectric loss in artificial shales with varying clay content.

Figure 13
Frequency dependent dielectric loss in brine saturated artificial shales.

Figure 7 shows that, across a wide range of frequencies, the horizontal conductivity in the different samples is almost always higher than the vertical one. We have previously shown that the particles of clay and silt tend to orient with compaction normal to the direction of the applied compaction stress [16, 22]. Therefore, the interconnected pore network in the artificial shale samples is also preferentially oriented in the bedding direction and provides an effective pathway for charge carriers, whereas in the direction normal to bedding the conductive pathways exhibit greater tortuosity and obstruct the movement of charges. The conductivity anisotropy also increases with porosity reduction as the microstructure becomes more oriented in the horizontal direction. Both horizontal and vertical conductivities increase with increasing porosity. At high frequencies, the influence of fluid conductivity grows with the increase of the pore volume occupied by the water solution.

The frequency dependent dielectric anisotropy of eight artificial shale samples is shown in Figure 8. The dashed line on the plot corresponds to an isotropic system, while points above and below this line correspond to anisotropic systems with polarization effects dominating in the horizontal and vertical directions, respectively. It is important to note that the fresh water samples are always better polarized in the horizontal direction and reside above the isotropy line, whereas the brine saturated samples are better polarized in the vertical direction at lower frequencies and exhibit an inversion of the anisotropy, with crossover points in the megahertz range.

The anisotropy curves for all the samples exhibit peak values in the megahertz range, and their maximum values and frequency distributions are determined by the type and concentration of salt ions in the saturating fluid. Thus, the samples prepared with fresh water or low concentration fluids show the highest anisotropy peaks, located at the lower end of the megahertz range; these are followed by three peaks at the same frequency of ~60 MHz corresponding to the samples saturated with KCl brine. The lowest peak, belonging to the sample saturated with NaCl brine, occurs at frequencies above 100 MHz. The crossover points with the isotropy line in the brine saturated samples are also distributed across the megahertz range, similarly to the peak values but at lower frequencies.

This anisotropic behaviour may be explained with electric double layer theory. Given that the clay particles in the fresh water samples have thick hydrate envelopes of weakly bound water, the dipole polarization of water molecules is achieved more easily in the plane parallel to bedding, simply because the clay particles in all the samples are mostly oriented in the horizontal direction. The water molecules are oriented with their hydrogen atoms towards the surfaces of the clay particles and are easily skewed sideways by an electric field parallel to the bedding, but resist a field applied normal to it. In the brine saturated samples this effect is less pronounced across the wide frequency range due to the significantly thinner diffuse part of the electric double layers where the above phenomenon occurs. Therefore, considering the distinctive features of dielectric anisotropy in the artificial samples, one might infer that anisotropy analysis can help with understanding the composition and concentration of the saturating fluids in clay rocks. However, more data on artificial and natural rocks need to be analysed to confirm these relationships.
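Assuming the anisotropy plotted in Figure 8 is quoted as the horizontal-to-vertical ratio (consistent with the isotropy line at unity described above), a compact definition is

$$\lambda_{\varepsilon}(\omega)=\frac{\varepsilon'_{h}(\omega)}{\varepsilon'_{v}(\omega)},\qquad \lambda_{\sigma}(\omega)=\frac{\sigma_{h}(\omega)}{\sigma_{v}(\omega)},$$

with $\lambda>1$ indicating stronger polarization (or conduction) parallel to bedding and $\lambda<1$ a stronger response normal to bedding.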
Figure 9 shows that an increase of clay content at the same porosity results in greater conductivity in both the vertical and horizontal directions. This behaviour is caused by the surface conduction mechanisms characteristic of clay particles. It has been shown that the counter ions located in the Stern layer of clay particles are the dominant contributors to surface conduction and that, in brine saturated rocks with salinity above 1 mol/l, the mobility of K and Na ions is about one-tenth of that in the free fluid and is independent of salinity [23]. Therefore, in our experiments, replacing a fraction of the quartz with clay particles, which carry much more uncompensated charge, results in proportionally stronger surface conduction.

Figure 10 shows that the conductivity of the samples prepared with dispersed clay microstructure is always higher than that of the aggregated samples in both directions, independently of the silt content and porosity. This is due to the greater surface conduction in the Stern layer of the dispersed samples, where the clay particles are separated from each other and expose a greater free surface than their associations in the aggregated samples.

At frequencies in the range of 1 kHz to 100 MHz the vertical polarization dominates in the brine saturated samples, as shown in Figure 11. The main polarization mechanism in this frequency band is the Maxwell-Wagner polarization of counter ions. The movement of ions in the vertical direction is restricted due to the strong orientation of clay particles normal to the applied electric field and the force balance within their electric double layers. In this case each individual clay particle acts as a capacitor, and an ensemble of such particles gives rise to strong polarization effects exceeding those of the individual fluid and mineral phases [3]. In contrast, the cations can easily be drawn along the surfaces of clay platelets and through the less tortuous pore network along the bedding. Thus, in the presence of a low frequency electric field (below megahertz) normal to the bedding, hydrated counter ions are prone to polarization rather than conduction, and vice versa for an electric field in the bedding direction. Also, the higher the clay content, the more pronounced the polarization effects in both directions, due to the higher concentration and better alignment of clay platelets at the same porosity level [16].

Figure 11 also shows the rollover of the real relative permittivity in the MHz range of frequencies for both the vertical and horizontal measurements. According to the Kramers–Kronig relationship [24], these rollovers correspond to the peaks in the plots of imaginary relative permittivity in Figure 12. The peak values of the dielectric loss-factor in the vertical direction appear at approximately one order of magnitude lower frequencies and are ~20% higher than those measured in the horizontal direction. These effects must be taken into account when designing borehole heating antennas for effective hydrocarbon extraction in unconventional reservoirs.
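The Kramers–Kronig relation invoked above links the dispersion of the real part to the loss spectrum; in its standard form (not restated by the authors),

$$\varepsilon'(\omega)-\varepsilon_{\infty}=\frac{2}{\pi}\,\mathrm{P}\!\int_{0}^{\infty}\frac{\omega'\,\varepsilon''(\omega')}{\omega'^{2}-\omega^{2}}\,\mathrm{d}\omega',$$

so a rollover (step) in $\varepsilon'(\omega)$ necessarily coincides with a peak in $\varepsilon''(\omega)$.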
Also, samples with higher clay content show higher values of the loss-factor due to the sharper rollover in the real relative permittivity, caused by the change in polarization mechanism from surface-charge to dipole polarization with increasing frequency. Figure 13 illustrates the dielectric loss-factor for the three samples saturated with brines of different cation composition and concentration. All the peaks for the measured samples are located in the MHz range. The distribution of the peaks over frequency depends on the salinity of the pore fluid and its cation composition. Hence, increasing the salinity from 10 to 34 g/l in the pure kaolinite samples saturated with NaCl shifts the peak frequency from 2 × 10⁶ to 3 × 10⁷ Hz. The sample saturated with the 75 g/l KCl solution exhibits a peak frequency of 4 × 10⁷ Hz, which highlights the effect of different salt ions on the dielectric loss-factor of clay rocks.
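One convenient way to read these peak shifts — an interpretation added here, not an analysis made in the paper — is through a single-relaxation (Debye) model, in which the loss peak sits at $f_{\mathrm{peak}}=1/(2\pi\tau)$ for relaxation time $\tau$:

$$\tau=\frac{1}{2\pi f_{\mathrm{peak}}}:\qquad f_{\mathrm{peak}}=2\times10^{6}\ \mathrm{Hz}\;\Rightarrow\;\tau\approx 8\times10^{-8}\ \mathrm{s},\qquad f_{\mathrm{peak}}=3\times10^{7}\ \mathrm{Hz}\;\Rightarrow\;\tau\approx 5\times10^{-9}\ \mathrm{s}.$$

On this reading, raising the salinity from 10 to 34 g/l shortens the effective relaxation time of the dominant mechanism by more than an order of magnitude.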
## 4. Conclusions
Artificial shales with simple mineral composition illustrate the broad frequency dispersion of dielectric effects. Variations in the salinity of the connate water, its cation composition, the clay content, and the microstructure of artificial shales significantly affect their complex dielectric permittivity and conductivity. These effects can be explained by Maxwell-Wagner polarization at frequencies below the megahertz range and by the changes occurring in the electric double layer of clay particles in the presence of electrolyte at higher frequencies. It is shown that at high frequencies (above 10 MHz) the real relative permittivity follows different linear trends with porosity in fresh water and brine saturated samples. The salinity and cation composition of the pore fluid seem to have a negligible effect on these high frequency dielectric trends. Formation of the anisotropic microstructure of artificial shales during mechanical compaction results in significant values of dielectric anisotropy, between 2 and 4. The magnitude and characteristic frequency of the peak values in the anisotropy curves, as well as the crossover with the isotropy line, depend on the salinity and cation composition of the saturating fluid and on the clay content of the samples. The absolute peak value of dielectric loss in shales and its characteristic frequency depend not only on the amount of connate fluid, but also on the cation composition of the saturating brine, its salinity, and the orientation of the applied electric field relative to the shale bedding. The peak values of the dielectric loss measured along and normal to bedding lie within the megahertz frequency range, with a significant separation of approximately one order of magnitude. The absolute values of these peaks are approximately 20 percent higher in the direction normal to the bedding. Our simplistic models of natural shales exhibit complex dielectric behaviour similar to that of real rocks. The theoretical modelling of the dielectric response is conducted in a companion paper, illustrating the use of artificial shales for the design and calibration of rock physics models.
---
*Source: 1019461-2017-12-20.xml* | 1019461-2017-12-20_1019461-2017-12-20.md | 32,459 | Experimental Characterization of Dielectric Properties in Fluid Saturated Artificial Shales | Roman Beloborodov; Marina Pervukhina; Tongcheng Han; Matthew Josh | Geofluids
(2017) | Engineering & Technology | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2017/1019461 | 1019461-2017-12-20.xml | ---
## Abstract
High dielectric contrast between water and hydrocarbons provides a useful method for distinguishing between producible layers of reservoir rocks and the surrounding media. Dielectric response at high frequencies is related to the moisture content of rocks. Correlations between the dielectric permittivity and the specific surface area can be used for the estimation of elastic and geomechanical properties of rocks. Knowledge of the dielectric loss-factor and relaxation frequency in shales is critical for the design of techniques for effective hydrocarbon extraction and production from unconventional reservoirs. Although the applicability of dielectric measurements is intriguing, the data interpretation is very challenging due to the many factors influencing the dielectric response. For instance, dielectric permittivity is determined by the mineralogical composition of the solid fraction, the volumetric content and composition of the saturating fluid, the rock microstructure and geometrical features of its solid components and pore space, temperature, and pressure. In this experimental study, we investigate the frequency dependent dielectric properties of artificial shale rocks prepared from silt-clay mixtures via mechanical compaction. Samples are prepared with various clay contents and pore fluids of different salinity and cation compositions. Measurements of dielectric properties are conducted in two orientations to investigate the dielectric anisotropy, as the samples acquire strongly oriented microstructures during the compaction process.
---
## Body
## 1. Introduction
Dielectric permittivity of a material is a measure of its frequency dependent electrical polarizability in an applied external electric field [1]. In the presence of an electric field, electrons, ions, and polar molecules all contribute to the frequency dependent dielectric properties. In composite materials, the build-up of charge at dissimilar conductivity boundaries and the creation of exchangeable cations may completely dominate the dielectric behaviour. For example, rock consists of liquid, solid, and gas components, each of them defined by a specific chemical and/or mineral composition. Although the dielectric constants of most individual rock components rarely exceed 80, the bulk dielectric permittivity of a rock sample may be many orders of magnitude higher in the radio frequency range (kilohertz to megahertz), implying that not only the constituents of a rock but also their geometry, interaction, spatial arrangement, and interfaces make a significant contribution to polarization [2]. In clay bearing rocks, the presence of a connate brine results in the so-called surface effects: electric double layer polarization (i.e., Stern layer polarization) and Maxwell-Wagner space-charge polarization. They occur due to the presence of polarizable bound water at the clay-water interface and weakly bound ions on the surfaces of mineral particles, which can slowly move under the influence of an external field and contribute to both conduction and electric polarization [3]. These effects become prominent at frequencies below the MHz range and complicate the interpretation of data. Figure 1 illustrates the various polarization effects with the corresponding frequency bands of their occurrence. Figure 1
Schematics of dielectric polarization/conduction mechanisms, modified from Josh [11]. Dielectric measurements in petrophysics and the petroleum industry are mostly used to determine the water content. Water molecules have a fixed electrical dipole moment and rotate quickly to align with an external electric field [4], whereas hydrocarbons have nonpolar molecules and, as a result, much lower permittivity than water. Therefore, estimating moisture content from the dielectric permittivity at high frequencies (>1 GHz) is often used in borehole petrophysics. However, dielectric logging at frequencies of 1 GHz and above remains challenging in field measurements due to the decreased depth of electric field penetration. Instead, currently available commercial dielectric tools operate at multiple spot frequencies [5] to facilitate the determination of water content and CEC: the dielectric response at frequencies below 50 MHz is more affected by the surface effects described above [6]. At lower frequencies, the clay content, the geometry and spatial orientations of mineral rock constituents, and the salinity of the pore fluid also affect the measured dielectric permittivity [7, 8]. Some recent studies suggest alternative methods for oil recovery in unconventional reservoirs that utilize dielectric heating [9]. This technology allows increasing the economic efficiency of exploration and reducing the ecological impact on the environment. It relies heavily on knowledge of the dielectric behaviour in the kilohertz-megahertz range of frequencies, where the depth of electric field penetration is sufficient for field applications and the dielectric loss-factor exhibits peak values [10]. However, the interplay of several physical effects results in wide dispersion of dielectric relaxation peaks at frequencies below a gigahertz. Identifying the frequencies of these attenuation peaks, as well as estimating the range of dielectric dispersion for specific types of rock, would be useful for tuning the frequency of dielectric heating tools to ensure energy efficient and productive heating. Another purpose of dielectric measurements is the estimation of cation exchange capacity (CEC), a property that is directly attributed to mineral composition and specifically to clay content. Josh [11] showed a strong correlation between the dielectric response at ~30 MHz and the CEC. In turn, this property is linearly proportional to the specific surface area (SSA) of a rock, which determines the character and number of interparticle contacts (coordination number) for a given mineral composition and porosity. Therefore, it is expected that the dielectric constant should correlate with both static and dynamic elastic properties of a rock. It is hard to segregate and independently study the different factors influencing the dielectric properties of natural shales due to the complex composition of their mineral and fluid components. There are many experimental and theoretical studies dedicated to the frequency dependent dielectric properties of rocks in relation to the geometry of their components (e.g., aspect ratios and orientation of mineral grains and pores), water saturation, and organic content (e.g., [2, 12, 13]). However, there are very few studies taking into account the clay content, the microstructure, and the chemical composition of the pore fluid (e.g., [4, 7]).
The anisotropy of dielectric properties of shales is crucial for the interpretation of dielectric well-logs, especially in deviated wells, and may affect the efficiency of production via dielectric heating in unconventional shale reservoirs [14, 15]. Again, there is little to no evidence on how the anisotropy of dielectric properties of shales depends on the aforementioned parameters. The least known parameter in terms of its effect on the dielectric response is the microstructure of the clay fraction in shales. Different geological settings may significantly affect the microstructure of a sediment and consequently its properties [16]. For example, Dong and Wang [17] showed that kaolinite sediments saturated with water solutions of different pH exhibit significantly different dielectric spectra due to the different mechanisms of microstructure formation. In this work, we prepare artificial shale samples using laboratory mechanical compaction of simple water-based mixtures of kaolinite clay and quartz powder. Our goal is to use these models of natural shales to extend our knowledge of, and independently investigate, the influence of clay content, microstructure, and the chemical composition of the pore fluid on their frequency dependent dielectric properties and anisotropy.
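For reference in what follows: the quantities reported below (real relative permittivity, dielectric loss-factor, and effective conductivity) are linked through the standard definition of the complex relative permittivity. These are textbook conventions added here for convenience, not equations taken from the original paper:

$$\varepsilon^{*}(\omega)=\varepsilon'(\omega)-i\,\varepsilon''(\omega),\qquad \tan\delta=\frac{\varepsilon''(\omega)}{\varepsilon'(\omega)},\qquad \sigma_{\mathrm{eff}}(\omega)=\omega\,\varepsilon_{0}\,\varepsilon''(\omega),$$

where $\omega$ is the angular frequency, $\varepsilon_{0}\approx 8.854\times10^{-12}$ F/m is the vacuum permittivity, and $\tan\delta$ is the loss tangent.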
## 2. Methodology
### 2.1. Sample Preparation
Artificial shale samples were prepared from mineral mixtures via mechanical compaction. In order to simplify the modelled rock and reduce uncertainties related to multiphase composition, we chose simplistic mineral components and pore fluid chemicals. Quartz and kaolinite were used for the silt and clay fractions, as they are common minerals in natural shales. Compared with minerals from the smectite group, kaolinite is a nonswelling clay, which makes it one of the easiest clay minerals to work with. The crushed quartz powder consists of silt-sized grains. To prepare mixtures with different types of initial clay microstructure, we utilized a physicochemical approach. The presence of electrolyte in the pore fluid leads to aggregation of kaolinite clay particles by shrinking the diffuse layer of water around the particles, with a subsequent loss of their stability as a colloidal system [19]. Samples with an aggregated clay microstructure were prepared by adding a brine solution so that clay platelets and their ultramicroaggregates (basic associations of a few axially aligned individual clay platelets) combine together (Figure 2) and form thick conforming coats on the surfaces of quartz grains (Figure 3). Figure 2
Aggregates of the kaolinite particles. Figure 3
Kaolinite clay coating (in yellow) on the surface of the quartz grain (in red). It is important to note that initially untreated kaolinite powder resides in an aggregated state. Thus, samples with a dispersed clay microstructure were prepared by boiling the mixtures with 25 ml of a dispersant – 4% sodium pyrophosphate tetrabasic – which separates existing clay aggregates into individual clay platelets and their smaller associations. This allows the exchangeable cations on the surfaces of clay particles to be replaced with Na+ ions, so that their hydrate envelopes repel and separate the clay particles from each other. A summary of all the prepared mixtures is given in Table 1. Samples were named using a specific nomenclature where the first letter stands for either aggregated (A) or dispersed (D) clay microstructure, the following numbers indicate the weight ratio of quartz to kaolinite constituents in percentage, and the last number shows the salinity of the pore fluid in g/l. Table 1
Specification of the samples.
| Name | Type of clay microstructure | Clay content (%) | Porosity (%) | Salinity (g/l) | Salt |
|---|---|---|---|---|---|
| D0100_0 | Dispersed | 100 | 21 | 0 | – |
| D4555_0 | Dispersed | 55 | 10 | 0 | – |
| C4 | Dispersed | 75 | 28 | 0 | – |
| C5 | Dispersed | 60 | 23 | 0 | – |
| A1090_0 | Aggregated | 90 | 13 | 0 | – |
| A0100_10 | Aggregated | 100 | 10 | 10 | NaCl |
| A0100_34 | Aggregated | 100 | 26 | 34 | NaCl |
| A0100_75 | Aggregated | 100 | 18 | 75 | KCl |
| A2575_75 | Aggregated | 75 | 16 | 75 | KCl |
| A4060_75 | Aggregated | 60 | 14 | 75 | KCl |
| C2 | Aggregated | 75 | 23 | 75 | KCl |
| C3 | Aggregated | 60 | 24 | 75 | KCl |
### 2.2. Mechanical Compaction and Parallel Plate Dielectric Measurements
Samples were compacted in a high-pressure oedometer by applying uniaxial stress. Plastic pistons transmit the stress from an actuator that is manually operated with a hydraulic pump. The cell was designed to safely sustain 80 MPa of vertical stress, which corresponds to ~3 km depth in a sedimentary basin. The oedometer was made of PEEK plastic; this material is strong enough to maintain zero lateral strain. Compacted shales were gently ejected from the oedometer, covered with a thick layer of wax, and preserved in a low-temperature humid atmosphere to prevent desiccation. Further details on the mechanical compaction methodology can be found in Beloborodov et al. [16]. A subsample was cut from each of the compacted samples and its dielectric properties were measured in a parallel plate dielectric rig (Figure 4). The electrodes of the dielectric rig are made of brass and can be changed to match the sample size. For this study, small electrodes of 1 cm in diameter were used due to the relatively small size of the subsamples, ~2 cm in diameter. To ensure proper coupling with the electrodes of the measurement cell and to maintain a parallel electric field normal to the faces of the subsample disc, each disc was polished on a diamond surface grinder to achieve parallel faces within a 30 μm tolerance. The thickness of a sample should not exceed one fifth of its diameter. The subsample discs were each placed in the parallel plate measurement cell, where they were measured with an impedance analyser. The following two methods for running this device were employed. Bare coupling was used to measure the effective conductivity of the compacted samples: the disc subsample is simply placed between the parallel plate electrodes so that they are in direct contact with the bare faces of the subsample. For shales, this method provides a good conductive coupling between the rock surface and the electrodes. Coupling with an insulating film was also used, to block the current flow and enhance the relative contribution of polarization effects so that the frequency dispersion of the real and imaginary parts of the relative dielectric permittivity can be determined accurately. Figure 4
Principal scheme of the parallel plate measurement setup, modified from Josh [18]. Twelve artificial shale samples were prepared via laboratory mechanical compaction. Dielectric analysis of the samples was conducted in two directions, normal (Figure 5(a)) and parallel to the bedding plane (Figure 5(b)), to investigate the dielectric anisotropy. Figure 5
Parallel plate measurement schematics. Black lining illustrates the bedding of the sample.
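As a sketch of how such parallel plate readings reduce to material properties — standard capacitor relations for the stated geometry, assuming the impedance analyser reports parallel capacitance C and conductance G; this is illustrative, not the authors' acquisition code:

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_properties(freq_hz, cap_f, cond_s, diameter_m, thickness_m):
    """Standard parallel-plate reduction (fringing fields neglected):
    real permittivity from capacitance, loss-factor and effective
    conductivity from conductance."""
    area = np.pi * (diameter_m / 2.0) ** 2
    eps_real = cap_f * thickness_m / (EPS0 * area)
    eps_imag = cond_s * thickness_m / (2.0 * np.pi * freq_hz * EPS0 * area)
    sigma = cond_s * thickness_m / area           # S/m
    return eps_real, eps_imag, sigma

# Made-up readings for a 1 cm electrode on a 2 mm thick disc
# (thickness kept under one fifth of the diameter, as in the text).
er, ei, s = plate_properties(1e6, cap_f=2e-10, cond_s=1e-6,
                             diameter_m=0.01, thickness_m=0.002)
print(f"eps' = {er:.0f}, eps'' = {ei:.2f}, sigma = {s:.2e} S/m")
```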
## 3. Results and Discussion
Dielectric properties are presented in Figures 6–13 for the different parameters, namely, the salinity of the pore fluid, its cation composition, the clay content, and the type of initial clay microstructure. Figure 6 illustrates the positive linear trends of real dielectric permittivity versus porosity for the samples prepared with brine and with fresh water, separately. The dielectric response at frequencies above 1 GHz is usually attributed to the dipole polarization of water molecules and might be used to estimate the amount of water in the pore space. The linear correlation weakens with decreasing frequency, from R² = 0.95 at 100 MHz to R² = 0.70 at 10 MHz, due to contributions from other polarization mechanisms occurring at lower frequencies (e.g., surface-charge polarization). Although it is believed that brine salinity has no effect on the dielectric response of saturated rock samples at high frequencies, our experimental results show clearly separated trends for the two groups of artificial shale samples prepared with brine and with fresh water. This difference is attributable to the changes occurring in the electric double layer of clays in the presence of electrolyte. In the samples saturated with highly concentrated brine, the positive ions of Na and K compensate the free charges on the surfaces of clay particles. Therefore, the diffuse layer of weakly bound water around the clay particles shrinks significantly [20, 21] and leaves more free water molecules that are easily polarizable within the sample. On the other hand, the samples prepared with fresh water retain thick hydrate envelopes around the clay particles, thereby hindering a large number of water dipoles from polarizing in the presence of an electric field. In our experimental results the described effect produces a one order of magnitude difference between the brine saturated and fresh water samples over a wide range of porosities at megahertz frequencies. It is also important to note that the concentration and type of active cations in the brine saturated samples seem to have a negligible effect on dielectric permittivity in the dipole polarization frequency range, as all the samples prepared with different concentrations and compositions of brine follow the same linear trends. Using the linear trends described above, it is possible to compare different samples at the same porosity. These trends are used to investigate the effects of clay content, salinity, and microstructure. Figure 6
Dielectric response of brine saturated and fresh water samples at 100, 30, and 10 MHz.Figure 7
Surface plot of conductivity, measured parallel (in green) and normal (in red) to the bedding, versus porosity and frequency in 6 brine saturated artificial shale samples.Figure 8
Dielectric anisotropy versus frequency in brine saturated and fresh water samples.Figure 9
Effect of clay content on frequency dependent conductivity in samples measured parallel and normal to the bedding plane.Figure 10
Effect of microstructure on conductivity of artificial shales measured parallel and normal to the bedding plane.Figure 11
Dielectric response of the three brine-saturated artificial shale samples with clay contents of 60, 75, and 100 per cent.Figure 12
Frequency dependent dielectric loss in artificial shales with varying clay content.Figure 13
Frequency dependent dielectric loss in brine saturated artificial shales. Figure 7 shows that, across the wide range of frequencies, the horizontal conductivity in different samples is almost always higher than the vertical one. We previously showed that clay and silt particles tend to orient during compaction normal to the direction of the applied compaction stress [16, 22]. Therefore, the interconnected pore network in artificial shale samples is also more oriented in the bedding direction and provides an effective pathway for charge carriers, whereas in the direction normal to bedding the conductive pathways exhibit greater tortuosity and obstruct the movement of charges. The conductivity anisotropy also increases with porosity reduction as the microstructure becomes more oriented in the horizontal direction. Both horizontal and vertical conductivities increase with increasing porosity. At high frequencies, the influence of fluid conductivity grows as the pore volume occupied by the water solution increases. The frequency dependent dielectric anisotropy in eight artificial shale samples is shown in Figure 8. The dashed line on the plot corresponds to an isotropic system, while points above and below this line correspond to an anisotropic system with polarization effects dominating in the horizontal and vertical directions, respectively. It is important to note that the fresh water samples are always better polarized in the horizontal direction and reside above the isotropy line, whereas the brine saturated samples are better polarized in the vertical direction at lower frequencies and exhibit an inversion of the anisotropy, with crossover points in the megahertz range. The anisotropy curves for all the samples exhibit peak values in the megahertz range, and their maxima and frequency distributions are determined by the type and concentration of salt ions in the saturating fluid. Thus, the samples prepared with fresh water or low concentration fluids show the highest anisotropy peaks, located at the lower end of the megahertz range; these are followed by three peaks at the same frequency of ~60 MHz, corresponding to the samples saturated with KCl brine. The lowest peak, belonging to the sample saturated with NaCl brine, occurs at frequencies above 100 MHz. The crossover points with the isotropy line in brine saturated samples are also distributed across the megahertz range, similarly to the peak values, but at lower frequencies. This anisotropic behaviour may be explained by electric double layer theory. Given that the clay particles in fresh water samples have thick hydrate envelopes of weakly bound water, the dipole polarization of water molecules is achieved more easily in the plane parallel to bedding, simply because the clay particles in all the samples are mostly oriented in the horizontal direction. Water molecules are oriented with their hydrogen atoms towards the surfaces of clay particles and are easily skewed sideways under the influence of an electric field parallel to the bedding, but resist the influence of a normal electric field. In the brine saturated samples this effect is less pronounced across the wide frequency range due to the significantly thinner diffuse part of the electric double layers where the above phenomenon occurs. Therefore, considering the distinctive features of dielectric anisotropy in artificial samples, one might infer that anisotropy analysis can help in understanding the composition and concentration of the saturating fluids in clay rocks. However, more data on artificial and natural rocks need to be analysed to confirm the discussed relationships.
Figure 9 shows that an increase of clay content at the same porosity results in greater conductivity in both the vertical and horizontal directions. This behaviour is caused by the surface conduction mechanisms characteristic of clay particles. It has been shown that the counter ions located in the Stern layer of clay particles are the dominant contributors to surface conduction, and that in brine saturated rocks with salinity above 1 mol/l the mobility of K and Na ions is about one-tenth of that in the free fluid and is independent of the salinity [23]. Therefore, in our experiments, replacing a fraction of the quartz with clay particles carrying much more uncompensated charge results in proportionally stronger surface conduction. Figure 10 shows that the conductivity of the samples prepared with a dispersed clay microstructure is always higher than that of the aggregated sample in both directions, independently of the silt content and porosity. This is due to the greater surface conduction in the Stern layer of the dispersed samples, where the clay particles are separated from each other and have a greater free surface than their associations in the aggregated sample. At frequencies in the range of 1 kHz to 100 MHz the vertical polarization dominates in the brine saturated samples, as shown in Figure 11. The main polarization mechanism in this frequency band is the Maxwell-Wagner polarization of counter ions. The movement of ions in the vertical direction is restricted due to the strong orientation of clay particles normal to the applied electric field and the force balance within their electric double layers. In this case each individual clay particle acts as a capacitor, and the ensemble of such particles gives rise to strong polarization effects exceeding those of the individual fluid and mineral phases [3]. In contrast, the cations can be easily drawn along the surfaces of clay platelets and through the less tortuous pore network along the bedding. Thus, in the presence of a low frequency electric field (below megahertz) in the direction normal to the bedding, hydrated counter ions are prone to polarization rather than to conduction, and vice versa for an electric field in the bedding direction. It is also important to note that the higher the clay content, the more pronounced the polarization effects are in both directions, due to the higher concentration and the better alignment of clay platelets at the same porosity level [16]. Figure 11 shows the rollover of real relative permittivity in the MHz range of frequencies for both the vertical and horizontal measurements. According to the Kramers–Kronig relations [24], these rollovers correspond to the peaks on the plots of imaginary relative permittivity in Figure 12. Peak values of the dielectric loss-factor in the vertical direction appear at approximately one order of magnitude lower frequencies and are ~20% higher than those measured in the horizontal direction. These effects must be taken into account when designing borehole heating antennas for effective hydrocarbon extraction in unconventional reservoirs.
Also, samples with higher clay content show higher values of the loss-factor due to the sharper rollover in the real relative permittivity, caused by the change in polarization mechanism from surface-charge to dipole polarization with increasing frequency. Figure 13 illustrates the dielectric loss-factor for the three samples saturated with brines of different cation composition and concentration. All the peaks for the measured samples are located in the MHz range. The distribution of the peaks over frequency depends on the salinity of the pore fluid and its cation composition. Hence, increasing the salinity from 10 to 34 g/l in the pure kaolinite samples saturated with NaCl shifts the peak frequency from 2 × 10⁶ to 3 × 10⁷ Hz. The sample saturated with the 75 g/l KCl solution exhibits a peak frequency of 4 × 10⁷ Hz, which highlights the effect of different salt ions on the dielectric loss-factor of clay rocks.
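Before the conclusions, one more sketch ties back to the porosity trends of Figure 6: fitting the permittivity–porosity line for one saturation group and evaluating it at a common porosity, which is how "comparing samples at the same porosity" works in practice. The arrays are invented stand-ins, not the study's data:

```python
import numpy as np
from scipy.stats import linregress

# Invented stand-ins: porosity (fraction) and real relative permittivity
# at one frequency for a brine saturated group of samples.
porosity = np.array([0.10, 0.14, 0.16, 0.18, 0.23, 0.26])
eps_real = np.array([41.0, 55.0, 60.0, 68.0, 83.0, 95.0])

fit = linregress(porosity, eps_real)
print(f"slope = {fit.slope:.1f}, R^2 = {fit.rvalue**2:.2f}")

# Comparing groups "at the same porosity" = evaluating each group's
# fitted trend at a common porosity value.
phi = 0.20
print(f"trend value at porosity {phi:.2f}: {fit.slope * phi + fit.intercept:.1f}")
```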
## 4. Conclusions
Artificial shales with simple mineral composition illustrate the broad frequency dispersion of dielectric effects. Variations in the salinity of the connate water, its cation composition, the clay content, and the microstructure of artificial shales significantly affect their complex dielectric permittivity and conductivity. These effects can be explained by Maxwell-Wagner polarization at frequencies below the megahertz range and by the changes occurring in the electric double layer of clay particles in the presence of electrolyte at higher frequencies. It is shown that at high frequencies (above 10 MHz) the real relative permittivity follows different linear trends with porosity in fresh water and brine saturated samples. The salinity and cation composition of the pore fluid seem to have a negligible effect on these high frequency dielectric trends. Formation of the anisotropic microstructure of artificial shales during mechanical compaction results in significant values of dielectric anisotropy, between 2 and 4. The magnitude and characteristic frequency of the peak values in the anisotropy curves, as well as the crossover with the isotropy line, depend on the salinity and cation composition of the saturating fluid and on the clay content of the samples. The absolute peak value of dielectric loss in shales and its characteristic frequency depend not only on the amount of connate fluid, but also on the cation composition of the saturating brine, its salinity, and the orientation of the applied electric field relative to the shale bedding. The peak values of the dielectric loss measured along and normal to bedding lie within the megahertz frequency range, with a significant separation of approximately one order of magnitude. The absolute values of these peaks are approximately 20 percent higher in the direction normal to the bedding. Our simplistic models of natural shales exhibit complex dielectric behaviour similar to that of real rocks. The theoretical modelling of the dielectric response is conducted in a companion paper, illustrating the use of artificial shales for the design and calibration of rock physics models.
---
*Source: 1019461-2017-12-20.xml* | 2017 |
# Increasing Incidence, but Lack of Seasonality, of Elevated TSH Levels, on Newborn Screening, in the North of England
**Authors:** Mark S. Pearce; Murthy Korada; Julie Day; Steve Turner; David Allison; Mohammed Kibirige; Tim D. Cheetham
**Journal:** Journal of Thyroid Research
(2010)
**Publisher:** SAGE-Hindawi Access to Research
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.4061/2010/101948
---
## Abstract
Previous studies of congenital hypothyroidism have suggested an increasing incidence and seasonal variation in incidence, which may suggest that nongenetic factors are involved in aetiology. This study describes the incidence of elevated thyroid stimulating hormone (TSH) values in newborns, a surrogate for congenital hypothyroidism, measured as part of the screening programme for congenital hypothyroidism, over an eleven-year period (1994–2005), and assesses whether seasonal variation exists. All infants born in the Northern Region of England are screened by measuring levels of circulating TSH using a blood spot assay. Data on all 213 cases born from 1994 to 2005 inclusive were available. Annual incidence increased significantly from 37 per 100,000 in 1994 to a peak of 92.8 per 100,000 in 2003. There was no evidence of seasonal variation in incidence. The reasons for the increasing incidence are unclear, but do not appear to involve increasing exposure to seasonally varying factors or changes in measurement methods.
---
## Body
## 1. Introduction
Congenital hypothyroidism (CHT) is the most common congenital endocrine disorder, with a world-wide incidence of around 1 in 3500 to 4000 live births. Reduced thyroid hormone production in babies with CHT has a major detrimental effect on central nervous system development and growth. Prompt treatment with thyroxine will prevent these problems arising in the majority of babies. Most babies with CHT have thyroid gland agenesis or dysgenesis with a poorly formed or absent gland. A minority (~20%) have a normally sited gland but an underlying single gene defect preventing the normal process of thyroid hormone production within the gland (dyshormonogenesis). Although germline mutations in thyroid transcription factors 1 and 2 (TTF-1 and TTF-2) and PAX-8 (paired box 8) have been identified as aetiological risk factors for dysgenesis or agenesis, they explain only a small percentage of cases (around 2%) [1]. It is important to highlight that iodine deficiency is a well recognised and important cause of neonatal thyroid dysfunction in some parts of the world. An increasing incidence of CHT has been suggested from analyses of data including that from New York State [2] and Mexico [3], although no similar increase was shown in a similar study of data from Quebec [4]. Previous studies in the United Kingdom have suggested that CHT is more prevalent in Asian sectors of the population and that prevalence has increased [5]. Studies also suggest that the prevalence of hypothyroidism in Scotland has increased [6], with a study from the same area of Scotland demonstrating an increased population prevalence of hypothyroidism in young people compared to previously published rates [7]. The incidence of CHT has also been suggested to vary seasonally in a number of studies in different geographical areas, including the West Midlands of England [5], Finland [8], Japan [9–11], and Australia [9]. However, seasonality has also not been observed in a number of other studies, including those in the North West of England [12], the Netherlands [13], Saudi Arabia [14], Canada [4], Norway, France, and Switzerland [9]. Should seasonal variation in CHT risk exist, this would suggest that an unknown environmental factor may be involved in the disease’s aetiology. Temporal trends in risk are unlikely to be explained by genetic factors, unless either population shifts result in germline mutations becoming more prominent in a particular geographical area, or environmental influences on germline mutations or epigenetic changes have increased in prevalence over the study period. Circulating thyroid stimulating hormone (TSH) levels are measured as part of screening programmes for CHT across many parts of Europe, Japan, and, increasingly, North America. In the UK, neonatal screening for CHT began in 1979 in Scotland and in 1981 in the remainder of the UK after a recommendation from the UK Department of Health [15]. This paper describes the incidence of elevated TSH levels in newborns in the North of England over an eleven-year period (1994–2005) and examines whether seasonal variation in incidence exists in this geographical area.
## 2. Methods
Around 35,000 infants in the Northern Region of England, comprising North East England (the area from Teesside extending north into Northumberland) and North Cumbria, are screened every year in a single centre by measuring blood spot TSH levels. Data on all cases, including dates of birth, were available from 1994 to 2005 inclusive.
### 2.1. TSH as a Surrogate for CHT
We opted to refer to “TSH” rather than “CHT” in this study for the following reasons.
(1) The extent to which cases of suspected CHT are investigated will vary from one unit to another [16, 17]. We felt that this was likely to be the case in our region of the UK as well.
(2) There is no definitive test or set of tests that can identify the underlying thyroid gland abnormality in this condition. Even the combination of isotope scanning and ultrasonography does not reveal an underlying diagnosis in all infants [16, 18].
(3) The sensitivity and specificity of tests such as isotope scanning are suboptimal, with potentially misleading information generated by factors such as early thyroxine therapy [18].
(4) Some studies have used thyroxine intervention as confirmation of underlying CHT, but the threshold for intervention will vary with time, from clinician to clinician, and from centre to centre [19]. Hence some babies with “raised” TSH values and thyroid hormone values within the laboratory normal range will be treated whilst others will not [5, 16].
(5) Ultimately, biochemistry is the most important parameter; a baby with a raised TSH but normal imaging will require thyroxine treatment, whilst a baby with normal biochemistry but abnormal imaging will not.
### 2.2. Sample Processing and Analysis
During the period of the study the screening centre moved between the Royal Victoria Infirmary (RVI), Newcastle, and the University Hospital of North Durham (UHND), Durham. The screening blood spot TSH method and the cut-off value for screening failure also changed during the study, from a manual radioimmunoassay method (1994–March 1998) to an ACS (Bayer) chemiluminometric assay (April 1998–February 2003) and then a DELFIA (Perkin Elmer) fluoroimmunometric assay (March 2003–present) (Table 1). Table 1
TSH newborn screening base, assay methodology, and cut-off values during the study period.
| | 1994–March 1998 | April 1998–February 2003 | March 2003–March 2005 | April 2005–present |
|---|---|---|---|---|
| Centre | Newcastle | Durham | Durham | Newcastle |
| Method | Radioimmunoassay (manual) | ACS-180 (Bayer) | DELFIA (Perkin Elmer) | DELFIA (Perkin Elmer) |
| TSH cut-off (mU/L) | 20 | 10 | 6 | 6 |

Interassay coefficients of variation (CV) for blood spot TSH assays were 16.5%, 9.4%, and 8.8% for the manual radioimmunoassay method at TSH values of 22.0, 38.6, and 74.1 mU/L, respectively. Interassay CVs were 7% and 6% for the ACS chemiluminometric assay at TSH values of 14 mU/L and 68 mU/L, respectively, and 11% and 12% for the DELFIA fluoroimmunometric assay at TSH values of 16 mU/L and 60 mU/L, respectively. Nine replicates over 9 analytical runs were used to calculate interassay precision for the ACS assay, and 42 replicates over 10 analytical runs for the DELFIA assay. To ensure that cut-off values were comparable across the different screening methods used, prior to each change in method blood spot samples received as part of the screening programme were analysed by both methods (radioimmunoassay v ACS-180, n = 2634; ACS-180 v DELFIA, n = 682). Revised cut-off values were established by comparing the results using scatter charts and least squares linear regression. There was a highly significant correlation between the two methods when the assay was changed from RIA to ACS-180 in 1998 (P<.001; r² = 0.94) and when it was changed from ACS-180 to DELFIA in 2003 (P<.001; r² = 0.99). 100% agreement for screening passes and failures was obtained using the revised cut-off values. Values identified as being greater than 20 mU/L by the radioimmunoassay method, 10 mU/L by the ACS method, and 6 mU/L by the DELFIA method were followed by analysis of a repeat blood spot from the screening card. They were deemed to be screening failures if the final blood spot value was again greater than 10 mU/L (ACS) or 6 mU/L (DELFIA). All infants where the paediatrician was subsequently notified of an increased value, and who hence were classed as neonatal screening test failures, were included in the analysis.
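A minimal sketch of the cut-off translation described above, assuming paired blood spot TSH values from the outgoing and incoming assays; the arrays are illustrative (the study used n = 2634 and n = 682 pairs), and only the regression mapping itself is taken from the text:

```python
import numpy as np
from scipy.stats import linregress

# Illustrative paired measurements of the same blood spots by the
# outgoing (old) and incoming (new) assay.
tsh_old = np.array([2.0, 4.0, 6.0, 9.0, 12.0, 20.0, 35.0, 60.0])
tsh_new = np.array([1.1, 2.2, 3.4, 5.2, 7.1, 11.8, 21.0, 36.5])

fit = linregress(tsh_old, tsh_new)
print(f"r^2 = {fit.rvalue**2:.2f}")

# Translate the outgoing assay's cut-off onto the incoming scale, then
# check that pass/fail status agrees for every paired sample.
old_cutoff = 10.0
new_cutoff = fit.slope * old_cutoff + fit.intercept
agree = np.all((tsh_old > old_cutoff) == (tsh_new > new_cutoff))
print(f"revised cut-off ~ {new_cutoff:.1f} mU/L; full agreement: {agree}")
```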
### 2.3. Statistical Analysis
The yearly incidence of elevated TSH values in newborns was calculated as the number of cases in each year per 100,000 live births born in the Northern Region. Temporal changes in incidence were assessed using Poisson regression. Seasonal variation in incidence was assessed using the Edwards test for seasonality [20] with an adjustment for variable month length. A P-value of less than .05 was considered statistically significant. All statistical analyses were performed using the statistical software package Stata, version 9.0 (StataCorp, Texas).Approvals for this study were obtained from the Newcastle and North Tyneside Local Research Ethics Committee and the Patient Information Advisory Group for England and Wales.
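The two analyses can be sketched in a few lines. This is a hedged reconstruction — statsmodels in Python rather than Stata, and a first-harmonic (cosinor-style) Poisson check in place of the exact Edwards statistic, without the month-length adjustment — using the counts reported below in Tables 2 and 3:

```python
import numpy as np
import statsmodels.api as sm

# Annual counts and incidences from Table 2; births are back-calculated
# from the reported incidences, so they are approximate denominators.
years = np.arange(1994, 2006)
cases = np.array([15, 11, 13, 5, 16, 20, 15, 22, 26, 28, 22, 20])
incid = np.array([37.12, 32.26, 38.61, 15.21, 49.99, 64.65,
                  50.63, 76.11, 89.02, 92.84, 70.93, 63.64])
births = cases / incid * 100_000

# Temporal trend: Poisson regression of counts on year, log(births) offset.
X = sm.add_constant(years - years.min())
trend = sm.GLM(cases, X, family=sm.families.Poisson(),
               offset=np.log(births)).fit()
print(trend.params[1], trend.pvalues[1])   # positive slope => rising incidence

# Seasonality: first-harmonic Poisson fit to monthly counts (Table 3).
monthly = np.array([14, 14, 14, 17, 25, 19, 16, 20, 23, 20, 16, 15])
angle = 2 * np.pi * (np.arange(12) + 0.5) / 12
Xs = sm.add_constant(np.column_stack([np.sin(angle), np.cos(angle)]))
season = sm.GLM(monthly, Xs, family=sm.families.Poisson()).fit()
print(season.pvalues[1:])                  # non-significance => no seasonality
```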
## 3. Results
Between 1994 and 2005 inclusive, there were 213 cases of elevated TSH values in newborns in the Northern Region of England. The ratio of female to male cases was 1.3:1. Over the study period, the average annual incidence was 59.94 per 100,000 live births. The annual number of cases of TSH elevation in newborns in the Northern Region and the annual incidence per 100,000 live births are shown in Table 2. Incidence increased significantly over the study period (P<.0001), from 37 per 100,000 in 1994 to a peak of 92.8 per 100,000 in 2003. Table 2
Annual number of cases and incidence of elevated TSH levels on newborn screening in the Northern Region of England, 1994–2005.
| Year | Number of cases | Incidence per 100,000 live births |
|---|---|---|
| 1994 | 15 | 37.12 |
| 1995 | 11 | 32.26 |
| 1996 | 13 | 38.61 |
| 1997 | 5 | 15.21 |
| 1998 | 16 | 49.99 |
| 1999 | 20 | 64.65 |
| 2000 | 15 | 50.63 |
| 2001 | 22 | 76.11 |
| 2002 | 26 | 89.02 |
| 2003 | 28 | 92.84 |
| 2004 | 22 | 70.93 |
| 2005 | 20 | 63.64 |

The number of cases by month of birth is reported in Table 3. Despite peak numbers of cases in May and in the August–October period, there was no significant evidence of seasonal variation in the number of cases (P=.16). Nor was there evidence of seasonal variation in the sex-specific seasonality analyses (P=.17 for females and P=.59 for males). Table 3
Number of cases of elevated TSH levels on newborn screening by month of birth in the Northern Region of England, 1994–2005.
| Month | Number of cases |
|---|---|
| January | 14 |
| February | 14 |
| March | 14 |
| April | 17 |
| May | 25 |
| June | 19 |
| July | 16 |
| August | 20 |
| September | 23 |
| October | 20 |
| November | 16 |
| December | 15 |
## 4. Discussion
Despite advances made in identifying genetic risk markers for CHT, there remains a great deal to be explained in terms of the aetiology of the disease [21]. This study showed an increasing incidence of elevated TSH values in newborns in the Northern Region of England between 1994 and 2005, but did not find evidence of seasonal variation in the number of cases. Many other studies have depended primarily on biochemistry, including TSH, rather than on other investigations when making a diagnosis of CHT [22]. Given the high risk of subclinical hypothyroidism and morphological abnormalities in “false-positive” patients [23, 24], we suspect that our figures for raised TSH will be closely linked to the number of actual cases of CHT. Unfortunately, we do not have detailed information on outcome in these children because they were managed in more than 10 different hospitals by an even greater number of clinicians. More detailed data were therefore not available to allow us to assess changes in permanent CHT or to analyse the data with respect to different aetiologies. An increasing temporal trend in the incidence of CHT has recently been reported in New York State [2], with a 138% increase between 1978 and 2005. Excluding New York State, nationwide United States data suggest a 73% increase between 1987 and 2002 [22]. We observed a 151% increase in raised TSH values between 1994 and 2003. However, this is in contrast to research conducted in Quebec, Canada, where no changes in incidence were seen over a 16-year period [4]. A real temporal trend, as opposed to changes in diagnostic procedures, which can also lead to increases in incidence [25], suggests either an increasing exposure to an environmental risk factor or a changing distribution of risk factors among the population. The incidence rates in this study dropped slightly after 2003, and it remains to be seen whether this is a true decline or simply random variation. The division between “screen positive” and “screen negative” in a screening programme such as this is not linked to robust outcome measures, and the screening threshold and management of cases with mild thyroid dysfunction vary between regions. We were keen to establish that the change in incidence was not simply a reflection of a change in assay methodology or laboratory practice (as opposed to seasonality, where we would not expect an assay change to have the same impact). All births in the study region were screened in a single centre at any one time. The physical location of this centre changed first in April 1998, from Newcastle to Durham, and again in April 2005, from Durham to Newcastle. The move in 1998 also corresponded to a change in the laboratory assay, with a further change in assay in 2003. The different assay methods were rigorously compared to ensure that there would be no difference in the number of cases identified as a result of the change. To this end a large number of samples were analysed, and there was no difference in screening passes or failures, with 100% concordance. It is of note that the increasing incidence in raised TSH values was most striking during the period when TSH was measured by the ACS method, although we suspect that this represents a true increase because the rise had commenced prior to the change in methodology and continued after the change to the DELFIA assay.
Many other studies of CHT or elevated TSH levels have encountered similar issues regarding data interpretation as assay methodology has changed [16]. In terms of changes in the population structure, two previous studies from England have reported an increased incidence of CHT among Asians [5, 12]. However, the Northern Region of England has a population of 3.1 million, of which less than 2% are from ethnic minorities, with low levels of migration [26]. Therefore, while the data are not available to assess this directly, it is unlikely that the increased incidence is related to changes in population structure and, with it, changes in genetic risk factor profiles. Exposure to environmental factors such as chemicals, or increasing levels of other risk factors such as maternal iodine deficiency, high prenatal iodine exposures [27], or low birth weight [28], may be suggested by an increasing temporal trend, whereas infections or seasonally varying dietary factors or chemical exposures, such as dioxins and polychlorinated biphenyls [29], may be suggested by evidence of seasonal variation in the number of cases. The potential role of a suboptimal maternal iodine status in some parts of the North of England should be highlighted and clearly warrants further study [30]. We found little evidence of seasonal variation of elevated TSH levels in newborns, in line with a number of previous reports of no seasonality [4, 9, 12–14]. In contrast, a number of previous studies have reported seasonal variation in a number of different geographical areas [5, 8–11], including a different part of England [5]. Gu et al. also found sex-specific seasonal patterns of incidence in Japan [31]. However, sex-specific analyses also showed little evidence of seasonal variation in this study. The issue of statistical power should be considered when interpreting our results, and it is possible that with a larger sample a seasonal effect may have been found. It is also possible that differences in findings may reflect differences in the underlying populations. Our sex ratio (F:M) of 1.3:1 was significantly less than the sex ratio of 2.8:1 previously shown in a study from Scotland that used thyroxine prescription data as a surrogate for hypothyroidism in children and young people [7], and less than the ratio of 2.1:1 reported for cases of “true” CHT from the same country [16]. This underlines the importance of taking factors such as iodine status into consideration in future work. In conclusion, we have observed a significantly increasing trend in the incidence of elevated TSH levels in newborns, a surrogate for increasing levels of CHT, since 1994. Whilst the reasons for the increase are unclear, it would appear from this analysis that seasonally varying factors are not involved. It is also unlikely to be due to a change in the population distribution of genetic risk factors, although environmental determinants of genetic mutations and epigenetic factors cannot be ruled out. Further research is required into the potential environmental determinants of increased CHT risk.
---
# Increasing Incidence, but Lack of Seasonality, of Elevated TSH Levels, on Newborn Screening, in the North of England

**Authors:** Mark S. Pearce; Murthy Korada; Julie Day; Steve Turner; David Allison; Mohammed Kibirige; Tim D. Cheetham

**Journal:** Journal of Thyroid Research

(2010)

**Publisher:** SAGE-Hindawi Access to Research

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.4061/2010/101948

---
## Abstract
Previous studies of congenital hypothyroidism have suggested an increasing incidence and seasonal variation in incidence, which may suggest that nongenetic factors are involved in aetiology. This study describes the incidence of elevated thyroid stimulating hormone (TSH) values in newborns, a surrogate for congenital hypothyroidism, measured as part of the screening programme for congenital hypothyroidism over a twelve-year period (1994–2005), and assesses whether seasonal variation exists. All infants born in the Northern Region of England are screened by measuring levels of circulating TSH using a blood spot assay. Data on all 213 cases born from 1994 to 2005 inclusive were available. Annual incidence increased significantly, from 37.12 per 100,000 in 1994 to a peak of 92.84 per 100,000 in 2003. There was no evidence of seasonal variation in incidence. The reasons for the increasing incidence are unclear, but do not appear to involve increasing exposure to seasonally varying factors or changes in measurement methods.
---
## Body
## 1. Introduction
Congenital hypothyroidism (CHT) is the most common congenital endocrine disorder, with a worldwide incidence of around 1 in 3500 to 4000 live births. Reduced thyroid hormone production in babies with CHT has a major detrimental effect on central nervous system development and growth. Prompt treatment with thyroxine will prevent these problems arising in the majority of babies.

Most babies with CHT have thyroid gland agenesis or dysgenesis with a poorly formed or absent gland. A minority (~20%) have a normally sited gland but an underlying single gene defect preventing the normal process of thyroid hormone production within the gland (dyshormonogenesis). Although germline mutations in thyroid transcription factors 1 and 2 (TTF-1 and TTF-2) and PAX-8 (paired box transcription factor 8) have been identified as aetiological risk factors for dysgenesis or agenesis, they explain only a small percentage of cases (around 2%) [1]. It is important to highlight the fact that iodine deficiency is a well recognised and important cause of neonatal thyroid dysfunction in some parts of the world.

An increasing incidence of CHT has been suggested from analyses of data including those from New York State [2] and Mexico [3], although no similar increase was shown in a comparable study of data from Quebec [4]. Previous studies in the United Kingdom have suggested that CHT is more prevalent in Asian sectors of the population and that prevalence has increased [5]. Studies also suggest that the prevalence of hypothyroidism in Scotland has increased [6], with a study from the same area of Scotland demonstrating an increased population prevalence of hypothyroidism in young people compared to previously published rates [7].

The incidence of CHT has also been suggested to vary seasonally in a number of studies in different geographical areas, including the West Midlands of England [5], Finland [8], Japan [9–11], and Australia [9]. However, seasonality has also not been observed in a number of other studies, including those in the North West of England [12], the Netherlands [13], Saudi Arabia [14], Canada [4], Norway, France, and Switzerland [9].

Should seasonal variation in CHT risk exist, this would suggest that an unknown environmental factor may be involved in the disease's aetiology. Temporal trends in risk are usually unlikely to involve genetic factors, unless either population shifts result in germline mutations being more prominent in a particular geographical area or environmental influences on germline mutations or epigenetic changes have increased in prevalence over the study period.

Circulating thyroid stimulating hormone (TSH) levels are measured as part of screening programmes for CHT across many parts of Europe and Japan, and increasingly in North America. In the UK, neonatal screening for CHT began in 1979 in Scotland and in 1981 in the remainder of the UK, after a recommendation from the UK Department of Health [15].

This paper describes the incidence of elevated TSH levels in newborns in the North of England over a twelve-year period (1994–2005) and examines whether seasonal variation in incidence exists in this geographical area.
## 2. Methods
Around 35,000 infants in the Northern Region of England, comprising North East England (the area from Teesside extending north into Northumberland) and North Cumbria, are screened every year in a single centre by measuring blood spot TSH levels. Data on all cases, including dates of birth, were available from 1994 to 2005 inclusive.
### 2.1. TSH as a Surrogate for CHT
We opted to refer to “TSH” rather than “CHT” in this study for the following reasons.

(1) The extent to which cases of suspected CHT are investigated will vary from one unit to another [16, 17]. We felt that this was likely to be the case in our region of the UK as well.

(2) There is no definitive test or tests that can identify the underlying thyroid gland abnormality in this condition. Even the combination of isotope scanning and ultrasonography does not reveal an underlying diagnosis in all infants [16, 18].

(3) The sensitivity and specificity of tests such as isotope scanning are suboptimal, with potentially misleading information generated by factors such as early thyroxine therapy [18].

(4) Some studies have used thyroxine intervention as confirmation of underlying CHT, but the threshold for intervention will vary with time and from clinician to clinician and centre to centre [19]. Hence some babies with “raised” TSH values and thyroid hormone values within the laboratory normal range will be treated whilst others will not [5, 16].

(5) Ultimately, biochemistry is the most important parameter; a baby with a raised TSH but normal imaging will require thyroxine treatment, whilst a baby with normal biochemistry but abnormal imaging will not.
### 2.2. Sample Processing and Analysis
During the period of the study the screening centre moved between the Royal Victoria Infirmary (RVI), Newcastle, and the University Hospital of North Durham (UHND), Durham. The screening blood spot TSH method and cut-off value for screening failure also changed during the study, from a manual radioimmunoassay method (1994–March 1998) to an ACS (Bayer) chemiluminometric assay (April 1998–February 2003) and then a DELFIA (Perkin Elmer) fluoroimmunometric assay (March 2003–present) (Table 1).

Table 1. TSH newborn screening base, assay methodology, and cut-off values during the study period.

| | 1994–March 1998 | April 1998–February 2003 | March 2003–March 2005 | April 2005–present |
|---|---|---|---|---|
| Centre | Newcastle | Durham | Durham | Newcastle |
| Method | Radioimmunoassay (manual) | ACS-180 (Bayer) | DELFIA (Perkin Elmer) | DELFIA (Perkin Elmer) |
| TSH cut-off (mU/L) | 20 | 10 | 6 | 6 |

Interassay coefficients of variation (CV) for blood spot TSH assays were 16.5%, 9.4%, and 8.8% for the manual radioimmunoassay method at TSH values of 22.0, 38.6, and 74.1 mU/L, respectively. Interassay CVs were 7% and 6% for the ACS chemiluminometric assay at TSH values of 14 mU/L and 68 mU/L, respectively, and 11% and 12% for the DELFIA fluoroimmunometric assay at TSH values of 16 mU/L and 60 mU/L, respectively. Nine replicates over 9 analytical runs were used to calculate interassay precision for the ACS assay and 42 replicates over 10 analytical runs for the DELFIA assay. To ensure that cut-off values were comparable across the different screening methods used, prior to a change in method blood spot samples received as part of the screening programme were analysed by both methods (Radioimmunoassay v ACS-180, n=2634; ACS-180 v DELFIA, n=682). Revised cut-off values were established by comparing the results using scatter charts and least squares linear regression. There was a highly significant correlation between the two methods when the assay was changed from RIA to ACS-180 in 1998 (P<.001; r2=0.94) and when it was changed from ACS-180 to DELFIA in 2003 (P<.001; r2=0.99). 100% agreement for screening passes and failures was obtained using the revised cut-off values.

Values identified as being greater than 20 mU/L by the radioimmunoassay method, 10 mU/L by the ACS method, and greater than 6 mU/L by the DELFIA method were followed by analysis of a repeat blood spot from the screening card. They were deemed to be screening failures if the final blood spot value was again greater than 10 mU/L (ACS) or 6 mU/L (DELFIA). All infants where the paediatrician was subsequently notified of an increased value, and hence were classed as neonatal screening test failures, were included in the analysis.
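To make the cut-off conversion concrete, the sketch below shows how a revised cut-off could be derived from paired blood spot measurements by least squares regression, mirroring the comparison described above. It is a minimal illustration with simulated placeholder values, not the study's data, and assumes numpy and scipy are available.

```python
# Illustrative only: deriving a revised screening cut-off when changing assays.
# The paired values below are simulated placeholders, not the study's data.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
old_assay = rng.uniform(2, 40, size=200)                       # blood spot TSH, old method (mU/L)
new_assay = 0.55 * old_assay + 0.3 + rng.normal(0, 0.8, 200)   # same spots, new method

fit = linregress(old_assay, new_assay)
print(f"slope={fit.slope:.2f}, intercept={fit.intercept:.2f}, r^2={fit.rvalue**2:.2f}")

old_cutoff = 10.0                                   # e.g., the ACS cut-off
revised_cutoff = fit.slope * old_cutoff + fit.intercept
print(f"revised cut-off on new assay: {revised_cutoff:.1f} mU/L")
```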
### 2.3. Statistical Analysis
The yearly incidence of elevated TSH values in newborns was calculated as the number of cases in each year per 100,000 live births born in the Northern Region. Temporal changes in incidence were assessed using Poisson regression. Seasonal variation in incidence was assessed using the Edwards test for seasonality [20] with an adjustment for variable month length. A P-value of less than .05 was considered statistically significant. All statistical analyses were performed using the statistical software package Stata, version 9.0 (StataCorp, Texas).

Approvals for this study were obtained from the Newcastle and North Tyneside Local Research Ethics Committee and the Patient Information Advisory Group for England and Wales.
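The published analysis was run in Stata. As a rough cross-check only, the following sketch fits the same kind of Poisson trend model in Python with statsmodels, using the annual counts from Table 2 and live births back-calculated from the reported incidence; the back-calculated denominators are an assumption, so the output is illustrative rather than a reproduction of the paper's results.

```python
# A minimal sketch of the Poisson trend test, assuming annual live births can be
# approximated from the published incidence (births ≈ cases / incidence * 100,000).
import numpy as np
import statsmodels.api as sm

years = np.arange(1994, 2006)
cases = np.array([15, 11, 13, 5, 16, 20, 15, 22, 26, 28, 22, 20])
incid = np.array([37.12, 32.26, 38.61, 15.21, 49.99, 64.65,
                  50.63, 76.11, 89.02, 92.84, 70.93, 63.64])
births = cases / incid * 100_000            # approximate denominators

X = sm.add_constant(years - years.min())    # intercept + linear year effect
model = sm.GLM(cases, X, family=sm.families.Poisson(), offset=np.log(births))
res = model.fit()
print(res.summary())
print("annual rate ratio:", np.exp(res.params[1]))   # >1 indicates a rising trend
```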
## 3. Results
Between 1994 and 2005 inclusive, there were 213 cases of elevated TSH values in newborns in the Northern Region of England. The ratio of female to male cases was 1.3:1. Over the study period, the average annual incidence was 59.94 per 100,000 live births. The annual number of cases of TSH elevation in newborns in the Northern Region and the annual incidence per 100,000 live births are shown in Table 2. Incidence increased significantly over the study period (P<.0001), from 37.12 per 100,000 in 1994 to a peak of 92.84 per 100,000 in 2003.

Table 2. Annual number of cases and incidence of elevated TSH levels on newborn screening in the Northern Region of England, 1994–2005.
| Year | Number of cases | Incidence per 100,000 live births |
|---|---|---|
| 1994 | 15 | 37.12 |
| 1995 | 11 | 32.26 |
| 1996 | 13 | 38.61 |
| 1997 | 5 | 15.21 |
| 1998 | 16 | 49.99 |
| 1999 | 20 | 64.65 |
| 2000 | 15 | 50.63 |
| 2001 | 22 | 76.11 |
| 2002 | 26 | 89.02 |
| 2003 | 28 | 92.84 |
| 2004 | 22 | 70.93 |
| 2005 | 20 | 63.64 |

The number of cases by month of birth is reported in Table 3. Despite peak numbers of cases in May and in the August–October period, there was no significant evidence of seasonal variation in the number of cases (P=.16). Nor was there evidence of seasonal variation in sex-specific seasonality analyses (P=.17 for females and P=.59 for males).

Table 3. Number of cases of elevated TSH levels on newborn screening by month of birth in the Northern Region of England, 1994–2005.
| Month | Number of cases |
|---|---|
| January | 14 |
| February | 14 |
| March | 14 |
| April | 17 |
| May | 25 |
| June | 19 |
| July | 16 |
| August | 20 |
| September | 23 |
| October | 20 |
| November | 16 |
| December | 15 |
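For readers who want to probe the monthly counts themselves, the sketch below applies a simple chi-square goodness-of-fit test against expected counts proportional to month length. This is a cruder substitute for the Edwards test used in the paper, not the same procedure, and assumes scipy is available.

```python
# A crude seasonality check on the Table 3 counts: chi-square goodness of fit
# against expectations proportional to month length (a stand-in for, not a
# reimplementation of, the Edwards test cited in the Methods).
import numpy as np
from scipy.stats import chisquare

cases = np.array([14, 14, 14, 17, 25, 19, 16, 20, 23, 20, 16, 15])
month_days = np.array([31, 28.25, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])

expected = cases.sum() * month_days / month_days.sum()
stat, p = chisquare(cases, f_exp=expected)
print(f"chi2={stat:.2f}, p={p:.3f}")   # p > .05 -> no clear monthly variation
```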
## 4. Discussion
Despite advances made in identifying genetic risk markers for CHT, there remains a great deal to be explained in terms of the aetiology of the disease [21]. This study showed an increasing incidence of elevated TSH values in newborns in the Northern Region of England between 1994 and 2005, but did not find evidence of seasonal variation in the number of cases. Many other studies have depended primarily on biochemistry, including TSH, rather than other investigations when making a diagnosis of CHT [22]. Given the high risk of subclinical hypothyroidism and morphological abnormalities in “false-positive” patients [23, 24], we suspect that our figures for raised TSH will be closely linked to the number of actual cases of CHT. Unfortunately, we do not have detailed information on outcome in these children because they were managed in more than 10 different hospitals by an even greater number of clinicians. More detailed data were therefore not available to allow us to assess changes in permanent CHT or to analyse the data with respect to different aetiologies.

An increasing temporal trend in incidence of CHT has recently been reported in New York State [2], with a 138% increase between 1978 and 2005. Excluding New York State, nationwide United States data suggest a 73% increase between 1987 and 2002 [22]. We observed a 151% increase in raised TSH values between 1994 and 2003. However, this is in contrast to research conducted in Quebec, Canada, where no changes in incidence were seen over a 16-year period [4]. A real temporal trend, aside from changes in diagnostic procedures which can lead to increases in incidence [25], suggests either an increasing exposure to an environmental risk factor or a changing distribution of risk factors among the population. The incidence rates in this study dropped slightly after 2003, and it remains to be seen whether this is a true decline or simply random variation.

The division between “screen positive” and “screen negative” in a screening programme such as this is not linked to robust outcome measures, and the screening threshold and management of cases with mild thyroid dysfunction vary between regions. We were keen to establish that the change in incidence was not simply a reflection of a change in assay methodology or laboratory practice (as opposed to seasonality, where we would not expect assay change to have the same impact). All births in the study region were screened in a single centre at any one time. The physical location of this centre changed first in April 1998 from Newcastle to Durham and again in April 2005 from Durham to Newcastle. The move in 1998 also corresponded to a change in the laboratory assay, with a further change in assay in 2003. The different assay methods were rigorously compared to ensure that there would be no difference in the number of cases identified as a result of the change. To this end, a large number of samples were analysed and there was no difference in screening passes or failures, with 100% concordance. It is of note that the increasing incidence in raised TSH values was most striking during the period when TSH was measured only by the ACS method, although we suspect that this represents a true increase because the rise had commenced prior to the change in methodology and continued after this changed to the DELFIA assay.
Many other studies of CHT or elevated TSH levels have encountered similar issues regarding data interpretation as assay methodology has changed [16].

In terms of changes in the population structure, two previous studies from England have reported an increased incidence of CHT among Asians [5, 12]. However, the Northern Region of England has a population of 3.1 million, of which less than 2% are from ethnic minorities, with low levels of migration [26]. Therefore, while the data are not available to assess this directly, it is unlikely that the increased incidence is related to changes in population structure and, with it, changes in genetic risk factor profiles. Exposure to environmental factors such as chemicals, or increasing levels of other risk factors such as maternal iodine deficiency, high prenatal iodine exposures [27], or low birth weight [28], may be suggested by an increasing temporal trend, whereas infections or seasonally varying dietary factors or chemical exposures, such as dioxin and polychlorinated biphenyls [29], may be suggested by evidence of seasonal variation in the number of cases. The potential role of a suboptimal maternal iodine status in some parts of the North of England should be highlighted and clearly warrants further study [30]. We found little evidence of seasonal variation of elevated TSH levels in newborns, in line with a number of previous reports of no seasonality [4, 9, 12–14]. In contrast, a number of previous studies have reported seasonal variation in a number of different geographical areas [5, 8–11], including a different part of England [5]. Gu et al. also found sex-specific seasonal patterns of incidence in Japan [31]. However, sex-specific analyses also showed little evidence of seasonal variation in this study. The issue of statistical power should be considered when interpreting our results, and it is possible that with a larger sample a seasonal effect may have been found. It is also possible that differences in findings may reflect differences in the underlying populations. Our sex ratio (F:M) of 1.3:1 was significantly less than the sex ratio of 2.8:1 previously shown in a study from Scotland that used thyroxine prescription data as a surrogate for hypothyroidism in children and young people [7], and less than the ratio of 2.1:1 reported for cases of “true” CHT from the same country [16]. This underlines the importance of taking factors such as iodine status into consideration in future work.

In conclusion, we have observed a significant increasing trend in the incidence of elevated TSH levels in newborns, a surrogate for increasing levels of CHT, since 1994. Whilst the reasons for the increase are unclear, it would appear from this analysis that seasonally varying factors are not involved. It is also unlikely to be due to a change in the population distribution of genetic risk factors, although environmental determinants of genetic mutations and epigenetic factors cannot be ruled out. Further research is required into the potential environmental determinants of increased CHT risk.
---
*Source: 101948-2010-01-28.xml*
# Similarity Measurement and Classification of English Characters Based on Language Features
**Authors:** Linna Miao; Zhixin Fang; Junping Zhang
**Journal:** Mobile Information Systems
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1019508
---
## Abstract
English is now widely used around the world as an international language. As a symbol of the development of human civilization, English characters provide an important medium and tool for mankind. In the current information age, English vocabulary is increasingly quantified and is found almost everywhere. Against this background of large-scale quantification of English words and of the relationships between them, this paper carries out similarity measurement, analysis, and calculation of English words, together with classification based on vocabulary measurement, by integrating the characteristics of the language. The experimental results are as follows: (1) the development of English words is analysed, the research direction of the experiment is determined, the concept of English character features is proposed, and similarity calculation methods are selected according to the different features, in order to simplify the complex and difficult-to-understand semantic relationships between English words; (2) text features are extracted through similarity feature selection on language and text, and the quality of the extracted features indirectly affects the effectiveness of classification. Similarity word embedding vectors are used to map English words into a vector space for analysis and comparison, to calculate the distance between the similarity numerical variables of English words and their similarity coefficients, and thereby to evaluate the similarity between them; the included angle cosine method and the correlation coefficient method are the two main methods for calculating the similarity coefficient.
---
## Body
## 1. Introduction
Whispering is a natural way of speaking. Although its perceptual salience is reduced, it still contains information about what is expected (i.e., comprehensibility) and about the identity and gender of the speaker. However, considering the acoustic differences between whispered speech and normal voiced speech, speech applications trained on the latter but tested with the former show unacceptable performance levels. In the automatic speaker verification task, previous studies have shown that (i) traditional features (e.g., the Mel frequency cepstrum coefficient, MFCC) cannot transmit enough speaker discrimination clues across the two utterance efforts and (ii) multiconditional training often reduces performance on normal speech while improving whisper performance. In this paper, we aim to solve these two shortcomings by proposing three innovative features, which can provide reliable results for normal speech and whispered speech when fused at the score level. Overall, the relative improvement rates of the whisper group and the normal group were 66% and 63%, respectively [1]. Although the accuracy of feature measurement has so far largely been attributed to the changing external environment, little attention has been paid to the consequences of this fact in pattern recognition tasks. In this paper, we explicitly consider the uncertainty of feature measurement and illustrate how to improve diversified classification rules and research methods to compensate for the impact of uncertainty. This experimental method can be used effectively in various multistyle scenes, where the feature vectors derived from different scenes are merged. If the uncertainty of the noise generated by each feature stream can be estimated, this kind of development will achieve high efficiency and adapt to various pattern fusion rules. The study further shows that, under some assumptions, multimodal fusion methods that depend on stream weights can be generated naturally from our scheme; this relationship offers a helpful perspective on the use of uncertainty compensation methods and reveals how to apply those views to audio-visual intelligent induction. In the same setting, an emerging technique is developed and proposed that works within this framework for feature extraction and for evaluating the variability of human perception, and that also studies how to calculate effectively the enhanced audio features and their uncertainty estimates. The effectiveness of our multimodal integration method on an audio-visual database is proved [2]. It is a very difficult and meaningful challenge to identify and classify complex human actions and behaviors from animated videos. The article uses Indian sign language (ISL) video to explore this kind of problem. A new segmentation algorithm is proposed, based on features discretized through the scaling and translation of a basic wavelet. The fused features form a two-dimensional point cloud that shows certain characteristics during uninterrupted animation playback. Feature extraction of the symbols in animation playback is carried out on each single classifier to check the feasibility of the designed feature extraction framework. In the experiments, we can see that the feature proportions of some binary patterns represent the value of symbol recognition data better than other state-of-the-art features. The specific reason is that the designed feature model combines global features with local features.
The obtained and classified characteristics are transmitted remotely to the network database, where they correspond to their own nominal words, and the accuracy and correctness of the recognition marks are tested. Through the largest training example, an artificial neural network classifier with a recognition rate of 92.79% is obtained, which is much higher than existing artificial neural network classifiers on sign language and ISL data sets with other features [3]. Another line of work studies the temporal retrieval of activities in videos through sentence queries. Given a sentence query describing an activity, temporal moment retrieval aims to locate the time period in the video that best matches the text query. This is a common and challenging task because it requires understanding both the video and the language. Existing studies mainly use coarse frame-level features as the visual representation, blurring specific details in the video (for example, the required objects “girl” and “cup” and the action “dumping”) that may provide key clues for localizing the required moment; a new spatial and linguistic time tensor fusion (SLTF) method has been proposed to solve these problems [4]. A further study investigates the production and perception of English vowels by Korean learners of English in two English learning sessions about one year apart. A preliminary experiment shows that Korean adults use two different Korean vowels to classify some contrasting English vowels, while others show classification overlap, which means that it is difficult for Korean English learners to distinguish these vowels. In two subsequent experiments, native Korean (NK) adults and children living in North America for different periods of time (3 years vs. 5 years; 4 groups, 18 in each group) were compared with age-matched native English (NE) speakers. In Experiment 2, NK children identified English vowels more accurately than NK adults but less accurately than NE children. In Experiment 3, a picture naming task was used to elicit English words containing the target vowels. Some vowels produced by NK children were easier to identify than those produced by NK adults, and acoustic analysis shows that the vowel contrasts of NK children are significantly greater than those of NK adults [5]. A size- and color-invariant character recognition system based on a feedforward neural network has also been proposed. That feedforward network has two layers, an input layer and an output layer, and the whole recognition process is divided into four basic steps: preprocessing, normalization, network establishment, and recognition. Preprocessing includes digitization, noise removal, and boundary detection. After boundary detection, the input character matrix is normalized to a 12 × 8 matrix for size-invariant recognition and fed into a network composed of 96 input and 36 output neurons. The network is then trained in a supervised way with the proposed training algorithm, established by adjusting the weights, and finally tested by averaging more than 20 samples per character. By considering the similarity measure between classes, it gives 99.99% accuracy for digits (0–9), 98% accuracy for letters (a–z), and more than 94% accuracy for alphanumeric characters [6]. Using the perceptual assimilation model (PAM) of Best (1995), the identification and discrimination of Cantonese tones, together with tactile, olfactory, auditory, and visual perception, have been studied for language backgrounds including Thai and English [7].
This paper identifies six social science research methods that help to describe the social and cultural significance of nanotechnology: web-based questionnaire surveys, episode experiments, network link analysis, recommendation systems, quantitative content analysis, and qualitative text analysis. Data from a range of sources are used to illustrate how these methods describe the knowledge content and institutional structure of the emerging nanotechnology culture. These methods will make it possible to test hypotheses in the future; for example, nanotechnology has two competing definitions, namely science and technology and science fiction, which affect public cognition through different channels and directions [8]. In the biomedical field, the identification and standardization of medical case literature is an important step in biomedical text extraction. A gene symbol recognition system has been described that obtains special text content from biomedical materials and standardizes it. The system comprises gene symbol recognition, gene text content mapping, gene text standardization, and text content filtering. Gene symbol recognition is based on gene symbol matching and monitoring, and it uses a large number of labeling methods to achieve the recognition of gene symbols. In the gene text content mapping stage, the data set connection is established in the system context around the principles of exact matching and priority matching [9]. If relevant problem-specific knowledge is lacking, cross-validation can be used to select the classification method empirically. We test this idea here to illustrate what cross-validation does and does not solve in the selection problem. As experience shows, cross-validation may bring higher average performance than the application of any single classification strategy and can also reduce the risk of poor performance. On the other hand, compared with simpler strategies, cross-validation is more or less biased. The correct application of cross-validation ultimately depends on prior knowledge; in fact, cross-validation may be seen as a way of applying information about the applicability of alternative classification strategies [10]. A new intelligent fault diagnosis method for rotating machinery has been proposed based on the wavelet packet transform (WPT), empirical mode decomposition (EMD), dimensionless parameters, a distance evaluation technique, and a radial basis function (RBF) network. The experimental results show that the method combining WPT, EMD, the distance evaluation technique, and the RBF network can accurately extract fault information and select sensitive features so as to diagnose different bearing fault types correctly. The method has been applied to the diagnosis of slight rub-impact faults in heavy oil catalytic cracking units, and the actual results show that it can be applied effectively to the fault diagnosis of rotating machinery [11]. Decision tree classification provides a fast and effective way to classify data sets. There are many algorithms for optimizing the structure of a decision tree, although these methods are vulnerable to changes in the training data set. Such a method has been tested with two different data sets, with results equivalent to or better than other classification methods.
This last discussion demonstrates the utility of decision trees relative to other algorithms and alternative methods (such as neural networks), especially when considering a large number of variables [12]. An objective classification method for weather situations over Europe and the northeast Atlantic has been established, in which the winter mean air pressure of each mser40 field is calculated. Then, following the original concept of Hess and Brezovsky, a daily catalogue of the target GWL is constructed by using the pattern correlation of these composite fields, and some filtering methods are used to smooth the instantaneous feature vector, which helps to keep each GWL episode at least four days long. An essential difference from the original GWL system is found: the original system was mainly concentrated on Central Europe and had a certain subjectivity, while the new system treats the fields more in terms of spatial standards. The fluctuation of most air flows in Central Europe usually comes from the GWL series, which is used to calculate the law of anticyclone change, to reanalyze the change of anticyclone fluctuation in Central Europe during this period, and to predict the development situation [13]. In another paper, a fault classification method based on a neural network and an orthogonal least squares (OLS) learning procedure is adopted to identify various relevant voltage and current patterns, and the RBF neural network is compared with the BP neural network. The results show that the RBF method can classify all kinds of faults quickly and accurately, and the simulation results also show that this method can be used as an effective tool for high-speed relay protection [14]. Finally, a fully automatic multiscale fuzzy c-means (MsFCM) classification method has been proposed. Diffusion filters are used to process MR images and to construct multiscale image sequences, and the multiscale fuzzy c-means classification is applied from the coarse scale to the fine scale. The objective function of the conventional fuzzy c-means method is modified so that the coarse scale supervises the classification at the next finer scale. Owing to its multiscale diffusion filtering scheme, the method is highly stable for noisy and weak-contrast images. The new method was compared with the conventional FCM and MFCM methods and verified on synthetic images with different contrasts and on the McGill brain magnetic resonance image database; the MsFCM method was consistently superior, and ground-truth verification shows that it achieves an overlap rate of more than 90%. Its usefulness is demonstrated on real images, and it can become a tool for images in other application scenes [15].
## 2. Similarity Measurement of English Characters Based on Language Features
In the field of language learning and recognition, features are important research objects. In language similarity calculation and classification, the analysis, recognition, and text inspection of different research objects can, in essence, be regarded as extracting and classifying the features of the research objects and calculating the similarity between two feature vectors under a measurement criterion. Therefore, the selection of features has a far-reaching impact on the results of similarity calculation.
### 2.1. Features
At present, feature extraction in visual perception is at a primary stage. In the scientific study of this process, the most important framework is analytical theory. One important viewpoint of this theory is that visual perception is a process extending from a local feature of an object to its global features, which holds that local features are perceived first. The global-priority theory, by contrast, regards the global feature as the first perceived object, followed by local features. What, then, is feature extraction? Feature extraction is a method of transforming the original space into the space to be calculated through a certain mapping relationship; the initial features are the first features of the extracted object. If the dimension of the object to be calculated is high, the calculation incurs excessive time complexity; therefore, in general, one tries to map high-dimensional vectors into a low-dimensional space. This approach helps to complete the analysis and extraction of the features of the research object, and different features can complement each other. In theory, therefore, the accuracy of feature extraction from multiobject combinations is higher than that of single-object feature extraction. Hence, in feature extraction for similarity measurement, it is best to extract and measure the features of multiobject combinations and then select some salient features for linear or nonlinear combination.
#### 2.1.1. Statistical Characteristics
(1) Conversion Coefficient Method. The idea of the transformation coefficient method is to compute global characteristic variables: different transformations are applied to the model, and the results of the different transformations are taken as features. The transformation coefficient methods often used for statistical features include the Karhunen–Loève (KL) transform, the Fourier transform, the Hough transform, and so on. The conversion coefficient method treats each pixel in the image as a unit; consequently, it can be computationally difficult and resource-intensive, so in practical applications special correction methods are adopted to reduce the difficulty of calculation.

(2) Contour Feature. The edge contours of English text form rich features. Although these features are not displayed inside the text and are not obvious, the edge contour can still reflect rich information. Because this feature starts from the edge, it can, to a certain extent, be used for the classification of general features.

(3) Pixel Density Characteristics. Because of the wide variety of English characters, the pixel distributions of different kinds of English characters differ greatly. Coarse pixel density characteristics can be obtained by dividing the text image horizontally or vertically and counting the effective pixels in each area. For some English text images, the differences in structure are not very obvious: although the pixel densities obtained by different division methods differ, the characters they actually represent are very similar. Therefore, the pixel density feature can be used to classify English character features. Its advantage is robustness to external influences, so a small amount of noise will not seriously affect the actual results. However, owing to the diversity of text types, computing the features of different English words takes a long time, so the feature extraction method needs to be adapted for different English text types.
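As a concrete illustration of the pixel density idea, the following sketch divides a binary character image into a grid of zones and uses the per-zone density of foreground pixels as the feature vector. The 4 × 4 grid size and the toy glyph are illustrative assumptions, not choices made in the paper.

```python
# A minimal sketch of the pixel-density feature: split a binary character image
# into a grid of zones and take the fraction of "on" pixels per zone.
import numpy as np

def pixel_density_features(img: np.ndarray, rows: int = 4, cols: int = 4) -> np.ndarray:
    """img: 2D binary array (1 = character pixel). Returns rows*cols zone densities."""
    h, w = img.shape
    feats = []
    for r in np.array_split(np.arange(h), rows):
        for c in np.array_split(np.arange(w), cols):
            zone = img[np.ix_(r, c)]
            feats.append(zone.mean())      # effective pixels / zone area
    return np.asarray(feats)

# toy example: a random 16x16 "character" (placeholder, not real glyph data)
rng = np.random.default_rng(1)
glyph = (rng.random((16, 16)) > 0.7).astype(int)
print(pixel_density_features(glyph).round(2))
```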
### 2.2. Similarity Measure
Similarity reflects the degree of relationship between different objects or different features and is an important index of whether model samples are alike. It is usually represented by a value between 0 and 1. Similarity can be divided into vector similarity and system similarity, and different research objects correspond to different similarities. The calculation methods for similarity measures mainly include distance-based methods and function-based methods. The two differ: the accuracy of results obtained by distance calculation is lower, while results calculated by the function method are more accurate, especially when studying the similarity between vectors.
## 3. English Text Similarity Measurement Algorithm Based on Language Features
### 3.1. Similarity Feature Selection
In the process of English text similarity measurement and classification, feature extraction is the most important step. The quality of feature selection directly affects the efficiency of similarity classification, so this paper uses the chi-square test to extract features. The chi-square test scores and ranks the features of the research object after feature extraction, so that the top-ranked features can be selected as the extraction result set. The chi-square statistic is

$$\chi^2 = \sum \frac{(A - T)^2}{T}, \tag{1}$$

where $A$ is the observed frequency and $T$ is the theoretical (expected) frequency.
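A minimal sketch of chi-square feature selection follows, assuming scikit-learn is available; the toy corpus, the labels, and the choice of $k$ are illustrative placeholders, not the paper's data.

```python
# Score bag-of-words features with the chi-square statistic and keep the top k.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

docs = ["the cat sat on the mat", "dogs chase cats",
        "stocks rose sharply today", "markets fell on rate fears"]
labels = [0, 0, 1, 1]            # 0 = animals, 1 = finance (toy classes)

vec = CountVectorizer()
X = vec.fit_transform(docs)
selector = SelectKBest(chi2, k=5).fit(X, labels)

top = vec.get_feature_names_out()[selector.get_support()]
print("top chi-square features:", top)
```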
### 3.2. Similarity Word Embedding Vector
Word embedding maps a word into a measurement space. The computer itself cannot directly extract the features of English text, so the text must first be converted into a space vector. The most widely used text space vector models are the skip-gram model and the CBOW model; this paper selects the former for training the text vocabulary vectors.

The skip-gram model obtains a weight model from the input layer to the output layer through training on a corpus of a certain scale, according to the probability of the $n$ context words before and after the centre word. The model maximizes the probability of the text:

$$\arg\max_{\theta} \prod_{w_{ij} \in D} \prod_{c \in C_{ij}} P(c \mid w_{ij}, \theta). \tag{2}$$

A support vector machine (SVM) is essentially a supervised classification algorithm. Problems can be divided into linearly separable and linearly nonseparable cases, and the SVM has achieved good results in classification training. The support vector machine maps the data from a low-dimensional space to a high-dimensional space and selects a kernel function for the solution. The mathematical expression is

$$\max_{a}\ \sum_{i=1}^{n} a_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j y_i y_j K(x_i, x_j) \tag{3}$$

$$\text{s.t.}\quad \sum_{i=1}^{n} a_i y_i = 0, \qquad 0 \le a_i \le C,\ i = 1, 2, \ldots, n.$$

In formula (3), $K(x_i, x_j)$ represents the kernel function, and the final classification function is

$$f(x) = \operatorname{sign}\left(\sum_{i=1}^{n} a_i y_i K(x_i, x) + b\right). \tag{4}$$

According to the Bayesian formula

$$P(B_i \mid A) = \frac{P(B_i)\, P(A \mid B_i)}{\sum_{j=1}^{n} P(B_j)\, P(A \mid B_j)}, \tag{5}$$

it can be concluded that, when the components of $x$ are conditionally independent,

$$P(C_i \mid X) = \frac{P(C_i) \prod_{k=1}^{n} P(X_k \mid C_i)}{P(X)}. \tag{6}$$

In formula (6),

$$P(C_i) = \frac{N_{C_i}}{d}. \tag{7}$$

Calculating over the set $C$,

$$C(C_i) = P(C_i) \prod_{k=1}^{n} P(X_k \mid C_i), \tag{8}$$

and the final classification result is

$$C_{\max} = \arg\max_{C_i}\ P(C_i) \prod_{k=1}^{n} P(X_k \mid C_i). \tag{9}$$

A random forest is composed of many decision trees. Compared with a single decision tree, it avoids settling on one fixed hypothesis and making the assumptions too strict. Increasing the amount of data and testing on a held-out sample set is the usual way to evaluate the performance of the classifier. When solving a classification problem, each decision tree in the forest judges the training samples in turn, and the class selected by most decision trees is taken as the final result:

$$h_1(X, \theta_{w1}),\ h_2(X, \theta_{w2}),\ \ldots,\ h_m(X, \theta_{wm}). \tag{10}$$

The margin function of the random forest is

$$mg(X, Y) = \operatorname{av}_k I\bigl(h_k(x) = y\bigr) - \max_{j \ne y} \operatorname{av}_k I\bigl(h_k(x) = j\bigr), \tag{11}$$

where $\operatorname{av}_k$ denotes the average over the trees $k$ and $I(\cdot)$ is the indicator function.
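The sketch below strings the pieces together under stated assumptions: tf-idf bag-of-words features stand in for the skip-gram embeddings (gensim's Word2Vec with sg=1 would be the closer analogue), and the three classifiers named above are trained on a toy corpus that is purely a placeholder.

```python
# Toy end-to-end pipeline: vectorize text, then try the SVM, naive Bayes, and
# random forest classifiers discussed above. Corpus and labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier

train_docs = ["cats purr softly", "dogs bark loudly",
              "interest rates climbed", "the stock index dropped"]
train_y = ["animals", "animals", "finance", "finance"]
test_docs = ["the dog barked", "rates dropped again"]

vec = TfidfVectorizer()
X_train = vec.fit_transform(train_docs)
X_test = vec.transform(test_docs)

for clf in (SVC(kernel="rbf"), MultinomialNB(),
            RandomForestClassifier(n_estimators=100)):
    clf.fit(X_train, train_y)
    print(type(clf).__name__, clf.predict(X_test))
```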
### 3.3. Distance between Similarity Numerical Variables
When the attributes of the decision variables are continuous or discrete, how should the similarity or distance between variables be measured?
#### 3.3.1. Euclidean Distance
$$d_{ij} = \sqrt{\sum_{k=1}^{n} (X_{ik} - X_{jk})^2}. \tag{12}$$

Here $d_{ij}$ is the overall distance in the $n$-dimensional space, that is, the dissimilarity: a larger $d_{ij}$ means a greater distance and thus a more obvious dissimilarity, while a smaller $d_{ij}$ means a more obvious similarity between the wholes. $X_{ik}$ denotes the $k$-th coordinate of the first point, and $X_{jk}$ the $k$-th coordinate of the second point.
#### 3.3.2. Manhattan Distance
$$d_{ij} = \sum_{k=1}^{n} \lvert X_{ik} - X_{jk} \rvert. \tag{13}$$
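Both distances are one-liners in numpy; the two feature vectors in the sketch below are placeholders.

```python
# Formulas (12) and (13) on two toy feature vectors.
import numpy as np

x_i = np.array([0.2, 0.7, 0.1, 0.9])
x_j = np.array([0.3, 0.5, 0.4, 0.8])

euclidean = np.sqrt(np.sum((x_i - x_j) ** 2))   # formula (12)
manhattan = np.sum(np.abs(x_i - x_j))           # formula (13)
print(f"Euclidean: {euclidean:.3f}, Manhattan: {manhattan:.3f}")
```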
### 3.4. Similarity Coefficient
Let $O = \{x_1, x_2, \ldots, x_n\}$ be the set of all numerical values of the simulated research objects, and take $x_i, x_j \in O$, where $r_{ij}$ is the similarity coefficient of $x_i$ and $x_j$. The specific conditions are as follows:

(1) $r_{ij} = 1 \Leftrightarrow x_i = x_j$;
(2) $\forall x_i, x_j,\ r_{ij} \in [0, 1]$;
(3) $\forall x_i, x_j,\ r_{ij} = r_{ji}$.

The following methods are commonly used to measure and calculate the similarity coefficient.
#### 3.4.1. Quantity Product Method
$$r_{ij} = \begin{cases} 1, & i = j, \\ \dfrac{1}{M} \displaystyle\sum_{k=1}^{m} X_{ik} X_{jk}, & i \ne j, \end{cases} \tag{14}$$

where $M$ is a positive number satisfying $M \ge \sum_{k=1}^{m} X_{ik} X_{jk}$ for $i \ne j$.
#### 3.4.2. Included Angle Cosine
$$r_{ij} = \frac{\sum_{k=1}^{m} X_{ik} X_{jk}}{\sqrt{\sum_{k=1}^{m} X_{ik}^2}\, \sqrt{\sum_{k=1}^{m} X_{jk}^2}}. \tag{15}$$

A vector is a directed line segment in a multidimensional space. If two vectors have the same direction, their included angle is 0 and its cosine is 1, so the cosine value can be used to express the similarity of two vectors. When two vectors are orthogonal, $r_{ij} = 0$, indicating that the vectors are completely different.
#### 3.4.3. Correlation Coefficient Method
$$r_{ij} = \frac{\sum_{k=1}^{m} (X_{ik} - \bar{X}_i)(X_{jk} - \bar{X}_j)}{\sqrt{\sum_{k=1}^{m} (X_{ik} - \bar{X}_i)^2}\, \sqrt{\sum_{k=1}^{m} (X_{jk} - \bar{X}_j)^2}}, \tag{16}$$

where $\bar{X}_i = \frac{1}{m}\sum_{k=1}^{m} X_{ik}$ and $\bar{X}_j = \frac{1}{m}\sum_{k=1}^{m} X_{jk}$. The value of $r_{ij}$ lies in $[-1, 1]$: a result of 0 indicates no correlation between the wholes, 1 indicates a positive correlation, and $-1$ indicates a negative correlation.
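A minimal sketch of formulas (15) and (16), the two main coefficient methods highlighted in the abstract, follows; it assumes numpy, and the vectors are placeholders.

```python
# Included-angle cosine (15) and correlation coefficient (16) on toy vectors.
import numpy as np

x_i = np.array([0.2, 0.7, 0.1, 0.9])
x_j = np.array([0.3, 0.5, 0.4, 0.8])

cosine = x_i @ x_j / (np.linalg.norm(x_i) * np.linalg.norm(x_j))   # formula (15)
corr = np.corrcoef(x_i, x_j)[0, 1]                                 # formula (16)
print(f"included-angle cosine: {cosine:.3f}, correlation: {corr:.3f}")
```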
#### 3.4.4. Arithmetic Mean Minimum Method
$$r_{ij} = \frac{2 \sum_{k=1}^{m} (X_{ik} \wedge X_{jk})}{\sum_{k=1}^{m} (X_{ik} + X_{jk})}, \tag{17}$$

where $\wedge$ denotes taking the minimum of the two values.
#### 3.4.5. Exponential Similarity Method
$$r_{ij} = \frac{1}{m} \sum_{k=1}^{m} \exp\!\left(-\frac{(X_{ik} - X_{jk})^2}{S_k^2}\right). \tag{18}$$
#### 3.4.6. Paste Progress
If the characteristics of $X_i$ and $X_j$ are normalized so that $X_{ik}, X_{jk} \in [0, 1]$ ($k = 1, 2, \ldots, m$), the similarity of $X_i$ and $X_j$ is defined as their pasting progress (degree of closeness). The distance-based pasting progress is

$$r_{ij} = 1 - c\, d(X_i, X_j)^{a}, \tag{19}$$

where $c$ and $a$ are appropriately selected parameters whose values can be arbitrary provided they satisfy the inequality $0 \le r_{ij} \le 1$, and $d(X_i, X_j)$ represents the distance between $X_i$ and $X_j$. Any suitable distance can be used, for example, the Minkowski distance

$$d(X_i, X_j) = \left(\sum_{k=1}^{m} \lvert X_{ik} - X_{jk} \rvert^{p}\right)^{1/p}. \tag{20}$$
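The sketch below combines formulas (19) and (20): a Minkowski distance converted into a pasting-progress similarity. The parameter values $c$, $a$, and $p$ are illustrative choices, assumed to satisfy the constraint $0 \le r_{ij} \le 1$ for the given inputs.

```python
# Pasting progress (19) built on the Minkowski distance (20).
import numpy as np

def minkowski(x_i: np.ndarray, x_j: np.ndarray, p: float = 2.0) -> float:
    return float(np.sum(np.abs(x_i - x_j) ** p) ** (1.0 / p))   # formula (20)

def pasting_progress(x_i, x_j, c: float = 0.5, a: float = 1.0, p: float = 2.0) -> float:
    return 1.0 - c * minkowski(x_i, x_j, p) ** a                # formula (19)

x_i = np.array([0.2, 0.7, 0.1, 0.9])
x_j = np.array([0.3, 0.5, 0.4, 0.8])
print(f"pasting progress: {pasting_progress(x_i, x_j):.3f}")
```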
## 4. Experimental Analysis of Similarity Measurement and Classification of English Characters Based on Language Features
### 4.1. Comparative Analysis of Similarity Algorithm Efficiency
The cosine similarity algorithm, keyword similarity algorithm, word meaning similarity algorithm, common subsequence similarity algorithm, and the algorithm proposed in this paper are used to compute the similarity of the simulation sample data. Method 1 is the cosine similarity algorithm, method 2 the keyword similarity algorithm, method 3 the word meaning similarity algorithm, method 4 the common subsequence similarity algorithm, and method 5 the algorithm of this paper. Table 1 reports the average similarity value of the five methods for vocabulary pairs whose similarity state is 1, under different numbers of data; if the similarity state between a vocabulary pair is 1, the computed similarity value between the pair should also be high.

Table 1: Average similarity of vocabulary pairs with status 1 under different data numbers of different algorithms.

| Number of data | 500 | 1000 | 1500 | 2000 | 2500 | 3000 | 3500 | 4000 | 4500 | 5801 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| method 1 | 0.717 | 0.742 | 0.728 | 0.731 | 0.730 | 0.729 | 0.731 | 0.731 | 0.730 | 0.730 |
| method 2 | 0.713 | 0.718 | 0.721 | 0.725 | 0.724 | 0.722 | 0.724 | 0.724 | 0.723 | 0.723 |
| method 3 | 0.373 | 0.378 | 0.376 | 0.378 | 0.378 | 0.377 | 0.376 | 0.380 | 0.378 | 0.380 |
| method 4 | 0.649 | 0.652 | 0.658 | 0.661 | 0.663 | 0.662 | 0.664 | 0.664 | 0.663 | 0.664 |
| method 5 | 0.841 | 0.837 | 0.839 | 0.841 | 0.845 | 0.844 | 0.846 | 0.846 | 0.846 | 0.846 |

The average similarity of the proposed algorithm is higher than that of the other algorithms and remains around 0.84, and the difference between its maximum and minimum values is no more than 0.01, which shows that the algorithm is both accurate and stable in the calculation of similarity. Among the baselines, the average similarity of the cosine similarity, keyword similarity, and common subsequence similarity algorithms is also fairly high, while that of the word meaning similarity algorithm remains low.

Figure 1 compares the accuracy of the five algorithms under different similarity thresholds (Figure 1: Comparison of accuracy of five algorithms under different similarity values). Figure 2 compares their recall rates under the same nonuniform similarity thresholds (Figure 2: Comparison of recall rates of five algorithms under different similarity values). Figure 3 compares their F values: the harmonic mean computed for each algorithm tracks its recall closely, because the growth rate of each algorithm's precision P is lower than the reduction rate of its recall R, so R dominates the F value and the F curve stays close to the R curve (Figure 3: Comparison of F values of five algorithms under different similarity thresholds). A small sketch of this P/R/F relation follows below.
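Since the F value is described as the harmonic average of precision P and recall R, the relation can be written as a tiny sketch; the counts below are made-up illustrative numbers, not values taken from the experiments.

```python
# Sketch of the precision/recall/F relation used in Figures 1-3; the counts
# (true positives etc.) are made-up illustrative numbers.
def precision_recall_f(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    p = tp / (tp + fp)          # precision
    r = tp / (tp + fn)          # recall
    f = 2 * p * r / (p + r)     # F value: harmonic mean of P and R
    return p, r, f

print(precision_recall_f(tp=84, fp=16, fn=21))
```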
### 4.2. Experimental Analysis of Similarity Calculation
By collecting and analyzing the usage of the English vocabulary resources involved in the system comparison, a set of English vocabulary pairs is selected from the English vocabulary data set, and a final list of ten word pairs is tested. Table 2 shows the calculated English vocabulary similarity results. The values in the S1 column are generally lower than those in the other columns; the reason is that the high-similarity vocabulary selection system design takes into account a large number of common characteristics of English vocabulary, and the influence of external interference factors can lower the similarity of the English vocabulary vectors. The S2 column shows occasional jumps that are too high, which may be because the selection and design of highly similar English words in the Baidu library do not fully match human judgment.

Table 2: Calculation results of English vocabulary similarity.

| ID | W1 | W2 | S | S1 | S2 | S3 |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Automobile | Car | 0.923 | 1.032 | 0.998 | 0.935 |
| 2 | Jewellery | Glass | 0.856 | 7.698 | 0.789 | 0.864 |
| 3 | Noon | Noon | 0.015 | 0.203 | 0.036 | 0.017 |
| 4 | Forest | Woodland | 0.816 | 0.786 | 0.839 | 0.822 |
| 5 | Phone | Telephone | 0.805 | 0.963 | 0.823 | 0.811 |
| 6 | Chair | Stool | 0.236 | 0.354 | 0.254 | 0.240 |
| 7 | Rope | Line | 0.369 | 0.478 | 0.372 | 0.359 |
| 8 | Worry | Worried | 0.359 | 0.423 | 0.397 | 0.361 |
| 9 | Hospital | Clinic | 0.413 | 0.512 | 0.438 | 0.419 |
| 10 | Reflection | Consider | 0.716 | 0.836 | 0.725 | 0.712 |

The generally low S1 values are mainly due to the database-based design of the autonomous selection system for highly similar English words, which considers many word features and is affected by other interference factors, so the high-dimensional feature vectors end up with low similarity. Figure 4 shows the selection efficiency of the similar-vocabulary selection system when the number of English vocabulary data items is 200, 400, and 600. When α = 1, the selection efficiency of the system design is 30%, 32%, and 45%, respectively; when α = 3, it is 40%, 44%, and 60%; and when α = 5, it is 55%, 63%, and 80%. The comparative analysis shows that when α, the recognition rate and weight of stable English lexical features, lies in the interval [1, 5], the selection efficiency is highest (Figure 4: Selection efficiency of high similarity English vocabulary selection system design).
### 4.3. Test and Analysis of the CD_Sim Method
To verify the accuracy and time efficiency of the CD_Sim method in practical application, four types of data are randomly selected from the English vocabulary as simulation samples. After keyword extraction from the experimental results, the similarity measurement results are tested by cluster analysis and by classification methods.
#### 4.3.1. Cluster Analysis
The results of the similarity measure calculation indirectly affect the accuracy of the English vocabulary clustering algorithm, and in turn the accuracy of the clustering algorithm on the simulation sample can be used to test the quality of the similarity results. Commonly used clustering algorithms include the distance-matrix-based clustering algorithm, the AP clustering algorithm, and the more recently developed spectral clustering algorithm. Both the distance-based and the spectral clustering algorithms are suitable when the number of clusters is given, offering high clustering accuracy at a high time complexity; if the number of clusters is unknown, the results of the two algorithms will show a certain deviation. The cluster analysis is built on the similarity measurement results, and the specific experimental results are shown in Table 3.

Table 3: Calculation and test results based on the cluster test method.

| Method | NUM | AP clustering entropy | AP clustering purity | Spectral clustering entropy | Spectral clustering purity | K-means clustering entropy | K-means clustering purity |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Mean clustering | 14 | 0.96 | 0.74 | 0.74 | 0.47 | 1.84 | 0.41 |
| Hierarchical clustering | 9 | 2.13 | 0.28 | 0.28 | 0.41 | 2.22 | 0.24 |
| SOM clustering | 18 | 0.33 | 0.90 | 0.90 | 0.82 | 1.26 | 0.66 |
| FCM clustering | 18 | 0.60 | 0.85 | 1.60 | 0.50 | 1.68 | 0.51 |

As shown in Table 3, when the four similarity measurement methods are compared, the clustering result obtained by the CD_Sim method is the best; however, there are only four document classes in the simulation sample, while the number of clusters reached 18 in some settings, which is obviously unreasonable. The analysis of the experimental clustering data shows that the results of CD_Sim are better than those of CL_Sim and ZWS_Sim: its clustering entropy is the smallest and its purity the largest. A sketch of how entropy and purity are computed from cluster assignments follows below.
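Table 3 reports entropy and purity for each clustering; as a reference, the following sketch computes the standard size-weighted cluster entropy and majority-label purity from assumed cluster assignments and gold labels (toy data, not the paper's).

```python
# A sketch of the standard cluster entropy and purity measures reported in
# Table 3, assuming predicted cluster ids and gold class labels per document.
import math
from collections import Counter

def entropy_and_purity(clusters: list[int], labels: list[int]) -> tuple[float, float]:
    n = len(labels)
    total_entropy, correct = 0.0, 0
    for c in set(clusters):
        members = [labels[i] for i in range(n) if clusters[i] == c]
        counts = Counter(members)
        h = -sum((m / len(members)) * math.log2(m / len(members)) for m in counts.values())
        total_entropy += (len(members) / n) * h   # size-weighted cluster entropy
        correct += counts.most_common(1)[0][1]    # majority label per cluster
    return total_entropy, correct / n             # lower entropy / higher purity = better

print(entropy_and_purity([0, 0, 1, 1, 1], [0, 0, 1, 0, 1]))
```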
#### 4.3.2. Time Complexity Analysis
According to the experimental data in Table 4, among the four text similarity measurement methods, the statistics-based FCM clustering similarity measurement method has the highest time efficiency, while the SOM clustering similarity measurement method has lower time efficiency than mean clustering.

Table 4: Time complexity of similarity measurement method.

| Method | Mean clustering | Hierarchical clustering | SOM clustering | FCM clustering |
| --- | --- | --- | --- | --- |
| Time/s | 10266 | 10146 | 8257 | 3410.6 |
### 4.4. Experimental Results and Analysis of Classification Methods
In traditional classification experiments, the word types used for training are usually restricted, and only nouns, verbs, and nominalized verbs are selected as feature selection objects. When the feature-number thresholds are 110, 550, 1100, 1600, 2100, 3300, 4100, 5000, 5500, 6800, and 8300, respectively, the overall classification accuracies in Table 5 are obtained.

Table 5: Overall classification accuracy of different feature numbers.

| Number of features | Total classification accuracy (%) | Time spent (s) |
| --- | --- | --- |
| 110 | 67.17 | 830 |
| 550 | 72.30 | 945 |
| 1100 | 76.62 | 1216 |
| 1600 | 77.78 | 1536 |
| 2100 | 79.30 | 1872 |
| 3300 | 80.70 | 2305 |
| 4100 | 81.98 | 2742 |
| 5000 | 81.84 | 3062 |
| 5500 | 82.60 | 3177 |
| 6800 | 82.53 | 3684 |
| 8300 | 82.61 | 3752 |

As shown in Figure 5, when the feature numbers are sorted from small to large, the classification accuracy at first increases roughly linearly with the number of features; once the number of features reaches about 5000, the classification accuracy is basically stable (Figure 5: Broken line diagram of classification accuracy of traditional classification methods). As shown in Figure 6, the test time grows basically linearly with the number of features (Figure 6: Test time of the traditional classification method).
## 5. Conclusion
Firstly, this paper defines the concept of features, introduces methods for the statistical characterization of English characters, and presents the research direction and background of the subject. It then analyzes the current situation of language development: the diversification of word meaning relationships between words has become the primary task of lexical semantic research, that is, how to choose the correct method and model to express the relationships between words, which is the purpose of this paper. Next, it introduces the meaning of similarity measurement and its calculation algorithms, mainly including similarity feature selection, similarity word embedding vectors, the distance between similarity numerical variables, and the calculation of the similarity coefficient. Finally, the efficiency of the similarity algorithms is compared and analyzed, the similarity measurement fusing language features is calculated and analyzed, and the CD_Sim method is tested with cluster analysis and classification methods; the experimental calculations are carried out and the experimental results are analyzed.
---

*Source: 1019508-2022-08-24.xml*

# Similarity Measurement and Classification of English Characters Based on Language Features

**Authors:** Linna Miao; Zhixin Fang; Junping Zhang
**Journal:** Mobile Information Systems
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1019508

---
## Abstract
English is now widely used throughout the world as an international language. As a symbol of the development of human civilization, English characters provide an important medium and tool for mankind. In the current information age, the vocabulary of English keeps growing and is found almost everywhere. Against the background of this growth in English vocabulary and in the relationships between words, this paper performs similarity measurement analysis and calculation of English words, together with classification of the vocabulary measurements, by integrating the characteristics of the language. The experimental results are as follows: (1) the development situation of English words is analyzed, the research direction of the experiment is determined, the concept of English character features is proposed, and the similarity calculation method is selected according to the different features, in order to simplify the complex and hard-to-understand word meaning relationships between English words; (2) text features are extracted through language- and text-based similarity feature selection, and the quality of this extraction indirectly affects the effectiveness of classification. The similarity word embedding vector is used to map English words into vectors for analysis and comparison; the distances between the similarity numerical variables of English words and their similarity coefficients are calculated to measure and evaluate the similarity between them, with the included angle cosine method and the correlation coefficient method being the two main methods for calculating the similarity coefficient.
---
## Body
## 1. Introduction
Whispering is a natural way of speaking. Although its perceptual salience is reduced, it still carries information about intelligibility and about the identity and gender of the speaker. However, given the acoustic differences between whispered speech and normal voiced speech, speech applications trained on the latter but tested with the former show unacceptable performance levels. In the automatic speaker verification task, previous studies have shown that (i) traditional features (e.g., the Mel frequency cepstrum coefficient, MFCC) cannot transmit enough speaker discrimination cues across the two vocal efforts and (ii) multiconditional training often reduces performance on normal speech while improving performance on whispers. One study addressed these two shortcomings by proposing three innovative features, which provide reliable results for normal and whispered speech when fused at the score level; overall, the relative improvement rates of the whisper group and the normal group were 66% and 63%, respectively [1]. Although the correctness of feature measurement is largely attributed to the changing external environment, little attention has so far been paid to the consequences of this fact in pattern recognition tasks. One line of work explicitly considers the uncertainty of feature measurement and illustrates how classification rules and research methods can be improved to compensate for the impact of this uncertainty. The method can be applied effectively in multimodal scenarios in which feature vectors derived from different scenes are merged; for such an approach to be efficient and to adapt to various pattern fusion rules, the noise uncertainty generated by each feature stream must be estimated. The study further shows that, under some assumptions, multimodal fusion methods that depend on stream weights arise naturally from this scheme; this relationship offers a helpful view on the use of uncertainty compensation methods and shows how to apply these ideas to audio-visual intelligent sensing. In the same context, a technique was developed that applies this framework to feature extraction and variability evaluation of human perception and studies how to efficiently compute enhanced audio features and their uncertainty estimates; the effectiveness of the multimodal integration method was demonstrated on an audio-visual database [2]. Identifying and classifying complex human actions and behaviors from video is a difficult and meaningful challenge. One study explores this problem using Indian Sign Language (ISL) video. Based on features discretized through the scaling and translation of basic wavelets, a new segmentation algorithm is proposed. The fused features form a two-dimensional point cloud that captures characteristics of the continuous video, and feature extraction for the signs in the video is evaluated on each single classifier to check the feasibility of the designed feature extraction framework. The experiments show that features based on certain binary patterns represent the sign recognition data better than other state-of-the-art features, because the designed feature model combines global features with local ones.
The extracted and classified characteristics are transmitted remotely to a network database, where they correspond to their nominal words, and the accuracy and correctness of the recognized signs are tested. With the largest training set, an artificial neural network classifier with a recognition rate of 92.79% is obtained, which is much higher than existing artificial neural network classifiers on sign language and ISL data sets with other features [3]. Another study addresses the temporal retrieval of activities in videos through sentence queries: given a sentence query describing an activity, temporal moment retrieval aims to locate the time period in the video that best matches the text query. This is a common and challenging task because it requires understanding both the video and the language. Existing studies mainly use coarse frame-level features as the visual representation, blurring specific details in the video (for example, the objects "girl" and "cup" and the action "pouring") that may provide key clues for localizing the required moment. To solve these problems, a new spatial and linguistic temporal tensor fusion (SLTF) method has been proposed [4]. A further study investigates the production and perception of English vowels by Korean learners of English in two learning sessions about one year apart. A preliminary experiment shows that Korean adults use two different Korean vowels to classify some contrasting English vowels, while others show classification overlap, which means that it is difficult for Korean learners to distinguish these vowels. In two subsequent experiments, native Korean (NK) adults and children living in North America for different periods of time (3 years vs. 5 years; 4 groups, 18 subjects in each group) were compared with age-matched native English (NE) speakers. In Experiment 2, NK children identified English vowels more accurately than NK adults but less accurately than NE children. In Experiment 3, a picture-naming task was used to elicit English words containing the target vowels. Some vowels produced by NK children were easier to identify than those produced by NK adults, and acoustic analysis shows that the vowel contrast of NK children is significantly greater than that of NK adults [5]. A size- and color-invariant character recognition system based on a feedforward neural network has also been proposed. The feedforward network has two layers, an input layer and an output layer, and the recognition process is divided into four basic steps: preprocessing, normalization, network establishment, and recognition. Preprocessing includes digitization, noise removal, and boundary detection. After boundary detection, the input character matrix is normalized to a 12 × 8 matrix for size-invariant recognition and fed into a network composed of 96 input and 36 output neurons. The network is then trained in a supervised way with the proposed training algorithm, adjusting the weights to establish the network. Finally, the network is tested by averaging more than 20 samples per character. By considering the similarity measure between classes, the system achieves 99.99% accuracy for digits (0-9), 98% accuracy for letters (a-z), and more than 94% accuracy for alphanumeric characters [6]. Using the perceptual assimilation model (PAM) of Best (1995), the dictation and observation of Cantonese tones, together with tactile, olfactory, auditory, and visual perception, have been studied across languages including Thai and English [7].
Six social science research methods have been identified that help to describe the social and cultural significance of nanotechnology: web-based questionnaire surveys, episode experiments, network link analysis, recommendation systems, quantitative content analysis, and qualitative text analysis. Data from a range of sources are used to illustrate how these methods describe the knowledge content and institutional structure of the emerging nanotechnology culture, and they will make it possible to test hypotheses in the future. For example, nanotechnology has two competing definitions, namely science and technology versus science fiction, which affect public cognition in different ways and directions [8]. In biomedical research, the identification and standardization of entities in the medical case literature is an important step of biomedical text extraction. A gene symbol recognition system has been described that obtains specific text content from biomedical materials and standardizes it; the system comprises gene symbol recognition, gene text content mapping, gene text standardization, and text content filtering. Gene symbol recognition is based on symbol matching and supervision and uses extensive labeling to recognize gene symbols; in the gene text content mapping stage, the data set connection is established in the system context around the principles of exact matching and priority matching [9]. If relevant problem-specific knowledge is lacking, cross validation can be used to select a classification method empirically. This idea has been tested to illustrate what the cross validation approach does and does not solve: as experience shows, cross validation may yield higher average performance than the application of any single classification strategy and can also reduce the risk of poor performance; on the other hand, compared with simpler strategies, cross validation is more or less biased. The correct application of cross validation ultimately depends on prior knowledge; in fact, cross validation may be seen as a way of applying partial information about the applicability of alternative classification strategies [10]. A new intelligent fault diagnosis method for rotating machinery has been proposed based on wavelet packet transform (WPT), empirical mode decomposition (EMD), dimensionless parameters, a distance evaluation technique, and a radial basis function (RBF) network. The experimental results show that the method combining WPT, EMD, the distance evaluation technique, and the RBF network can accurately extract fault information and select sensitive features so as to correctly diagnose different bearing fault types; applied to the diagnosis of slight rub-impact faults in heavy oil catalytic cracking units, the results show that the method can be effectively applied to the fault diagnosis of rotating machinery [11]. Decision tree classification provides a fast and effective way to classify data sets. There are many algorithms for optimizing the structure of a decision tree, although these methods are vulnerable to changes in the training data set. The method has been tested with two different data sets, and the results are equivalent to or better than those of other classification methods.
The last discussion demonstrates the utility of decision trees relative to alternative methods (such as neural networks), especially when a large number of variables are considered [12]. An objective classification method for weather situations over Europe and the northeast Atlantic has been established. Winter mean air pressure fields from the ERA-40 reanalysis are calculated, and, following the original concept of Hess and Brezowsky, a daily catalogue of the target GWLs is constructed by pattern correlation with these composite fields; filtering is applied to remove transient patterns so that each GWL episode persists for at least four days. Essential differences from the original GWL system are found: the original system focuses mainly on Central Europe and carries a certain subjectivity, while the objective system treats the circulation more in terms of spatial criteria. The variability of the main air flows over Central Europe derived from the GWL series is used to calculate the pattern of anticyclone changes, to reanalyze the fluctuation of anticyclones over Central Europe during this period, and to predict future developments [13]. A fault classification method based on a neural network with orthogonal least squares (OLS) learning has been adopted to identify the relevant voltage and current patterns; a comparison of the RBF neural network with the BP neural network shows that the RBF method can classify all kinds of faults quickly and accurately, and the simulation results also show that the method can serve as an effective tool for high-speed relay protection [14]. A fully automatic multiscale fuzzy c-means (MsFCM) classification method has been proposed that processes MR images with diffusion filters to construct a multiscale image sequence and applies fuzzy c-means classification from coarse to fine scales, with the coarser scale supervising the classification at the next finer scale; the objective function of the conventional method is modified accordingly. Owing to its multiscale diffusion filtering scheme, the approach is highly stable for noisy and low-contrast images. The new method was validated against the conventional methods on synthetic images with different contrasts and on the McGill brain MR image database; the MsFCM method consistently outperformed the traditional FCM and MFCM methods, and ground-truth verification shows that it achieves an overlap rate of more than 90%. Its usefulness is demonstrated on real images, showing that the multiscale classification method is accurate and robust for a variety of images and can become a tool for imaging and other application scenarios [15].
## 2. Similarity Measurement of English Characters Based on Language Features
In the field of language learning and recognition, features are important research objects. In the process of language similarity calculation and classification, the analysis, recognition, and text inspection of different research objects can essentially be regarded as extracting and classifying the features of the research objects and then calculating the similarity between two feature vectors by means of a measurement criterion. Therefore, the selection of features has a far-reaching impact on the results of similarity calculation.
### 2.1. Features
At present, feature extraction in visual perception is still at an early stage. In the scientific study of this field, the most important foundation is analytical theory. One important viewpoint of this theory is that visual perception is a process that extends from a local feature of an object to its global features, which implies that local features are perceived first. By contrast, according to the global-priority theory, the global feature is the first perceived object, followed by the local features. What, then, is feature extraction? Feature extraction is a method of transforming the original space into the space to be calculated through a certain mapping relationship, and the initial features are the first features of the extracted object. If the dimension of the object to be calculated is high, the computation will have excessive time complexity; therefore, in general, one tries to map the high-dimensional space vector into a low-dimensional space. This approach helps to complete the analysis and extraction of the features of the research object, and different features can complement each other. In theory, the accuracy of combined multiobject feature extraction is therefore higher than that of single-object feature extraction, so in feature extraction for similarity measurement it is best to extract and measure the features of multiobject combinations and then select some salient features for linear or nonlinear combination. A sketch of such a high-to-low dimensional mapping follows below.
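One common way to realize the high-to-low dimensional mapping mentioned above is a random linear projection; the sketch below is illustrative only, with assumed dimensions.

```python
# Illustrative sketch of mapping high-dimensional feature vectors to a
# low-dimensional space via a random linear projection (assumed dimensions).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 512))               # 100 samples in a 512-d feature space
P = rng.normal(size=(512, 32)) / np.sqrt(32)  # random projection matrix
X_low = X @ P                                 # mapped to a 32-d space
print(X.shape, "->", X_low.shape)
```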
#### 2.1.1. Statistical Characteristics
(1) Conversion Coefficient Method. The idea of the conversion coefficient method is to compute global characteristic variables: different transformations are applied to the model, and the results of these transformations are taken as features. The transformations often used for statistical features include the KL transform, the Fourier transform, the Hough transform, and so on. The conversion coefficient method treats each pixel in the image as a unit, so it also raises problems of computational difficulty and resource consumption; in practical applications, special correction methods are therefore adopted to reduce the computational burden.

(2) Contour Feature. The edge contours of English text form rich features. Although these features are not displayed inside the text and are not obvious, the edge contour can still reflect rich information. Because this feature starts from the edge, it can to a certain extent be used for the classification of general features.

(3) Pixel Density Characteristics. Because there is a wide variety of English characters, the pixel distributions of different kinds of English characters differ greatly. Coarse pixel density characteristics can be obtained by dividing the text image horizontally or vertically and counting the effective number of pixels in each area. For some English text images whose structures are not clearly distinct, the pixel densities obtained by different division methods differ even though the characters they represent are very similar, so the pixel density feature can be used to classify English character features. The advantage of the pixel density feature is that it resists the influence of external factors, and a small amount of noise will not seriously affect the result; however, due to the diversity of text types, computing the features of different English words takes a long time, so the feature extraction method needs to be adapted to different English text types. A small sketch of zone-based pixel density extraction follows below.
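The zone-division idea behind the pixel density feature can be sketched as follows; the 4 × 4 grid and the random stand-in image are assumptions for illustration.

```python
# A sketch of the coarse pixel-density feature described above: split a binary
# character image into a grid of zones and record the fraction of "on" pixels
# per zone. The 4x4 grid size is an illustrative assumption.
import numpy as np

def pixel_density_features(img: np.ndarray, rows: int = 4, cols: int = 4) -> np.ndarray:
    h, w = img.shape
    feats = []
    for r in range(rows):
        for c in range(cols):
            zone = img[r * h // rows:(r + 1) * h // rows,
                       c * w // cols:(c + 1) * w // cols]
            feats.append(zone.mean())  # effective pixels / zone area
    return np.array(feats)

binary_char = (np.random.rand(32, 32) > 0.7).astype(float)  # stand-in image
print(pixel_density_features(binary_char).round(2))
```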
### 2.2. Similarity Measure
Similarity reflects the degree of relationship between different objects or different features and is an important index of whether model samples are alike. It is usually represented by a value between 0 and 1. Similarity can be divided into vector similarity and system similarity, and different research objects correspond to different kinds of similarity. The calculation methods for similarity measures mainly include the distance-based method and the function-based method. The two methods differ: the accuracy of results obtained by the distance-based method is lower, while results calculated by the function-based method are more accurate, especially when studying the similarity between vectors.
## 3. English Text Similarity Measurement Algorithm Based on Language Features
### 3.1. Similarity Feature Selection
In the process of English text similarity measurement and classification, feature extraction is the most important step. The quality of feature selection directly affects the efficiency of similarity classification, so this paper uses the chi-square test to extract features. The chi-square test scores and ranks the features of the research object after feature extraction, so that the top-ranked features can be selected as the extraction result set. The chi-square test formula is
(1)
$$\chi^{2}=\sum\frac{\left(A-T\right)^{2}}{T},$$
where $A$ is the observed frequency and $T$ the theoretical (expected) frequency.
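A minimal sketch of chi-square feature scoring per formula (1), where A is the observed term/class co-occurrence count and T the expected count under independence; the tiny contingency tables are made-up illustrative data.

```python
# Chi-square feature scoring per formula (1): A is the observed count and
# T the expected count under independence; toy contingency tables assumed.
import numpy as np

def chi_square_score(observed: np.ndarray) -> float:
    # observed: 2x2 table [[term & class, term & not-class],
    #                      [no-term & class, no-term & not-class]]
    row = observed.sum(axis=1, keepdims=True)
    col = observed.sum(axis=0, keepdims=True)
    expected = row * col / observed.sum()          # T under independence
    return float(((observed - expected) ** 2 / expected).sum())

scores = {t: chi_square_score(np.array(o)) for t, o in {
    "economy": [[30, 5], [10, 55]],
    "the":     [[40, 38], [2, 20]],
}.items()}
print(sorted(scores, key=scores.get, reverse=True))  # keep top-ranked features
```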
### 3.2. Similarity Word Embedding Vector
A word embedding vector is the result of mapping a word into a measurement space. The computer itself cannot directly extract the features of English text, so the English text must be converted into a space vector. The most important text space vector models today are the skip-gram model and the CBOW model; this paper selects the former for training the text vocabulary vectors. The skip-gram model obtains a weight model from the input layer to the output layer through training on a corpus of a certain scale, predicting the n context words before and after the text center word. The model maximizes the probability of the text:
(2)
$$\arg\max_{\theta}\prod_{w_{ij}\in D}\prod_{c\in C_{ij}}P\left(c\mid w_{ij};\theta\right).$$
A support vector machine is essentially a supervised classification algorithm. Problems can be divided into linearly separable and linearly nonseparable cases, and the method has achieved good results in classification training. The support vector machine can map the data from a low-dimensional space to a high-dimensional space and select a kernel function for the solution. The mathematical expression of its dual problem is
(3)
$$\max_{a}\ \sum_{i=1}^{n}a_{i}-\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}a_{i}a_{j}y_{i}y_{j}K\left(x_{i},x_{j}\right)\quad\text{s.t.}\quad\sum_{i=1}^{n}a_{i}y_{i}=0,\ 0\le a_{i}\le C,\ i=1,2,\ldots,n.$$
In formula (3), $K(x_i,x_j)$ represents the kernel function, and the final classification function is
(4)
$$f\left(x\right)=\operatorname{sign}\left(\sum_{i=1}^{n}a_{i}y_{i}K\left(x_{i},x\right)+b\right).$$
According to the Bayesian formula,
(5)
$$P\left(B_{i}\mid A\right)=\frac{P\left(B_{i}\right)P\left(A\mid B_{i}\right)}{\sum_{j=1}^{n}P\left(B_{j}\right)P\left(A\mid B_{j}\right)},$$
it can be concluded that, when the components of $X$ are conditionally independent,
(6)
$$P\left(C_{i}\mid X\right)=\frac{P\left(C_{i}\right)\prod_{k}P\left(X_{k}\mid C_{i}\right)}{P\left(X\right)}.$$
In formula (6), the class prior is
(7)
$$P\left(C_{i}\right)=\frac{N_{C_{i}}}{d}.$$
The score of class $C_i$ is calculated as
(8)
$$C\left(C_{i}\right)=P\left(C_{i}\right)\prod_{k=1}^{n}P\left(X_{k}\mid C_{i}\right),$$
and the final classification result is
(9)
$$C_{\max}=\arg\max_{C_{i}}P\left(C_{i}\right)\prod_{k=1}^{n}P\left(X_{k}\mid C_{i}\right).$$
A random forest is composed of many decision trees. Compared with a single decision tree, it avoids settling on one fixed hypothesis and making the hypothesis too strict. Increasing the amount of data and testing on a held-out sample set is usually used to evaluate the performance of the classifier. When solving a classification problem, each decision tree in the forest judges the training samples in turn, and the label selected by most decision trees is taken as the final result. The ensemble is
(10)
$$\left\{h_{1}\left(X,\theta_{w_1}\right),h_{2}\left(X,\theta_{w_2}\right),\ldots,h_{m}\left(X,\theta_{w_m}\right)\right\},$$
and the margin function of the random forest is
(11)
$$mg\left(X,Y\right)=\operatorname{av}_{k}I\left(h_{k}\left(X\right)=Y\right)-\max_{j\neq Y}\operatorname{av}_{k}I\left(h_{k}\left(X\right)=j\right).$$
A small naive Bayes sketch following formulas (5)-(9) is given below.
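The following is a minimal multinomial naive Bayes sketch following formulas (5)-(9), with Laplace smoothing added for numerical stability and a toy training set assumed for illustration.

```python
# A minimal multinomial naive Bayes sketch following formulas (5)-(9):
# P(Ci) estimated as Nci/d, the class score as P(Ci) * prod_k P(Xk|Ci),
# and the argmax taken as the final label. Toy training data is assumed,
# and Laplace smoothing is an added practical detail.
import math
from collections import Counter, defaultdict

docs = [(["cheap", "offer"], "spam"), (["meeting", "offer"], "ham"),
        (["cheap", "cheap"], "spam"), (["meeting", "notes"], "ham")]

prior = Counter(c for _, c in docs)          # class counts Nci
word_counts = defaultdict(Counter)
for words, c in docs:
    word_counts[c].update(words)
vocab = {w for words, _ in docs for w in words}

def classify(words: list[str]) -> str:
    best, best_score = None, -math.inf
    for c in prior:
        # log of formula (8): log P(Ci) + sum_k log P(Xk|Ci), Laplace-smoothed
        score = math.log(prior[c] / len(docs))
        total = sum(word_counts[c].values())
        for w in words:
            score += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = c, score
    return best                               # formula (9): the argmax class

print(classify(["cheap", "offer"]))
```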
### 3.3. Distance between Similarity Numerical Variables
Whether the attributes of the decision variables are continuous or discrete, how can the similarity or distance between variables be measured?
#### 3.3.1. Euclidean Distance
$$d_{i,j} = \sqrt{\sum_{k=1}^{n} (X_{ik} - X_{jk})^2}. \tag{12}$$

Here $d_{i,j}$ is the overall distance between two points in $n$-dimensional space, that is, their dissimilarity. A larger $d_{i,j}$ means a greater distance and hence a more pronounced dissimilarity; conversely, a smaller $d_{i,j}$ means a more pronounced similarity. $X_{ik}$ denotes the $k$-th coordinate of the first point, and $X_{jk}$ the $k$-th coordinate of the second point.
#### 3.3.2. Manhattan Distance
$$d_{i,j} = \sum_{k=1}^{n} |X_{ik} - X_{jk}|. \tag{13}$$
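A minimal sketch of formulas (12) and (13); the three-dimensional points are made up for illustration:

```python
import math

def euclidean(x, y):
    """Formula (12): square root of the summed squared coordinate differences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def manhattan(x, y):
    """Formula (13): sum of the absolute coordinate differences."""
    return sum(abs(a - b) for a, b in zip(x, y))

xi, xj = [1.0, 2.0, 3.0], [4.0, 0.0, 3.0]
print(euclidean(xi, xj))  # 3.605...: larger value = more dissimilar
print(manhattan(xi, xj))  # 5.0
```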
### 3.4. Similarity Coefficient
Let $O = \{x_1, x_2, \ldots, x_n\}$ be the set of all numerical simulation research objects, and take $x_i, x_j \in O$. Let $r_{ij}$ denote the similarity coefficient of $x_i$ and $x_j$; it satisfies the following conditions:
(1) $r_{ij} = 1 \Leftrightarrow x_i = x_j$;
(2) $\forall x_i, x_j: r_{ij} \in [0, 1]$;
(3) $\forall x_i, x_j: r_{ij} = r_{ji}$.
The following methods are commonly used to measure and calculate the similarity coefficient:
#### 3.4.1. Quantity Product Method
$$r_{ij} = \begin{cases} 1, & i = j, \\ \dfrac{1}{M} \displaystyle\sum_{k=1}^{m} X_{ik} X_{jk}, & i \ne j, \end{cases} \tag{14}$$

where $M$ is a positive number satisfying $M \ge \sum_{k=1}^{m} X_{ik} X_{jk}$ for $i \ne j$.
#### 3.4.2. Included Angle Cosine
$$r_{ij} = \frac{\sum_{k=1}^{m} X_{ik} X_{jk}}{\sqrt{\sum_{k=1}^{m} X_{ik}^2} \sqrt{\sum_{k=1}^{m} X_{jk}^2}}. \tag{15}$$

A vector is a directed line segment in a multidimensional space. If two vectors point in the same direction, the angle between them is 0, so the cosine of the angle can be used to express the similarity of two vectors. When two vectors are orthogonal, $r_{ij} = 0$, indicating that the vectors are completely different.
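A minimal sketch of formula (15); the example vectors are invented:

```python
import math

def cosine_similarity(x, y):
    """Formula (15): dot product divided by the product of the vector norms."""
    dot = sum(a * b for a, b in zip(x, y))
    norm = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    return dot / norm if norm else 0.0

print(cosine_similarity([1, 2, 3], [2, 4, 6]))  # 1.0: same direction
print(cosine_similarity([1, 0], [0, 1]))        # 0.0: orthogonal vectors
```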
#### 3.4.3. Correlation Coefficient Method
$$r_{ij} = \frac{\sum_{k=1}^{m} (X_{ik} - \bar{X}_i)(X_{jk} - \bar{X}_j)}{\sqrt{\sum_{k=1}^{m} (X_{ik} - \bar{X}_i)^2} \sqrt{\sum_{k=1}^{m} (X_{jk} - \bar{X}_j)^2}}, \tag{16}$$

where $\bar{X}_i = \frac{1}{m}\sum_{k=1}^{m} X_{ik}$ and $\bar{X}_j = \frac{1}{m}\sum_{k=1}^{m} X_{jk}$, and $r_{ij}$ ranges over $[-1, 1]$. A result of 0 indicates no correlation between the two objects, a result of 1 indicates a positive correlation, and a result of $-1$ indicates a negative correlation.
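A minimal sketch of formula (16); note that the correlation coefficient is simply the cosine similarity of the mean-centred vectors:

```python
def correlation_coefficient(x, y):
    """Formula (16): Pearson correlation, i.e., cosine of the centred vectors."""
    m = len(x)
    mx, my = sum(x) / m, sum(y) / m
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

print(correlation_coefficient([1, 2, 3], [2, 4, 6]))  # 1.0: positive correlation
print(correlation_coefficient([1, 2, 3], [6, 4, 2]))  # -1.0: negative correlation
```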
#### 3.4.4. Arithmetic Mean Minimum Method
$$r_{ij} = \frac{2 \sum_{k=1}^{m} \min(X_{ik}, X_{jk})}{\sum_{k=1}^{m} (X_{ik} + X_{jk})}. \tag{17}$$
#### 3.4.5. Exponential Similarity Method
$$r_{ij} = \frac{1}{m} \sum_{k=1}^{m} \exp\left(-\frac{(X_{ik} - X_{jk})^2}{S_k^2}\right), \tag{18}$$

where $S_k$ is a scale parameter for the $k$-th feature (typically its standard deviation).
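For completeness, a minimal sketch of formulas (14), (17), and (18); the sample vectors, the constant M, and the scale parameters s are all invented for illustration:

```python
import math

def quantity_product(x, y, M):
    """Formula (14), off-diagonal case: (1/M) * sum of coordinate products."""
    return sum(a * b for a, b in zip(x, y)) / M

def arithmetic_mean_minimum(x, y):
    """Formula (17): twice the summed minima over the summed values."""
    return 2 * sum(min(a, b) for a, b in zip(x, y)) / sum(a + b for a, b in zip(x, y))

def exponential_similarity(x, y, s):
    """Formula (18): mean of exp(-(x_k - y_k)^2 / s_k^2) over the m features."""
    return sum(math.exp(-((a - b) ** 2) / sk ** 2)
               for a, b, sk in zip(x, y, s)) / len(x)

xi, xj = [0.6, 0.8, 0.4], [0.5, 0.9, 0.4]
print(quantity_product(xi, xj, M=2.0))   # M chosen >= the sum of products
print(arithmetic_mean_minimum(xi, xj))
print(exponential_similarity(xi, xj, s=[0.2, 0.2, 0.2]))
```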
#### 3.4.6. Closeness Degree (Paste Progress)
If the features of $X_i$ and $X_j$ are normalised so that $X_{ik}, X_{jk} \in [0, 1]$ ($k = 1, 2, \ldots, m$), the similarity of $X_i$ and $X_j$ can be defined as their closeness degree. The distance-based closeness degree is

$$r_{ij} = 1 - c \cdot d(X_i, X_j)^a, \tag{19}$$

where $c$ and $a$ are appropriately chosen parameters; they may in principle take any value, but the chosen values must satisfy the inequality $0 \le r_{ij} \le 1$, and $d(X_i, X_j)$ denotes the distance between $X_i$ and $X_j$. For this distance one may take the Minkowski distance

$$d(X_i, X_j) = \left(\sum_{k=1}^{m} |X_{ik} - X_{jk}|^p\right)^{1/p}. \tag{20}$$
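A minimal sketch of formulas (19) and (20), assuming (as the text requires) features already scaled to [0, 1]; the parameter choices c = 0.5, a = 1, and p = 2 are illustrative only:

```python
def minkowski(x, y, p=2):
    """Formula (20): (sum |x_k - y_k|^p)^(1/p)."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

def closeness(x, y, c=0.5, a=1.0, p=2):
    """Formula (19): r_ij = 1 - c * d(X_i, X_j)^a.

    c and a are illustrative values here; they must be chosen so that
    the result stays within [0, 1] for the data at hand.
    """
    return 1.0 - c * minkowski(x, y, p) ** a

xi, xj = [0.2, 0.4, 0.9], [0.3, 0.5, 0.7]
print(closeness(xi, xj))  # closer to 1 = more similar
```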
## 4. Experimental Analysis of Similarity Measurement and Classification of English Characters Based on Language Features
### 4.1. Comparative Analysis of Similarity Algorithm Efficiency
The cosine similarity algorithm, keyword similarity algorithm, word-meaning similarity algorithm, common-subsequence similarity algorithm, and the algorithm proposed in this paper are used to calculate the similarity measurements on the simulation research sample data. Method 1 is the cosine similarity algorithm, method 2 the keyword similarity algorithm, method 3 the word-meaning similarity algorithm, method 4 the common-subsequence similarity algorithm, and method 5 the experimental algorithm of this paper. Table 1 reports the average similarity value of the five methods, for vocabulary pairs whose true similarity state is 1, under different numbers of data. Since the similarity between data vocabulary pairs is being tested, a similarity state of 1 for a vocabulary pair means that the similarity value computed for the pair should also be very high.
Table 1
Average similarity of vocabulary pairs with status 1 under different data numbers of different algorithms.
| Number of data | 500 | 1000 | 1500 | 2000 | 2500 | 3000 | 3500 | 4000 | 4500 | 5801 |
|---|---|---|---|---|---|---|---|---|---|---|
| method1 | 0.717 | 0.742 | 0.728 | 0.731 | 0.730 | 0.729 | 0.731 | 0.731 | 0.730 | 0.730 |
| method2 | 0.713 | 0.718 | 0.721 | 0.725 | 0.724 | 0.722 | 0.724 | 0.724 | 0.723 | 0.723 |
| method3 | 0.373 | 0.378 | 0.376 | 0.378 | 0.378 | 0.377 | 0.376 | 0.380 | 0.378 | 0.380 |
| method4 | 0.649 | 0.652 | 0.658 | 0.661 | 0.663 | 0.662 | 0.664 | 0.664 | 0.663 | 0.664 |
| method5 | 0.841 | 0.837 | 0.839 | 0.841 | 0.845 | 0.844 | 0.846 | 0.846 | 0.846 | 0.846 |

The average similarity value of the algorithm in this paper is higher than that of the other algorithms and remains around 0.84, and the difference between its maximum and minimum values is no more than 0.01, which shows that the algorithm performs well both in the computed similarity and in its stability. Among the other methods, the average similarity of the cosine, keyword, and common-subsequence similarity algorithms is also high, while that of the word-meaning similarity algorithm remains low. Figure 1 compares the accuracy of the five algorithms under nonuniform similarity thresholds.
Figure 1
Comparison of the accuracy of the five algorithms under different similarity thresholds.
As shown in Figure 2, the recall rates of the five algorithms are compared under inconsistent similarity thresholds.
Figure 2
Comparison of the recall rates of the five algorithms under different similarity thresholds.
Figure 3 compares the F values of the five algorithms under nonuniform similarity thresholds. The harmonic mean calculated for each algorithm follows essentially the same pattern as its recall: since each algorithm's precision P grows more slowly than its recall R falls, R dominates the F value, and the F-value curve stays close to the R curve.
Figure 3
Comparison of F values of the five algorithms under different similarity thresholds.
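The paper does not spell out the F formula, but the "harmonic average" referred to above is standardly F = 2PR/(P + R). A tiny sketch with invented P and R values shows why a large drop in R pulls F down even when P rises:

```python
def f_measure(precision, recall):
    """Harmonic mean of precision P and recall R: F = 2PR / (P + R)."""
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Invented values: a small rise in P combined with a larger drop in R
# lowers F, which is why the F curve tracks the R curve.
print(f_measure(0.80, 0.90))  # 0.847
print(f_measure(0.84, 0.70))  # 0.763
```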
### 4.2. Experimental Analysis of Similarity Calculation
By collecting and analysing the usage of the English vocabulary resources participating in the system comparison, some English vocabulary pairs that cannot be calculated are removed from the English vocabulary data set, and ten pairs of words are finally tested. Table 2 shows the calculation results for English vocabulary similarity. The values in column S1 are lower than those in the other columns. The reason is that the high-similarity English vocabulary selection system design captures the common characteristics of a large amount of English vocabulary, which may include the influence of external interference factors, lowering the similarity of the English vocabulary vectors. The values in column S2 jump too much; the reason may be that the selection and design of highly similar English words in the Baidu library do not fully match human judgement.
Table 2
Calculation results of English vocabulary similarity.
| ID | W1 | W2 | S | S1 | S2 | S3 |
|---|---|---|---|---|---|---|
| 1 | Automobile | Car | 0.923 | 1.032 | 0.998 | 0.935 |
| 2 | Jewellery | Glass | 0.856 | 7.698 | 0.789 | 0.864 |
| 3 | Noon | Noon | 0.015 | 0.203 | 0.036 | 0.017 |
| 4 | Forest | Woodland | 0.816 | 0.786 | 0.839 | 0.822 |
| 5 | Phone | Telephone | 0.805 | 0.963 | 0.823 | 0.811 |
| 6 | Chair | Stool | 0.236 | 0.354 | 0.254 | 0.240 |
| 7 | Rope | Line | 0.369 | 0.478 | 0.372 | 0.359 |
| 8 | Worry | Worried | 0.359 | 0.423 | 0.397 | 0.361 |
| 9 | Hospital | Clinic | 0.413 | 0.512 | 0.438 | 0.419 |
| 10 | Reflection | Consider | 0.716 | 0.836 | 0.725 | 0.712 |

The English word similarity values in S1 are generally low, mainly because the database-based autonomous selection system for highly similar English words considers many English word features along with some other interfering factors, which lowers the similarity of the high-dimensional English word feature vectors. Figure 4 shows the selection efficiency of the similar-English-vocabulary selection system design when the number of English vocabulary data items is 200, 400, and 600. When α = 1, the selection efficiency is 30%, 32%, and 45%, respectively; when α = 3, it is 40%, 44%, and 60%; and when α = 5, it is 55%, 63%, and 80%. Comparative analysis shows that, with the recognition rate and weight of stable English lexical features taken as α in the interval [1, 5], the selection efficiency is highest.
Figure 4
Selection efficiency of high similarity English vocabulary selection system design.
### 4.3. Test and Analysis of the CD_Sim Method
To verify the accuracy and time efficiency of the CD_Sim method's calculation results in practical application, four types of data are randomly selected from the English vocabulary as research simulation samples. After keyword extraction on the experimental results, the similarity measurement results are tested by cluster analysis and by classification methods.
#### 4.3.1. Cluster Analysis
The results of the similarity measure calculation indirectly affect the accuracy of the English vocabulary clustering algorithm; conversely, within the simulation sample, the accuracy of the clustering algorithm can test the quality of the similarity results. Commonly used clustering algorithms include the distance-matrix-based clustering algorithm, the AP clustering algorithm, and the more recently developed spectral clustering algorithm. Both the distance-based and the spectral clustering algorithm suit a known number of clusters, with high time complexity and high clustering accuracy; if that number is unknown, the results calculated by the two algorithms show a certain deviation. The cluster analysis is built on the similarity measurement analysis, and the specific experimental results are shown in Table 3.
Table 3
Calculation and test results based on the cluster test method.
| Method | NUM | AP clustering Entropy | AP clustering Purity | Spectral clustering Entropy | Spectral clustering Purity | Kmeans clustering Entropy | Kmeans clustering Purity |
|---|---|---|---|---|---|---|---|
| Mean clustering | 14 | 0.96 | 0.74 | 0.74 | 0.47 | 1.84 | 0.41 |
| Hierarchical clustering | 9 | 2.13 | 0.28 | 0.28 | 0.41 | 2.22 | 0.24 |
| SOM clustering | 18 | 0.33 | 0.90 | 0.90 | 0.82 | 1.26 | 0.66 |
| FCM clustering | 18 | 0.60 | 0.85 | 1.60 | 0.50 | 1.68 | 0.51 |

As shown in Table 3, comparing the four similarity measurement methods, the clustering result obtained by the CD_Sim method is the best; note, however, that there are only four document classes in the data simulation sample, while some clustering runs produce as many as 18 clusters, which is obviously unreasonable. Analysis of the experimental clustering data shows that CD_Sim outperforms CL_Sim and ZWS_Sim: its clustering entropy is the smallest and its purity the largest.
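For reference, cluster entropy and purity as reported in Table 3 are conventionally computed as follows. This is a generic sketch over an invented toy clustering, not the authors' evaluation code:

```python
import math
from collections import Counter

def purity_and_entropy(clusters):
    """clusters: one list of true class labels per produced cluster."""
    n = sum(len(c) for c in clusters)
    # Purity: fraction of items belonging to their cluster's majority class.
    purity = sum(max(Counter(c).values()) for c in clusters) / n
    # Entropy: size-weighted Shannon entropy of the label mix in each cluster.
    entropy = 0.0
    for c in clusters:
        for count in Counter(c).values():
            p = count / len(c)
            entropy -= (len(c) / n) * p * math.log2(p)
    return purity, entropy

# Toy clustering of 8 documents with true labels "a"/"b" (invented).
clusters = [["a", "a", "a", "b"], ["b", "b", "b", "a"]]
print(purity_and_entropy(clusters))  # (0.75, 0.811...): higher purity and
                                     # lower entropy indicate better clustering
```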
#### 4.3.2. Time Complexity Analysis
According to the experimental data in Table 4, among the four text similarity measurement methods, the statistics-based FCM clustering similarity measurement method has the highest time efficiency, while the SOM clustering similarity measurement method has lower time efficiency than mean clustering.
Table 4
Time complexity of similarity measurement method.
| Method | Mean clustering | Hierarchical clustering | SOM clustering | FCM clustering |
|---|---|---|---|---|
| Time/s | 10266 | 10146 | 8257 | 3410.6 |
### 4.4. Experimental Results and Analysis of Classification Methods
In traditional classification experiments, the word types used for simulation training are usually restricted: only nouns, verbs, and verbs with nominal character are selected as feature selection objects. With feature-number thresholds of 110, 550, 1100, 1600, 2100, 3300, 4100, 5000, 5500, 6800, and 8300, the overall classification accuracy is obtained. Table 5 reports the overall classification accuracy for the different feature counts.
Table 5
Overall classification accuracy of different feature numbers.
| Number of features | Total classification accuracy (%) | Time spent (s) |
|---|---|---|
| 110 | 67.17 | 830 |
| 550 | 72.30 | 945 |
| 1100 | 76.62 | 1216 |
| 1600 | 77.78 | 1536 |
| 2100 | 79.30 | 1872 |
| 3300 | 80.70 | 2305 |
| 4100 | 81.98 | 2742 |
| 5000 | 81.84 | 3062 |
| 5500 | 82.60 | 3177 |
| 6800 | 82.53 | 3684 |
| 8300 | 82.61 | 3752 |

As shown in Figure 5, with the feature counts sorted from small to large, the classification accuracy at first increases roughly linearly with the number of features; once the number of features reaches about 5000, the accuracy is basically stable.
Figure 5
Broken-line diagram of the classification accuracy of the traditional classification method.
As shown in Figure 6, with the feature counts sorted from small to large, the test time spent increases roughly linearly.
Figure 6
Test time of the traditional classification method.
## 5. Conclusion
This paper first defines the concept of features, introduces methods for the statistical characterisation of English characters, and presents the research direction and background of the subject. It then analyses the current state of language development: the diversification of word-meaning relationships between words has become the primary task of lexical semantics research, that is, choosing the correct method and model to express the relationships between words, which is the purpose of this paper. Next, it introduces the meaning of similarity measurement and the corresponding calculation algorithms, mainly covering similarity feature selection, similarity word embedding vectors, the distance between similarity numerical variables, and the calculation of the similarity coefficient. Finally, the efficiency of the similarity algorithms is compared and analysed, the similarity measurement fusing language features is calculated and analysed, and the CD_Sim method is tested according to the classification method; the experimental calculations are carried out and the results analysed.
---
*Source: 1019508-2022-08-24.xml* | 2022 |
# Exposure to Workplace Bullying: The Role of Coping Strategies in Dealing with Work Stressors
**Authors:** Whitney Van den Brande; Elfi Baillien; Tinne Vander Elst; Hans De Witte; Anja Van den Broeck; Lode Godderis
**Journal:** BioMed Research International
(2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1019529
---
## Abstract
Studies investigating both work- and individual-related antecedents of workplace bullying are scarce. In reply, this study investigated the interaction between workload, job insecurity, role conflict, and role ambiguity (i.e., work-related antecedents), and problem- and emotion-focused coping strategies (i.e., individual-related antecedents) in association with exposure to workplace bullying. Problem-focused coping strategies were hypothesised to decrease (i.e., buffer) the associations between workload, job insecurity, role conflict, and role ambiguity and exposure to bullying, while emotion-focused coping strategies were hypothesised to increase (i.e., amplify) these associations. Results for a heterogeneous sample (N = 3,105) did not provide evidence for problem-focused coping strategies as moderators. As expected, some emotion-focused coping strategies amplified the associations between work-related antecedents and bullying: employees using “focus on and venting of emotions” or “behavioural disengagement” in dealing with job insecurity, role conflict, or role ambiguity were more likely to be exposed to bullying. Similarly, “seeking social support for emotional reasons” and “mental disengagement” amplified the associations of role ambiguity and the associations of both role conflict and role ambiguity, respectively. To prevent bullying, organisations may train employees in tempering emotion-focused coping strategies, especially when experiencing job insecurity, role conflict, or role ambiguity.
---
## Body
## 1. Introduction
Workplace bullying is defined as the perceived situation in which an employee is systematically and repeatedly the target of work-related and/or personal negative acts at work [1]. Bullying has become an issue in many organisations. Prevalence rates range from 3% up to 15% in Europe [2], such that between 3% and 4% of European employees experience bullying behaviours weekly (i.e., serious bullying), while 9% to 15% experience bullying behaviours monthly (i.e., occasional bullying) [3]. As being exposed to workplace bullying is associated with health impairment, such as burnout [4], symptoms of posttraumatic stress disorder [5], and depression [6], studies have investigated antecedents that may prevent bullying [2, 7].

To date, these studies have mainly focused on work-related antecedents that trigger exposure to bullying [7], although scholars have also identified some individual-related antecedents such as low self-esteem and poor social skills [8]. Studies thus showed that exposure to workplace bullying is a multicausal phenomenon [9]. However, these studies focusing on work- or individual-related antecedents have been developed independently of each other, although scholars underlined that the interaction between both work- and individual-related antecedents should be investigated to fully grasp the origin of exposure to workplace bullying [9]. In line with this suggestion, scholars claim that the effect of work stressors (i.e., work-related antecedents) on their outcomes could be influenced by coping strategies (i.e., individual-related antecedents) [10]. Despite these claims, studies investigating the interaction between work stressors and coping strategies in relation to bullying are lacking [11].

In reply, this study aims to bridge the research lines on work-related and individual-related antecedents of workplace bullying by investigating the interaction between work stressors (i.e., workload, job insecurity, role conflict, and role ambiguity) and employees’ coping strategies (i.e., problem- and emotion-focused) in association with exposure to workplace bullying. By investigating how the interaction between these factors may prevent or evoke exposure to workplace bullying, this study may additionally identify possible work- and individual-related prevention areas.

Studies have particularly underlined the negative impact of workload [12], job insecurity [13], role conflict, and role ambiguity [14] on exposure to workplace bullying. A recent systematic review showed that these work stressors are the most important antecedents of exposure to workplace bullying [11]. The association between those work stressors and exposure to bullying may be theoretically substantiated by the Work Environment Hypothesis [15] and the General Strain Theory [16]: a poor psychosocial work environment (i.e., work stressors) may trigger exposure to bullying because it depletes employees’ energy, causing strain [16, 17]. Strained employees have difficulties in defending themselves against bullying acts and offer little resistance [17, 18]. Consequently, they become an “easy target” for exposure to workplace bullying [13].

The negative impact of work stressors on exposure to workplace bullying could be altered by coping strategies [10, 11]. In other words, employees’ coping strategies could be potential moderators of the association between work stressors and exposure to bullying. The literature defines coping in at least two ways.
Some studies conceptualise coping as fluctuating states depending on situational appraisals (i.e., state-like disposition) [19], while other studies found that the tendency to use certain coping strategies can be relatively stable over time and situations (i.e., trait-like disposition) [20, 21]. As the present study aims to investigate the interaction between work- and individual-related antecedents of exposure to workplace bullying, we align with the definition of coping strategies as a trait-like disposition. In this study, coping strategies refer to the employees’ tendency to make cognitive and behavioural efforts to manage, tolerate, or reduce work stressors [10]. These coping strategies are either oriented at tackling the problem (“problem-focused”) or at managing emotions associated with the stressor (“emotion-focused”) [10]. Carver et al. [22] identified “active coping,” “planning,” and “seeking social support for instrumental reasons” as important problem-focused coping strategies, while “focus on and venting of emotions,” “behavioural disengagement,” “mental disengagement,” and “seeking social support for emotional reasons” were identified as emotion-focused coping strategies.

According to the Three-Way Model of Workplace Bullying, work stressors may particularly trigger exposure to bullying when employees apply inefficient coping strategies, whereas applying efficient coping strategies may reduce exposure to bullying [23]. According to the pioneers in coping research, Lazarus and Folkman [10], emotion-focused coping strategies reduce the negative emotions associated with the stressor in the short term but may prevent employees from performing a suitable action to address the problem. Emotion-focused coping strategies may therefore impair employee well-being. This view is supported by previous studies indicating that “focus on and venting of emotions,” “behavioural disengagement,” “mental disengagement,” and “seeking social support for emotional reasons” are related to impaired well-being [e.g., [22, 24, 25]]. It also aligns with a recent review showing that using emotion-focused coping strategies as a dominant strategy is related to strain outcomes (e.g., emotional exhaustion and depersonalization) [26]. Emotion-focused coping strategies may thus be an inefficient way of coping with work stressors. Similarly, we propose that they may trigger exposure to workplace bullying: employees experiencing high levels of work stressors in combination with using inefficient coping strategies (i.e., emotion-focused coping strategies) tend to (unknowingly) breach well-established norms, habits, expectations, or values within their workplace [27]. For example, a stressed employee may look for distractions to avoid the problem and thus perform at a lower level than his/her colleagues. Colleagues may not accept that these norms are breached and may, in turn, try to restore the norms by punishing this employee or demonstrating negative acts towards them [Social Interactionist Theory; [23, 27, 28]]. Alternatively, a stressed employee may ventilate his/her emotions frequently to his/her colleagues, which may interfere with their work and hamper their performance. In reply, they may demonstrate negative acts towards the stressed employee for interfering with their work [Social Interactionist Theory; [23, 27, 28]]. In sum, we hypothesise the following.

Hypothesis 1.
Emotion-focused coping strategies increase the association between work stressors, including workload (H1a), job insecurity (H1b), role conflict (H1c), and role ambiguity (H1d), and exposure to workplace bullying (i.e., amplifying effects).

In contrast, problem-focused coping strategies may be efficient in dealing with work stressors, as they are aimed at solving the issue [10]. Previous studies have demonstrated that “active coping,” “planning,” and “seeking social support for instrumental reasons” were associated with positive health outcomes [e.g., [19, 22]] and were negatively correlated with strain outcomes, such as psychological symptoms and emotional exhaustion [26, 29]. Accordingly, we expect problem-focused coping strategies to decrease the association between work stressors and exposure to bullying: employees who cope with work stressors in a problem-focused way put effort into solving the problem instead of breaching valued norms, habits, expectations, or values [23, 27]. They gain control over the stressful situation by defining and interpreting the situation, planning solutions, and choosing a course of action, which may avoid or reduce exposure to bullying [10, 30]. In sum, we hypothesise the following.

Hypothesis 2.
Problem-focused coping strategies decrease the association between work stressors, including workload (H2a), job insecurity (H2b), role conflict (H2c), and role ambiguity (H2d), and exposure to workplace bullying (i.e., buffering effects).
## 2. Methods
### 2.1. Study Context and Participants
Cross-sectional data were collected from September until November 2014 by means of online and paper-and-pencil questionnaires distributed by an external service for optimising work environments (IDEWE). A total of 6,499 Flemish employees from 16 organisations in various sectors (i.e., healthcare, manufacturing, governmental, and service sectors) were invited to complete a questionnaire on psychosocial risk factors and work-related well-being [31]. All participants provided an informed consent that underlined the anonymity of their answers, stated that their participation was voluntary, and shared the researchers’ contact information. The Social and Societal Ethics Committee (SMEC) of KU Leuven approved the study protocol (G-2014 07 025).The final sample consisted of 3,105 Flemish employees (response rate of 48%) who completed the questionnaire. The mean age of the participants was 42 years (SD = 11.00). In total, 33% of the respondents were male, 68% had a full-time position, and 91% had a permanent contract. The participants were employed in healthcare (75%), manufacturing (9%), governmental (4%), and service (12%) sectors.
### 2.2. Measures
The variables were measured using established and internationally validated scales. The means, standard deviations, and correlations are presented in Table 1.

Table 1
Means, standard deviations, and correlations (N = 3,105).

| Variable | M | SD | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) | (10) | (11) | (12) | (13) | (14) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| (1) Age | 41.61 | 11.00 | — | .13∗∗ | .05∗ | −.09∗∗ | −.04 | −.13∗∗ | .03 | .10∗∗ | −.10∗∗ | −.09∗∗ | .05∗ | −.09∗∗ | −.18∗∗ | −.02 |
| (2) Male | n.a. | n.a. |  | — | −.02 | .01 | .08∗∗ | .09∗∗ | −.08∗∗ | .02 | −.12∗∗ | −.25∗∗ | .03 | −.08∗∗ | −.35∗∗ | .06∗∗ |
| (3) Workload | 3.42 | 0.84 |  |  | — | .10∗∗ | .43∗∗ | .17∗∗ | .07∗∗ | .05∗∗ | −.02 | .13∗∗ | .04∗ | .09∗ | .08∗∗ | .22∗∗ |
| (4) Job insecurity | 2.09 | 0.90 |  |  |  | — | .27∗∗ | .25∗∗ | −.07∗∗ | −.08∗∗ | −.02 | .13∗∗ | .16∗∗ | .14∗∗ | .02 | .29∗∗ |
| (5) Role conflict | 2.41 | 0.91 |  |  |  |  | — | .42∗∗ | −.07∗∗ | −.05∗∗ | −.03 | .17∗∗ | .23∗∗ | .20∗∗ | .02 | .46∗∗ |
| (6) Role ambiguity | 1.93 | 0.73 |  |  |  |  |  | — | −.16∗∗ | −.10∗∗ | −.08∗∗ | .13∗∗ | .16∗∗ | .13∗∗ | −.01 | .33∗∗ |
| (7) Active coping | 4.02 | 0.61 |  |  |  |  |  |  | — | .63∗∗ | .35∗∗ | −.08∗∗ | −.30∗∗ | −.08∗∗ | .12∗∗ | −.04∗ |
| (8) Planning | 3.71 | 0.76 |  |  |  |  |  |  |  | — | .39∗∗ | −.08∗∗ | −.26∗∗ | −.08∗∗ | .10∗∗ | −.03 |
| (9) SOCINSTR | 3.44 | 0.87 |  |  |  |  |  |  |  |  | — | .18∗∗ | −.07∗∗ | .06∗∗ | .38∗∗ | −.03 |
| (10) VENT | 2.22 | 0.80 |  |  |  |  |  |  |  |  |  | — | .38∗∗ | .36∗∗ | .43∗∗ | .19∗∗ |
| (11) BD | 1.70 | 0.71 |  |  |  |  |  |  |  |  |  |  | — | .42∗∗ | .04∗ | .22∗∗ |
| (12) MD | 2.36 | 0.75 |  |  |  |  |  |  |  |  |  |  |  | — | .27∗∗ | .19∗∗ |
| (13) SOCEMO | 3.08 | 0.98 |  |  |  |  |  |  |  |  |  |  |  |  | — | .06∗∗ |
| (14) EWB | 1.48 | 0.51 |  |  |  |  |  |  |  |  |  |  |  |  |  | — |
Note. n.a.: not applicable; SOCINSTR: seeking social support for instrumental reasons; VENT: focus on and venting of emotions; MD: mental disengagement; BD: behavioural disengagement; SOCEMO: seeking social support for emotional reasons; EWB: exposure to workplace bullying; ∗p < .05; ∗∗p < .01.

Exposure to workplace bullying (α = .85) was measured by means of the Short Negative Acts Questionnaire (S-NAQ) [32]. Respondents were asked to indicate how often they were confronted with a list of nine bullying acts during the last six months (e.g., “gossip or rumours about you”). The response categories ranged from “never” (= 1) to “now and then” (= 2), “monthly” (= 3), “weekly” (= 4), and “daily” (= 5).

Workload (α = .87) was assessed using three items from the Questionnaire Experience and Evaluation of Work (QEEW) [33], including “I have to work extra hard in order to complete a task.” Role ambiguity (α = .82) was measured using three items from the Short Inventory to Monitor Psychosocial Hazards (SIMPH) [34]; an example item is “I know exactly what others expect of me in my work (R).” Role conflict (α = .79) was measured using three items of the Work Conditions and Control Questionnaire (WOCCQ; e.g., “I receive contradictory instructions”) [35]. Job insecurity (α = .81) was measured using three items from the scale by Vander Elst et al. [36], for example, “I think I might lose my job in the near future.” The items for these work stressors were rated on a five-point Likert scale: “almost never” (= 1), “rather seldom” (= 2), “sometimes” (= 3), “often” (= 4), and “almost always” (= 5).

Coping strategies were assessed by 28 items from the COPE [22]. Following the idea that coping strategies represent individual factors expressing the tendency to apply certain strategies more than others, respondents were asked to indicate what they usually do when facing a stressful situation. The response categories varied from “almost never” (= 1), “rather seldom” (= 2), “sometimes” (= 3), and “often” (= 4) to “almost always” (= 5). Problem-focused coping strategies were measured with three subscales: four items tapped into “active coping” (e.g., “I concentrate my efforts on doing something about it”), four into “planning” (e.g., “I think hard about what steps to take”), and another four measured “seeking social support for instrumental reasons” (e.g., “I try to get advice from someone about what to do”). The alpha coefficients for these scales were .83, .85, and .91, respectively. Emotion-focused coping strategies were measured using four subscales with four items each: “focusing on and venting of emotions” (e.g., “I get upset and show my emotions”), “behavioural disengagement” (e.g., “I just give up trying to reach my goal”), “mental disengagement” (e.g., “I turn to work or other substitute activities to take my mind off things”), and “seeking social support for emotional reasons” (e.g., “I get sympathy and understanding from someone”). Cronbach’s alpha coefficients were .85, .86, .69, and .92, respectively.

Finally, age (years) and gender (0 = female, 1 = male) were measured.
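For reference, the reported Cronbach's alpha coefficients are conventionally computed from the item scores as follows. This sketch uses invented Likert responses, not the study data:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores for one scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the scale sums
    return k / (k - 1) * (1 - item_variances / total_variance)

# Invented responses: 5 respondents on a 3-item, five-point Likert scale.
scores = [[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 3], [1, 2, 1]]
print(round(cronbach_alpha(scores), 2))  # 0.95: high internal consistency
```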
### 2.3. Statistical Analyses
Analyses were performed with the software package AMOS 22. The construct validity of the scales was evaluated by means of Confirmatory Factor Analysis (CFA) [37]. The hypothesised measurement model contained 12 factors in which all items loaded on the corresponding latent variable (i.e., exposure to workplace bullying, workload, job insecurity, role conflict, role ambiguity, “active coping,” “planning,” “seeking social support for instrumental reasons,” “focusing on and venting of emotions,” “behavioural disengagement,” “mental disengagement,” and “seeking social support for emotional reasons”). We compared the measurement model with five alternative models: (1) a one-factor model in which all items were loaded on the same factor, (2) a four-factor model with general work stressors (i.e., the items of workload, job insecurity, role conflict, and role ambiguity), general problem-focused coping strategies (i.e., the items of “active coping,” “planning,” and “seeking social support for instrumental reasons”), general emotion-focused coping strategies (i.e., the items of “focusing on and venting of emotions,” “behavioural disengagement,” “mental disengagement,” and “seeking social support for emotional reasons”), and exposure to workplace bullying as latent factors, (3) a six-factor model with workload, job insecurity, role conflict, role ambiguity, general coping strategies (i.e., the items of “active coping,” “planning,” “seeking social support for instrumental reasons,” “focusing on and venting of emotions,” “behavioural disengagement,” “mental disengagement,” and “seeking social support for emotional reasons”), and exposure to workplace bullying as latent factors, (4) a seven-factor model with workload, job insecurity, role conflict, role ambiguity, general problem-focused coping strategies (i.e., the items of “active coping,” “planning,” and “seeking social support for instrumental reasons”), general emotion-focused coping strategies (i.e., the items of “focusing on and venting of emotions,” “behavioural disengagement,” “mental disengagement,” and “seeking social support for emotional reasons”), and exposure to workplace bullying as latent factors, and (5) a nine-factor model with general work stressors (i.e., the items of workload, job insecurity, role conflict, and role ambiguity), “active coping,” “planning,” “seeking social support for instrumental reasons,” “focusing on and venting of emotions,” “behavioural disengagement,” “mental disengagement,” “seeking social support for emotional reasons,” and exposure to workplace bullying as latent factors. In all models, the latent variables were allowed to covary. The χ2 difference test was used to compare the hypothesised measurement model with the alternative measurement models [37, 38]. The fit of the models was evaluated based on Comparative Fit Index (CFI), Tucker-Lewis Index (TLI), Root Mean Square Error of Approximation (RMSEA), and Standardized Root Mean Residual (SRMR) [38]. Values above .90 for CFI and TLI indicate a good fit, while values above .95 indicate an excellent fit [38, 39]. Values close to .08 for RMSEA and values close to .10 for SRMR indicate a relatively good fit between the measurement model and the observed data [38, 39]. Values below .05 for RMSEA and values below .09 for SRMR indicate an excellent fit [38].In line with Bakker et al. [40] and following the procedure of Mathieu et al. [41, 42], we investigated the hypotheses by means of Moderated Structural Equation Modelling (MSEM). 
MSEM was used because it has the ability to (a) assess and correct for measurement error and (b) provide measures of fit of the models under investigation [37]. For each pair of a work stressor and a coping strategy, two models were tested and compared: (1) a model without an interaction factor and (2) a model with an interaction factor. In the model without the interaction, one of the four work stressors and one of the seven coping strategies were modelled as the exogenous factors and workplace bullying was the endogenous factor. To this model, a factor reflecting the interaction between the work stressor and the coping strategy was added (i.e., model with interaction factor). The interaction term was calculated by multiplying the centred scale scores for the respective work stressor and coping strategy [43]. In both models, the centred scale score for the respective variable indicated the exogenous factors. The exogenous factors were allowed to covary. The error variance of each indicator was set equal to the product of its variance and one minus its reliability [41, 42]. The paths from the exogenous factors to their indicator were calculated using the square roots of the scale reliabilities [40–42, 44]. The reliability of the interaction term was calculated using the formula as described in Cortina et al. [42]. The path coefficients were estimated and the fit of each model was evaluated using CFI, TLI, RMSEA, and SRMR. The interaction effects were considered significant when (a) the Unstandardized Path Coefficient (UPC) from the interaction term to the endogenous factor (i.e., exposure to workplace bullying) was statistically significant and (b) the χ2 difference test indicated that the model with the latent interaction factor fits the data better in comparison to the model without the latent interaction factor. As we tested the relationships in this study in a pairwise manner, a Bonferroni correction of p < .002 (instead of p < .05) was used.
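As a simplified illustration of the moderation logic (not the measurement-error-corrected MSEM fitted in AMOS), the following sketch builds the interaction term from centred scores on simulated data and estimates its effect by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3105  # same sample size as the study; the data themselves are simulated

# Centred scale scores (stand-ins for, e.g., role conflict and venting).
stressor = rng.normal(size=n)
coping = rng.normal(size=n)
interaction = stressor * coping  # product of the centred scores [43]
bullying = 0.3 * stressor + 0.1 * coping + 0.15 * interaction + rng.normal(size=n)

# Ordinary least squares as a simplified stand-in for the MSEM path model.
X = np.column_stack([np.ones(n), stressor, coping, interaction])
coef, *_ = np.linalg.lstsq(X, bullying, rcond=None)
print(coef)  # the last coefficient recovers the moderation (interaction) effect
# In the study, an effect was only retained at the Bonferroni-corrected p < .002.
```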
## 3. Results
### 3.1. Construct Validity of the Measurement Model
Table 2 shows that the proposed 12-factor model fitted the data well and better than the five alternative models, providing evidence for the hypothesised dimensionality of the study scales. While the RMSEA and SRMR values indicated an excellent model fit [45], the CFI and TLI values did not meet the strict standards for an excellent model fit. Nevertheless, these CFI and TLI values were comparable to what many others consider to represent an adequate model fit [45].
Table 2. Results of Confirmatory Factor Analysis (N=3,105).

| Model | Latent factors | χ2 | df | CFI | TLI | RMSEA | SRMR | Model comparison | Δχ2 | Δdf |
|---|---|---|---|---|---|---|---|---|---|---|
| (1) 12-factor model | WL, JI, RC, RA, AC, PL, SOCINSTR, VENT, MD, BD, SOCEMO, EWB | 7009.54∗∗∗ | 1061 | .92 | .92 | .04 | .04 | / | / | / |
| (2) One-factor model | General factor | 62958.46∗∗∗ | 1127 | .21 | .18 | .13 | .15 | (2) versus (1) | 55948.92∗∗∗ | 66 |
| (3) Four-factor model | Stressors, PFC, EFC, EWB | 38167.12∗∗∗ | 1121 | .53 | .51 | .10 | .11 | (3) versus (1) | 31157.58∗∗∗ | 60 |
| (4) Six-factor model | WL, RA, JI, RC, General coping, EWB | 39075.20∗∗∗ | 1112 | .52 | .49 | .11 | .13 | (4) versus (1) | 32065.66∗∗∗ | 51 |
| (5) Seven-factor model | WL, JI, RC, RA, PFC, EFC, EWB | 29109.328∗∗∗ | 1106 | .64 | .62 | .09 | .10 | (5) versus (1) | 22099.79∗∗∗ | 45 |
| (6) Nine-factor model | Stressors, AC, PL, SOCINSTR, VENT, MD, BD, SOCEMO, EWB | 16145.45∗∗∗ | 1091 | .81 | .79 | .07 | .06 | (6) versus (1) | 9135.91∗∗∗ | 30 |

Note. WL: workload; RA: role ambiguity; JI: job insecurity; RC: role conflict; PFC: problem-focused coping; EFC: emotion-focused coping; EWB: exposure to workplace bullying; AC: active coping; PL: planning; SOCINSTR: seeking social support for instrumental reasons; VENT: focus on and venting of emotions; MD: mental disengagement; BD: behavioural disengagement; SOCEMO: seeking social support for emotional reasons; each alternative model is compared against the hypothesised 12-factor model; ∗∗∗p<.001.
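The Δχ2 column in Table 2 follows directly from subtracting the χ2 statistics and degrees of freedom of the two nested models being compared. A minimal sketch, assuming scipy is available, reproduces the one-factor versus 12-factor comparison:

```python
from scipy.stats import chi2

# One-factor model versus the hypothesised 12-factor model (Table 2)
chi2_alt, df_alt = 62958.46, 1127   # one-factor model
chi2_hyp, df_hyp = 7009.54, 1061    # 12-factor model

delta_chi2 = chi2_alt - chi2_hyp    # 55948.92, as reported in Table 2
delta_df = df_alt - df_hyp          # 66
p_value = chi2.sf(delta_chi2, delta_df)  # effectively zero here

# A significant result means the more constrained (one-factor) model
# fits significantly worse than the 12-factor model.
print(f"Delta chi2 = {delta_chi2:.2f}, Delta df = {delta_df}, p = {p_value:.3g}")
```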
### 3.2. Tests of the Hypotheses
Table 3 shows the results of the hypothesised moderating effects (information regarding the main effects of the investigated work stressors on exposure to workplace bullying can be retrieved by sending an e-mail to [email protected]). Our first hypothesis was partially confirmed. Although we found no evidence for the moderating role of emotion-focused coping strategies in the association between workload and exposure to workplace bullying, some emotion-focused coping strategies moderated the associations of job insecurity, role conflict, and role ambiguity with exposure to bullying. For these tests, the UPCs were significant under the Bonferroni-corrected threshold of p<.002 and the models with the interaction term fitted the data significantly better than the models without an interaction term. In line with our expectations, plots of the significant interaction effects revealed amplifying effects of emotion-focused coping strategies (Figure 1). Specifically, employees using “focus on and venting of emotions” or “behavioural disengagement” when experiencing job insecurity, role conflict, or role ambiguity were more likely to be exposed to bullying. Similar results were found for employees using “mental disengagement” in the case of role conflict and role ambiguity, and for employees using “seeking social support for emotional reasons” in the case of role ambiguity.
Table 3. Results of Moderated Structural Equation Modelling analyses for the interaction between work stressors and coping strategies (N=3,105).

| Interaction effect | UPC | SE | SPC | χ2 | CFI | TLI | RMSEA | SRMR | Δχ2 | Δdf |
|---|---|---|---|---|---|---|---|---|---|---|
| *Active coping* |  |  |  |  |  |  |  |  |  |  |
| Workload × active coping | .003 | .007 | .011 | 1099.732∗∗∗ | .894 | .868 | .080 | .048 | 82.209∗∗∗ | 10 |
| Job insecurity × active coping | .011 | .007 | .036 | 871.594∗∗∗ | .917 | .896 | .071 | .040 | 30.760∗∗∗ | 10 |
| Role conflict × active coping | .007 | .006 | .024 | 1232.564∗∗∗ | .888 | .860 | .085 | .050 | 17.594 | 10 |
| Role ambiguity × active coping | .006 | .008 | .016 | 1034.238∗∗∗ | .903 | .879 | .077 | .045 | 29.625∗∗∗ | 10 |
| *Planning* |  |  |  |  |  |  |  |  |  |  |
| Workload × planning | .007 | .006 | .026 | 1056.502∗∗∗ | .898 | .873 | .078 | .046 | 19.927∗ | 10 |
| Job insecurity × planning | .021b | .007 | .073 | 888.747∗∗∗ | .915 | .894 | .071 | .040 | 33.602∗∗∗ | 10 |
| Role conflict × planning | .012 | .005 | .052 | 1259.622∗∗∗ | .886 | .858 | .086 | .051 | 32.901∗∗∗ | 10 |
| Role ambiguity × planning | .012 | .007 | .037 | 1071.857∗∗∗ | .899 | .874 | .079 | .047 | 58.995∗∗∗ | 10 |
| *Seeking social support for instrumental reasons* |  |  |  |  |  |  |  |  |  |  |
| Workload × seeking social support for instrumental reasons | .007 | .005 | .028 | 1058.567∗∗∗ | .898 | .873 | .078 | .047 | 15.851 | 10 |
| Job insecurity × seeking social support for instrumental reasons | .008 | .005 | .036 | 876.077∗∗∗ | .916 | .895 | .071 | .039 | 9.203 | 10 |
| Role conflict × seeking social support for instrumental reasons | .008 | .004 | .039 | 1250.407∗∗∗ | .886 | .858 | .085 | .050 | 13.735 | 10 |
| Role ambiguity × seeking social support for instrumental reasons | .010 | .006 | .037 | 1052.848∗∗∗ | .900 | .876 | .078 | .047 | 36.766∗∗∗ | 10 |
| *Focus on and venting of emotions* |  |  |  |  |  |  |  |  |  |  |
| Workload × focus on and venting of emotions | .002 | .005 | .007 | 1055.240∗∗∗ | .900 | .875 | .078 | .047 | 35.610∗∗∗ | 10 |
| Job insecurity × focus on and venting of emotions | .021∗∗∗ | .005 | .089 | 885.721∗∗∗ | .916 | .896 | .071 | .040 | 42.092∗∗∗ | 10 |
| Role conflict × focus on and venting of emotions | .023∗∗∗ | .005 | .104 | 1254.362∗∗∗ | .888 | .860 | .085 | .051 | 37.986∗∗∗ | 10 |
| Role ambiguity × focus on and venting of emotions | .032∗∗∗ | .008 | .101 | 1022.669∗∗∗ | .904 | .881 | .077 | .045 | 22.121∗ | 10 |
| *Mental disengagement* |  |  |  |  |  |  |  |  |  |  |
| Workload × mental disengagement | .005 | .006 | .022 | 1038.441∗∗∗ | .901 | .877 | .077 | .046 | 17.059 | 10 |
| Job insecurity × mental disengagement | .017 | .005 | .074 | 858.832∗∗∗ | .919 | .899 | .070 | .039 | 12.695 | 10 |
| Role conflict × mental disengagement | .018∗∗∗ | .005 | .085 | 1241.448∗∗∗ | .889 | .862 | .085 | .050 | 27.217∗∗ | 10 |
| Role ambiguity × mental disengagement | .023b | .007 | .079 | 1047.318∗∗∗ | .902 | .878 | .078 | .046 | 44.578∗∗∗ | 10 |
| *Behavioural disengagement* |  |  |  |  |  |  |  |  |  |  |
| Workload × behavioural disengagement | .012 | .006 | .042 | 1044.847∗∗∗ | .901 | .876 | .078 | .046 | 11.138 | 10 |
| Job insecurity × behavioural disengagement | .020∗∗∗ | .006 | .075 | 949.500∗∗∗ | .911 | .889 | .074 | .044 | 96.242∗∗∗ | 10 |
| Role conflict × behavioural disengagement | .027∗∗∗ | .005 | .110 | 1323.539∗∗∗ | .883 | .854 | .088 | .055 | 100.071∗∗∗ | 10 |
| Role ambiguity × behavioural disengagement | .024∗∗∗ | .007 | .074 | 1074.327∗∗∗ | .900 | .876 | .079 | .049 | 63.595∗∗∗ | 10 |
| *Seeking social support for emotional reasons* |  |  |  |  |  |  |  |  |  |  |
| Workload × seeking social support for emotional reasons | .001 | .005 | .003 | 1062.705∗∗∗ | .898 | .873 | .078 | .047 | 25.580∗∗ | 10 |
| Job insecurity × seeking social support for emotional reasons | .009 | .004 | .047 | 866.678∗∗∗ | .917 | .897 | .070 | .039 | 8.944 | 10 |
| Role conflict × seeking social support for emotional reasons | .013 | .005 | .057 | 1246.383∗∗∗ | .887 | .859 | .085 | .049 | 18.30 | 10 |
| Role ambiguity × seeking social support for emotional reasons | .018∗∗∗ | .005 | .074 | 1034.948∗∗∗ | .902 | .878 | .077 | .046 | 22.219∗ | 10 |

Note. UPC: Unstandardized Path Coefficient; SE: standard error; SPC: Standardized Path Coefficient; the model comparison contrasts the fit of the model with the interaction term against the fit of the model without it; ∗p<.05; ∗∗p<.01; ∗∗∗p<.001; b: p<.002.
Figure 1. Plots of the significant interaction effects between work stressors and coping strategies in the prediction of exposure to workplace bullying.

Our second hypothesis was rejected, as problem-focused coping strategies did not buffer the association between the work stressors (i.e., workload, job insecurity, role conflict, and role ambiguity) and exposure to workplace bullying. Although for some interactions the models with the interaction term fitted the data significantly better, the UPCs were not significant (p>.002). Notably, employees using “planning” strategies when experiencing job insecurity were more likely to be exposed to bullying (see Figure 1).

As the demographic variable of gender (0 = female; 1 = male) was positively correlated with exposure to workplace bullying (Table 1), we reran all 28 pairwise models controlling for gender. However, these analyses did not alter our conclusions. Age was not associated with exposure to workplace bullying and was therefore not included in this analysis.
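Plots such as those in Figure 1 are commonly produced by drawing simple slopes at plus and minus one standard deviation of the moderator. The sketch below, assuming matplotlib, uses the UPC reported in Table 3 for the role conflict × behavioural disengagement interaction and the standard deviations from Table 1; the intercept and main-effect coefficients are hypothetical placeholders, since the main effects are only available from the authors on request.

```python
import numpy as np
import matplotlib.pyplot as plt

b_interaction = 0.027                          # UPC from Table 3 (RC x BD)
b0, b_stressor, b_coping = 1.48, 0.25, 0.15    # hypothetical placeholders

sd_stressor, sd_coping = 0.91, 0.71            # SDs of RC and BD from Table 1
x = np.linspace(-sd_stressor, sd_stressor, 50) # centred role conflict

for z, label in ((-sd_coping, "Low behavioural disengagement (-1 SD)"),
                 (+sd_coping, "High behavioural disengagement (+1 SD)")):
    # Simple slope of the stressor at a fixed level of the moderator
    y = b0 + b_stressor * x + b_coping * z + b_interaction * x * z
    plt.plot(x, y, label=label)

plt.xlabel("Role conflict (centred)")
plt.ylabel("Predicted exposure to workplace bullying")
plt.legend()
plt.show()
```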
## 4. Discussion
To our knowledge, this is the first study to investigate the moderating role of problem- and emotion-focused coping strategies in the association between work stressors and exposure to workplace bullying.

The results provided partial support for our first hypothesis on the amplifying effects of emotion-focused coping strategies in the association between work stressors and exposure to workplace bullying. Judging by the magnitude of the observed UPCs, all interaction effects were of similar and rather small size. First, most interaction effects were found for “focus on and venting of emotions” and “behavioural disengagement.” When experiencing job insecurity, role conflict, or role ambiguity, employees using these emotion-focused coping strategies were more likely to be exposed to bullying than employees not using these strategies. Second, two interaction effects were found for “mental disengagement”: employees with the tendency to use “mental disengagement” in the case of role conflict or role ambiguity were more likely to be exposed to bullying. Finally, one interaction effect was found for “seeking social support for emotional reasons”: employees with the tendency to use this strategy in the case of role ambiguity were more likely to be exposed to workplace bullying.

From an empirical perspective, these results align with previous studies on coping and strain outcomes. For example, a longitudinal study showed that emotion-focused coping strategies amplified the negative impact of role conflict on emotional exhaustion [46]. Moreover, Chen and Kao [47] found evidence for emotion-focused coping strategies as an amplifier in the association between job hassles and burnout. From a theoretical perspective, it seems that applying emotion-focused coping strategies in combination with specific work stressors (i.e., job insecurity, role conflict, or role ambiguity) makes employees more vulnerable to bullying. According to the Three-Way Model of Workplace bullying and to Social Interactionism, employees may unknowingly breach habits and values within their organisation, making them “easy” targets for workplace bullying [23, 27].

Notably, these results contradict recent suggestions in the work stress literature differentiating work stressors in terms of job hindrances and job challenges. In this literature, job hindrances (i.e., role conflict, role overload, and job insecurity) are defined as uncontrollable obstacles that hinder optimal functioning [48, 49], whereas job challenges (i.e., workload) are work stressors that require some energy but are nonetheless stimulating and help in achieving goals [48, 49]. The challenges-hindrances literature assumes that emotion-focused coping strategies are not helpful in reducing the potential negative impact of job challenges: as job challenges are perceived as controllable and may be helpful in achieving goals, using problem-focused coping strategies would be more beneficial [49, 50]. In contrast, as job hindrances are uncontrollable, emotion-focused coping strategies are assumed to be more appropriate for reducing their negative impact, while problem-focused coping strategies are assumed to increase it [49, 50]. Our findings, however, show that emotion-focused coping strategies amplify rather than buffer the association between job hindrances (i.e., job insecurity, role conflict, and role ambiguity) and exposure to workplace bullying.
They thus contradict recent arguments in the work stress literature but are in line with the well-established view of the Three-Way Model of Workplace bullying [23] and of Lazarus and Folkman [10].

Contrary to our expectations, no interaction effects between workload and the investigated emotion-focused coping strategies were found. This finding contradicts recent developments in the work stress literature arguing that emotion-focused coping strategies would be problematic in dealing with job challenges [49, 50]. Future research should investigate a wider range of coping strategies that would be relevant for workload, such as cognitive reframing [51]. Cognitive reframing might be a more efficient coping strategy than the other investigated strategies, as it may influence the way employees perceive workload. By applying cognitive reframing as a coping strategy, the situation may become less stressful: it may change the perception of the initial stressor in a way that reduces the perceived workload [52].

Our second hypothesis was rejected: we found no evidence for the buffering role of problem-focused coping strategies. Moreover, in contrast to our expectations, “planning” (i.e., a problem-focused coping strategy) amplified rather than buffered the association between job insecurity and exposure to workplace bullying. Employees using “planning” to deal with job insecurity were more likely to be exposed to bullying. Although unexpected, this finding aligns with previous results showing that problem-focused coping in combination with job insecurity is associated with negative outcomes in terms of low job satisfaction and high turnover intention [53]. Our results extend those findings to exposure to workplace bullying. From a theoretical perspective, our findings can be explained through the work of Folkman et al. [19], who state that the efficiency of coping strategies depends on the source of the stressor. Problem-focused coping strategies are more efficient when the source of the stressor is clear or controllable [10]. In the case of job insecurity, the source of the uncertain environment is unclear, and employees often are not able to control or handle the economic status of their company [53]. This also aligns with the challenges-hindrances literature arguing that problem-focused coping strategies are less effective for job hindrances (i.e., job insecurity) and thus increase their negative impact on strain outcomes (i.e., workplace bullying), as described earlier [49, 50]. As the efficiency of a coping strategy may depend on how well it fits with a particular stressor [51], further research is needed to investigate specific combinations of work stressors and coping strategies to determine which strategies are most appropriate to prevent exposure to workplace bullying.
### 4.1. Limitations and Paths for Future Research
Some limitations should be considered in interpreting the findings of this study. First, this study has a cross-sectional research design. Consequently, the conclusions do not allow us to determine the direction of the predicted associations. However, our research model was based on multiple previous longitudinal studies that already identified causal (cross-lagged) relationships from work stressors to exposure to workplace bullying rather than the other way around [54, 55]. Moreover, cross-sectional data might be appropriate to investigate interaction effects, because the moderator is not part of a causal sequence but qualifies an association between variables [56]. Nevertheless, we advise future studies to use a longitudinal design to replicate our findings and to investigate the moderating role of coping strategies in the lagged relationship from work stressors to exposure to workplace bullying. Notably, as workplace bullying can also be considered a social stressor [e.g., [57]], it may be interesting to investigate the moderating role of coping strategies in the lagged relationship from exposure to workplace bullying to strain. This aligns with Lazarus and Folkman [10], who equally suggest that coping strategies may influence the impact of workplace bullying on its outcomes. As mentioned above, the authors state that the efficiency of coping strategies depends on the controllability of the perceived stressor (i.e., workplace bullying) [10]. Moreover, Lazarus and Folkman [10] propose that problem-focused coping strategies are efficient when the stressor is perceived as controllable, while emotion-focused coping strategies are expected to be efficient when the stressor is perceived as uncontrollable [58]. Exposure to workplace bullying is typically defined as uncontrollable: we thus expect that emotion-focused coping strategies would reduce the association between exposure to bullying and strain, while problem-focused coping strategies would amplify it [59]. This theoretical reasoning aligns with recent findings from previous studies investigating these hypotheses [e.g., [59]]. However, it would be interesting to examine whether the proposed associations between work stressors and coping strategies are the same for employees exposed to workplace bullying as compared to employees not exposed to bullying.

Second, due to the use of self-reported measures, common method bias may have inflated the associations between our study variables [60]. However, self-reported measures are appropriate in this study because we aimed to investigate the way employees (a) perceived work stressors, (b) preferred the use of certain coping strategies, and (c) perceived or experienced acts of workplace bullying. Additionally, self-reported measures are dominantly used in research on workplace bullying [61]. We attempted to reduce the risk of common method bias by emphasizing the voluntary nature of this study and the anonymous treatment of the study results, and by demonstrating the construct validity of the study scales in a series of CFAs. Nevertheless, future research should consider using multisource data to avoid problems with common method bias.

Third, as we used pairwise tests and the same relationships were tested repeatedly, a Bonferroni correction with p<.002 (instead of p<.05 or p<.01) was used.
This correction may have led to conservative conclusions: several hypotheses were rejected at the .002 level but could have been accepted at the .05 level (e.g., the interaction between workload and behavioural disengagement) or at the .01 level (e.g., the interaction between workload and seeking social support for emotional reasons). Nevertheless, by applying a Bonferroni correction, we reduced the risk of Type I errors [62]. Moreover, because of the relatively large sample size, this is much less of an issue in this study [63].

Fourth, our study sample did not represent all sectors. For example, employees working in the education sector and the construction industry were not included in our sample. Furthermore, employees working in the health care sector were overrepresented (75%). Therefore, researchers should be careful about generalising our conclusions to employees working in all sectors. However, we do not believe that the sample composition affected our results, nor that using a more representative sample would have led to other results [64]. Previous research found no differences regarding exposure to workplace bullying between health care workers and employees working in other sectors [65].

Fifth, it would be interesting to investigate the moderating role of coping strategies in the association between other antecedents and exposure to workplace bullying. For example, a prospective study showed that mental distress predicts exposure to workplace bullying, indicating that individual characteristics may make employees more vulnerable to bullying [66]. Following the results of our study, it would be interesting to also investigate the moderating role of problem- and emotion-focused coping strategies (i.e., individual-related factors) in the association between mental distress and exposure to workplace bullying, to examine whether individual factors may also be a risk factor for becoming bullied.

Finally, this study focused on targets of workplace bullying. However, future studies should investigate the moderating role of coping strategies in the association between work stressors and workplace bullying from the perspective of the perpetrator. Indeed, high levels of work stressors in combination with inefficient coping strategies may produce irritation and hostility, which may result in demonstrating negative acts towards coworkers. This view aligns with the Frustration-Aggression hypothesis [67]: when dealing with frustrations and the accompanying negative emotions, employees may act out these frustrations through negative actions [67]. This process can be amplified by inefficient coping mechanisms, because such employees do not reduce the antecedent conditions that cause their frustrations. As a result, they become more frustrated and demonstrate negative acts towards colleagues. Future research is needed to explore this hypothesis.
## 5. Conclusion
This study investigated the moderating role of employees’ problem- and emotion-focused coping strategies in the association between work stressors and exposure to workplace bullying. As expected, some emotion-focused coping strategies amplified the association between work stressors and exposure to bullying. However, we found no evidence for the buffering role of problem-focused coping strategies in the association between work stressors and being bullied. Based on our results, we advise organisations to implement interventions that focus on making employees aware of the possible amplifying effects of emotion-focused coping strategies when they are experiencing job insecurity, role conflict, and/or role ambiguity. We advise future research to investigate specific combinations of different (types of) work stressors and coping strategies to determine which coping strategies are efficient in preventing workplace bullying.
---
*Source: 1019529-2017-11-15.xml* | 1019529-2017-11-15_1019529-2017-11-15.md | 70,196 | Exposure to Workplace Bullying: The Role of Coping Strategies in Dealing with Work Stressors | Whitney Van den Brande; Elfi Baillien; Tinne Vander Elst; Hans De Witte; Anja Van den Broeck; Lode Godderis | BioMed Research International
(2017) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2017/1019529 | 1019529-2017-11-15.xml | ---
## Abstract
Studies investigating both work- and individual-related antecedents of workplace bullying are scarce. In reply, this study investigated the interaction between workload, job insecurity, role conflict, and role ambiguity (i.e., work-related antecedents), and problem- and emotion-focused coping strategies (i.e., individual-related antecedents) in association with exposure to workplace bullying. Problem-focused coping strategies were hypothesised to decrease (i.e., buffer) the associations between workload, job insecurity, role conflict, and role ambiguity and exposure to bullying, while emotion-focused coping strategies were hypothesised to increase (i.e., amplify) these associations. Results for a heterogeneous sample (N = 3,105) did not provide evidence for problem-focused coping strategies as moderators. As expected, some emotion-focused coping strategies amplified the associations between work-related antecedents and bullying: employees using “focus on and venting of emotions” or “behavioural disengagement” in dealing with job insecurity, role conflict, or role ambiguity were more likely to be exposed to bullying. Similarly, “seeking social support for emotional reasons” and “mental disengagement” amplified the associations of role ambiguity and the associations of both role conflict and role ambiguity, respectively. To prevent bullying, organisations may train employees in tempering emotion-focused coping strategies, especially when experiencing job insecurity, role conflict, or role ambiguity.
---
## Body
## 1. Introduction
Workplace bullying is defined as the perceived situation in which an employee is systematically and repeatedly thetargetof work-related and/or personal negative acts at work [1]. Bullying has become an issue in many organisations. Prevalence rates range from 3% up to 15% in Europe [2], such that between 3% and 4% of European employees experience bullying behaviours weekly (i.e., serious bullying), while 9% to 15% experience bullying behaviours monthly (i.e., occasional bullying) [3]. As being exposed to workplace bullying is associated with health impairment—such as burnout [4], symptoms of posttraumatic stress disorder [5], and depression [6]—studies have investigated antecedents that may prevent bullying [2, 7].To date, these studies have mainly focused on work-related antecedents that trigger exposure to bullying [7], although scholars have also identified some individual-related antecedents such as low self-esteem and poor social skills [8]. Studies thus showed that exposure to workplace bullying is a multicausal phenomenon [9]. However, these studies focusing on work- or individual-related antecedents have been developed independently of each other, although scholars underlined that the interaction betweenboth work- and individual-related antecedents should be investigated to fully grasp the origin of exposure to workplace bullying [9]. In line with this suggestion, scholars claim that the effect of work stressors (i.e., work-related antecedents) on their outcomes could be influenced by coping strategies (i.e., individual-related antecedents) [10]. Despite these claims, studies investigating the interaction between work stressors and coping strategies to bullying are lacking [11].In reply, this study aims to bridge the research lines on work-relatedand individual-related antecedents of workplace bullying by investigating the interaction between work stressors (i.e., workload, job insecurity, role conflict, and role ambiguity) and employees’ coping strategies (i.e., problem- and emotion-focused) in association to exposure to workplace bullying. By investigating how the interaction between these factors may prevent or evoke exposure to workplace bullying, this study may additionally identify possible work- and individual-related prevention areas.Studies have particularly underlined the negative impact of workload [12], job insecurity [13], role conflict, and role ambiguity [14] on exposure to workplace bullying. A recent systematic review showed that these work stressors are the most important antecedents of exposure to workplace bullying [11]. The association between those work stressors and exposure to bullying may be theoretically substantiated by the Work Environment Hypothesis [15] and the General Strain Theory [16]: a poor psychosocial work environment (i.e., work stressors) may trigger exposure to bullying because it depletes employees’ energy, causing strain [16, 17]. Strained employees have difficulties in defending themselves against bullying acts and offer little resistance [17, 18]. Consequently, they become an “easy target” for exposure to workplace bullying [13].The negative impact of work stressors on exposure to workplace bullying could be altered by coping strategies [10, 11]. In other words, employees’ coping strategies could be potential moderators of the association between work stressors and exposure to bullying. The literature defines coping in at least two ways. 
Some studies conceptualise coping as fluctuating states depending on situational appraisals (i.e., state-like disposition) [19], while other studies found that the tendency to use certain coping strategies can be relatively stable over time and situations (i.e., trait-like disposition) [20, 21]. As the present study aims to investigate the interaction between work- and individual-related antecedents of exposure to workplace bullying, we align with the definition of coping strategies as a trait-like disposition. In this study, coping strategies refer to theemployees’ tendency to make cognitive and behavioural efforts to manage, tolerate, or reduce work stressors [10]. These coping strategies are either oriented at tackling the problem (“problem-focused”) or at managing emotions associated with the stressor (“emotion-focused”) [10]. Carver et al. [22] identified “active coping,” “planning,” and “seeking social support for instrumental reasons” as important problem-focused coping strategies, while “focus on and venting of emotions,” “behavioural disengagement,” “mental disengagement,” and “seeking social support for emotional reasons” were identified as emotion-focused coping strategies.According to the Three-Way Model of Workplace bullying, work stressors may particularly trigger exposure to bullying when employees apply inefficient coping strategies, whereas applying efficient coping strategies may reduce exposure to bullying [23]. According to the pioneers in coping research, Lazarus and Folkman [10], emotion-focused coping strategies reduce the negative emotions associated with the stressor in the short term but may prevent employees from performing a suitable action to address the problem. Emotion-focused coping strategies may therefore impair employee well-being. This view is supported by previous studies indicating that “focus on and venting of emotions,” “behavioural disengagement,” “mental disengagement,” and “seeking social support for emotional reasons” are related to impaired well-being [e.g., [22, 24, 25]]. It also aligns with a recent review showing that using emotion-focused coping strategies as a dominant strategy is related to strain outcomes (e.g., emotional exhaustion and depersonalization) [26]. Emotion-focused coping strategies may thus be an inefficient way of coping with work stressors. Similarly, we propose that they may trigger exposure to workplace bullying: employees experiencing high levels of work stressors in combination with using inefficient coping strategies (i.e., emotion-focused coping strategies) tend to (unknowingly) breach well-established norms, habits, expectations, or values within their workplace [27]. For example, a stressed employee may look for distractions to avoid the problem and thus perform at a lower level than his/her colleagues. Colleagues may not accept that these norms are breached and may, in turn, try to restore the norms by punishing this employee or demonstrating negative acts towards them [Social Interactionist Theory; [23, 27, 28]]. Alternatively, a stressed employee may ventilate his/her emotions frequently to his/her colleagues, which may interfere with their work and hamper their performance. In reply, they may demonstrate negative acts towards the stressed employee for interfering with their work [Social Interactionist Theory; [23, 27, 28]]. In sum, we hypothesise the following.Hypothesis 1.
Emotion-focused coping strategies increase the association between work stressors, including workload(H1a), job insecurity(H1b), role conflict(H1c), and role ambiguity(H1d), and exposure to workplace bullying (i.e., amplifying effects).In contrast, problem-focused coping strategies may be efficient in dealing with work stressors, as they are focused at solving the issue [10]. Previous studies have demonstrated that “active coping,” “planning,” and “seeking social support for instrumental reasons” were associated with positive health outcomes [e.g., [19, 22]] and were negatively correlated with strain outcomes, such as psychological symptoms and emotional exhaustion [26, 29]. Accordingly, we expect problem-focused coping strategies to decrease the association between work stressors and exposure to bullying: employees who cope with work stressors in a problem-focused way are putting effort into solving the problem instead of breaching valued norms, habits, expectations, or values [23, 27]. They gain control over the stressful situation by defining and interpreting the situation, planning solutions, and choosing a course of action which may avoid or reduce exposure to bullying [10, 30]. In sum, we hypothesise the following.Hypothesis 2.
Problem-focused coping strategies decrease the association between work stressors, including workload(H2a), job insecurity(H2b), role conflict(H2c), and role ambiguity(H2d), and exposure to workplace bullying (i.e., buffering effects).
## 2. Methods
### 2.1. Study Context and Participants
Cross-sectional data were collected from September until November 2014 by means of online and paper-and-pencil questionnaires distributed by an external service for optimising work environments (IDEWE). A total of 6,499 Flemish employees from 16 organisations in various sectors (i.e., healthcare, manufacturing, governmental, and service sectors) were invited to complete a questionnaire on psychosocial risk factors and work-related well-being [31]. All participants provided an informed consent that underlined the anonymity of their answers, stated that their participation was voluntary, and shared the researchers’ contact information. The Social and Societal Ethics Committee (SMEC) of KU Leuven approved the study protocol (G-2014 07 025).The final sample consisted of 3,105 Flemish employees (response rate of 48%) who completed the questionnaire. The mean age of the participants was 42 years (SD = 11.00). In total, 33% of the respondents were male, 68% had a full-time position, and 91% had a permanent contract. The participants were employed in healthcare (75%), manufacturing (9%), governmental (4%), and service (12%) sectors.
### 2.2. Measures
The variables were measured using established and internationally validated scales. The means, standard deviations, and correlations are presented in Table1.Table 1
Means, standard deviations, and correlations (N=3,105).
M
SD
(1)
(2)
(3)
(4)
(5)
(6)
(7)
(8)
(9)
(10)
(11)
(12)
(13)
(14)
(1) Age
41.61
11.00
—
.13∗∗
.05∗
−.09∗∗
−.04
−.13∗∗
.03
.10∗∗
−.10∗∗
−.09∗∗
.05∗
−.09∗∗
−.18∗∗
−.02
(2) Male
n.a.
n.a.
—
−.02
.01
.08∗∗
.09∗∗
−.08∗∗
.02
−.12∗∗
−.25∗∗
.03
−.08∗∗
−.35∗∗
.06∗∗
(3) Workload
3.42
0.84
—
.10∗∗
.43∗∗
.17∗∗
.07∗∗
.05∗∗
−.02
.13∗∗
.04∗
.09∗
.08∗∗
.22∗∗
(4) Job insecurity
2.09
0.90
—
.27∗∗
.25∗∗
−.07∗∗
−.08∗∗
−.02
.13∗∗
.16∗∗
.14∗∗
.02
.29∗∗
(5) Role conflict
2.41
0.91
—
.42∗∗
−.07∗∗
−.05∗∗
−.03
.17∗∗
.23∗∗
.20∗∗
.02
.46∗∗
(6) Role ambiguity
1.93
0.73
—
−.16∗∗
−.10∗∗
−.08∗∗
.13∗∗
.16∗∗
.13∗∗
−.01
.33∗∗
(7) Active coping
4.02
0.61
—
.63∗∗
.35∗∗
−.08∗∗
−.30∗∗
−.08∗∗
.12∗∗
−.04∗
(8) Planning
3.71
0.76
—
.39∗∗
−.08∗∗
−.26∗∗
−.08∗∗
.10∗∗
−.03
(9) SOCINSTR
3.44
0.87
—
.18∗∗
−.07∗∗
.06∗∗
.38∗∗
−.03
(10) VENT
2.22
0.80
—
.38∗∗
.36∗∗
.43∗∗
.19∗∗
(11) BD
1.70
0.71
—
.42∗∗
.04∗
.22∗∗
(12) MD
2.36
0.75
—
.27∗∗
.19∗∗
(13) SOCEMO
3.08
0.98
—
.06∗∗
(14) EWB
1.48
0.51
—
Note. n.a.: not applicable; SOCINSTR: seeking social support for instrumental reasons; VENT: focus on and venting of emotions; MD: mental disengagement; BD: behavioural disengagement; SOCEMO: seeking social support for emotional reasons; EWB: exposure to workplace bullying; p∗<.05; p∗∗<.01.Exposure to workplace bullying (α=.85) was measured by means of the Short Negative Acts Questionnaire (S-NAQ) [32]. Respondents were asked to indicate how often they were confronted with a list of nine bullying acts during the last six months (e.g., “gossip or rumours about you”). The response categories ranged from “never” (=1) to “now and then” (=2), “monthly” (=3), “weekly” (=4), and “daily” (=5).Workload (α=.87) was assessed using three items from the Questionnaire Experience and Evaluation of Work (QEEW) [33], including “I have to work extra hard in order to complete a task.”Role ambiguity (α=.82) was measured using three items from the Short Inventory to Monitor Psychosocial Hazards (SIMPH) [34]. An example of an item is “I know exactly what others expect of me in my work (R).”Role conflict (α=.79) was measured using three items of the Work Conditions and Control Questionnaire (WOCCQ; e.g., “I receive contradictory instructions”) [35].Job insecurity (α=.81) was measured by using three items from the scale by Vander Elst et al. [36], for example, “I think I might lose my job in the near future.” The items regarding the abovementioned work stressors were rated on a five-point Likert scale ranging from “almost never” (=1), “rather seldom” (=2), “sometimes” (=3), “often” (=4), and “almost always” (=5).Coping strategies were assessed by 28 items from the COPE [22]. Following the idea that coping strategies represent individual factors expressing the tendency to apply certain strategies more than others, respondents were asked to indicate what theyusually do when facing a stressful situation. The response categories varied from “almost never” (=1), “rather seldom” (=2), “sometimes” (=3), “often” (=4), and “almost always” (=5).Problem-focused coping strategies were measured with three subscales: four items tapped into“active coping” (e.g., “I concentrate my efforts on doing something about it”) and four into“planning” (e.g., “I think hard about what steps to take”), and another four measured“seeking social support for instrumental reasons” (e.g., “I try to get advice from someone about what to do”). The alpha coefficients for these scales were .83, .85, and .91, respectively. Emotion-focused coping strategies were measured using four subscales with four items each:“focusing on and venting of emotions”(e.g., “I get upset and show my emotions”),“behavioural disengagement”(e.g., “I just give up trying to reach my goal”),“mental disengagement”(e.g., “I turn to work or other substitute activities to take my mind off things”), and“seeking social support for emotional reasons”(e.g., “I get sympathy and understanding from someone”). Cronbach’s alpha coefficients were .85, .86, .69, and .92, respectively.Finally, age (years) and gender (0 = female, 1 = male) were measured.
### 2.3. Statistical Analyses
Analyses were performed with the software package AMOS 22. The construct validity of the scales was evaluated by means of Confirmatory Factor Analysis (CFA) [37]. The hypothesised measurement model contained 12 factors in which all items loaded on the corresponding latent variable (i.e., exposure to workplace bullying, workload, job insecurity, role conflict, role ambiguity, “active coping,” “planning,” “seeking social support for instrumental reasons,” “focusing on and venting of emotions,” “behavioural disengagement,” “mental disengagement,” and “seeking social support for emotional reasons”). We compared the measurement model with five alternative models: (1) a one-factor model in which all items were loaded on the same factor, (2) a four-factor model with general work stressors (i.e., the items of workload, job insecurity, role conflict, and role ambiguity), general problem-focused coping strategies (i.e., the items of “active coping,” “planning,” and “seeking social support for instrumental reasons”), general emotion-focused coping strategies (i.e., the items of “focusing on and venting of emotions,” “behavioural disengagement,” “mental disengagement,” and “seeking social support for emotional reasons”), and exposure to workplace bullying as latent factors, (3) a six-factor model with workload, job insecurity, role conflict, role ambiguity, general coping strategies (i.e., the items of “active coping,” “planning,” “seeking social support for instrumental reasons,” “focusing on and venting of emotions,” “behavioural disengagement,” “mental disengagement,” and “seeking social support for emotional reasons”), and exposure to workplace bullying as latent factors, (4) a seven-factor model with workload, job insecurity, role conflict, role ambiguity, general problem-focused coping strategies (i.e., the items of “active coping,” “planning,” and “seeking social support for instrumental reasons”), general emotion-focused coping strategies (i.e., the items of “focusing on and venting of emotions,” “behavioural disengagement,” “mental disengagement,” and “seeking social support for emotional reasons”), and exposure to workplace bullying as latent factors, and (5) a nine-factor model with general work stressors (i.e., the items of workload, job insecurity, role conflict, and role ambiguity), “active coping,” “planning,” “seeking social support for instrumental reasons,” “focusing on and venting of emotions,” “behavioural disengagement,” “mental disengagement,” “seeking social support for emotional reasons,” and exposure to workplace bullying as latent factors. In all models, the latent variables were allowed to covary. The χ2 difference test was used to compare the hypothesised measurement model with the alternative measurement models [37, 38]. The fit of the models was evaluated based on Comparative Fit Index (CFI), Tucker-Lewis Index (TLI), Root Mean Square Error of Approximation (RMSEA), and Standardized Root Mean Residual (SRMR) [38]. Values above .90 for CFI and TLI indicate a good fit, while values above .95 indicate an excellent fit [38, 39]. Values close to .08 for RMSEA and values close to .10 for SRMR indicate a relatively good fit between the measurement model and the observed data [38, 39]. Values below .05 for RMSEA and values below .09 for SRMR indicate an excellent fit [38].In line with Bakker et al. [40] and following the procedure of Mathieu et al. [41, 42], we investigated the hypotheses by means of Moderated Structural Equation Modelling (MSEM). 
MSEM was used because it has the ability to (a) assess and correct for measurement error and (b) provide measures of fit of the models under investigation [37]. For each pair of a work stressor and a coping strategy, two models were tested and compared: (1) a model without an interaction factor and (2) a model with an interaction factor. In the model without the interaction, one of the four work stressors and one of the seven coping strategies were modelled as the exogenous factors and workplace bullying was the endogenous factor. To this model, a factor reflecting the interaction between the work stressor and the coping strategy was added (i.e., the model with the interaction factor). The interaction term was calculated by multiplying the centred scale scores for the respective work stressor and coping strategy [43]. In both models, the centred scale score for the respective variable served as the single indicator of the corresponding exogenous factor. The exogenous factors were allowed to covary. The error variance of each indicator was set equal to the product of its variance and one minus its reliability [41, 42]. The paths from the exogenous factors to their indicators were fixed to the square roots of the scale reliabilities [40–42, 44]. The reliability of the interaction term was calculated using the formula described in Cortina et al. [42].

The path coefficients were estimated and the fit of each model was evaluated using CFI, TLI, RMSEA, and SRMR. The interaction effects were considered significant when (a) the Unstandardized Path Coefficient (UPC) from the interaction term to the endogenous factor (i.e., exposure to workplace bullying) was statistically significant and (b) the χ2 difference test indicated that the model with the latent interaction factor fitted the data better than the model without the latent interaction factor. As we tested the relationships in this study in a pairwise manner, a Bonferroni-corrected threshold of p < .002 (instead of p < .05) was used, that is, .05 divided over the 4 × 7 = 28 stressor-coping pairs (.05/28 ≈ .0018).
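A minimal sketch of the single-indicator corrections described in this procedure, assuming the scale scores live in a pandas DataFrame; the interaction-reliability formula attributed to Cortina et al. [42] is reproduced here from memory and should be verified against the original:

```python
import numpy as np
import pandas as pd

def single_indicator_setup(df: pd.DataFrame, x: str, z: str, reliabilities: dict):
    """Centre two scale scores, build their product term, and derive the fixed
    loading and error variance for each single-indicator latent variable."""
    out = pd.DataFrame({
        x: df[x] - df[x].mean(),   # centred work stressor score
        z: df[z] - df[z].mean(),   # centred coping strategy score
    })
    inter = f"{x}_x_{z}"
    out[inter] = out[x] * out[z]   # indicator of the latent interaction factor

    r_xz = out[x].corr(out[z])
    # Reliability of a product of two centred variables (Cortina et al. [42];
    # formula reproduced from memory -- verify against the source).
    rel = dict(reliabilities)
    rel[inter] = (rel[x] * rel[z] + r_xz ** 2) / (1 + r_xz ** 2)

    specs = {}
    for col, r in rel.items():
        var = out[col].var(ddof=1)
        specs[col] = {
            "loading": np.sqrt(r),            # path fixed to sqrt(reliability)
            "error_variance": var * (1 - r),  # variance * (1 - reliability)
        }
    return out, specs

# Hypothetical usage with the scale reliabilities reported above:
# indicators, specs = single_indicator_setup(
#     df, "job_insecurity", "planning",
#     {"job_insecurity": .81, "planning": .85})
```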
## 3. Results
### 3.1. Construct Validity of the Measurement Model
Table 2 shows that the proposed 12-factor model fitted the data well and better than the five alternative models, providing evidence for the hypothesised dimensionality of the study scales. While the RMSEA and SRMR values pointed to an excellent model fit [45], the CFI and TLI values did not meet the strict standards for an excellent model fit. Nevertheless, these CFI and TLI values were comparable to what many others consider to represent adequate model fit [45].

Table 2. Results of Confirmatory Factor Analysis (N = 3,105).

| Model | Latent factors | χ² | df | CFI | TLI | RMSEA | SRMR | Model comparison | Δχ² | Δdf |
|---|---|---|---|---|---|---|---|---|---|---|
| (1) 12-factor model | WL, JI, RC, RA, AC, PL, SOCINSTR, VENT, MD, BD, SOCEMO, EWB | 7009.54∗∗∗ | 1061 | .92 | .92 | .04 | .04 | / | / | / |
| (2) One-factor model | General factor | 62958.46∗∗∗ | 1127 | .21 | .18 | .13 | .15 | 4 versus 1 | 55948.92∗∗∗ | 66 |
| (3) Four-factor model | Stressors, PFC, EFC, EWB | 38167.12∗∗∗ | 1121 | .53 | .51 | .10 | .11 | 5 versus 1 | 31157.58∗∗∗ | 60 |
| (4) Six-factor model | WL, RA, JI, RC, General coping, EWB | 39075.20∗∗∗ | 1112 | .52 | .49 | .11 | .13 | 6 versus 1 | 32065.66∗∗∗ | 51 |
| (5) Seven-factor model | WL, JI, RC, RA, PFC, EFC, EWB | 29109.328∗∗∗ | 1106 | .64 | .62 | .09 | .10 | 3 versus 1 | 22099.79∗∗∗ | 45 |
| (6) Nine-factor model | Stressors, AC, PL, SOCINSTR, VENT, MD, BD, SOCEMO, EWB | 16145.45∗∗∗ | 1091 | .81 | .79 | .07 | .06 | 2 versus 1 | 9135.91∗∗∗ | 30 |

Note. WL: workload; RA: role ambiguity; JI: job insecurity; RC: role conflict; PFC: problem-focused coping; EFC: emotion-focused coping; EWB: exposure to workplace bullying; AC: active coping; PL: planning; SOCINSTR: seeking social support for instrumental reasons; VENT: focus on and venting of emotions; MD: mental disengagement; BD: behavioural disengagement; SOCEMO: seeking social support for emotional reasons; ∗∗∗p < .001.
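As an illustration of the χ2 difference test reported in the Model comparison columns, the nine-factor model can be tested against the 12-factor model: Δχ2 = 16145.45 − 7009.54 = 9135.91 with Δdf = 1091 − 1061 = 30. A minimal Python sketch of this check using SciPy (the function name is illustrative):

```python
from scipy.stats import chi2

def chi_square_difference(chi2_restricted, df_restricted, chi2_full, df_full):
    """Chi-square difference test for two nested CFA models.

    The model with more latent factors (here the 12-factor model) is the less
    restricted one and should have the smaller chi-square and fewer df."""
    delta_chi2 = chi2_restricted - chi2_full
    delta_df = df_restricted - df_full
    p_value = chi2.sf(delta_chi2, delta_df)  # survival function = 1 - CDF
    return delta_chi2, delta_df, p_value

# Values from Table 2: nine-factor model versus the 12-factor model.
d_chi2, d_df, p = chi_square_difference(16145.45, 1091, 7009.54, 1061)
print(d_chi2, d_df, p)  # 9135.91, 30, p < .001
```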
### 3.2. Tests of the Hypotheses
Table 3 shows the results of the hypothesised moderating effects (information regarding the main effects of the investigated work stressors on exposure to workplace bullying can be retrieved by sending an e-mail to [email protected]). Our first hypothesis was partially confirmed. Although we found no evidence for the moderating role of emotion-focused coping strategies in the association between workload and exposure to workplace bullying, some emotion-focused coping strategies moderated the association of job insecurity, role conflict, and role ambiguity with exposure to bullying. For these tests, the UPCs were significant for a Bonferroni correction of p < .002 and the models with the interaction term fitted the data significantly better than the models without an interaction term. In line with our expectations, plots of the significant interaction effects revealed amplifying effects of emotion-focused coping strategies (Figure 1). Specifically, employees using “focus on and venting of emotions” or “behavioural disengagement” when experiencing job insecurity, role conflict, and role ambiguity were more likely to be exposed to bullying. Similar results were found for employees using “mental disengagement” in the case of role conflict and role ambiguity and for employees using “seeking social support for emotional reasons” in the case of role ambiguity.

Table 3. Results of Moderated Structural Equation Modelling analyses for the interaction between work stressors and coping strategies (N = 3,105).

| Interaction effect | UPC | SE | SPC | χ² | CFI | TLI | RMSEA | SRMR | Δχ² | Δdf |
|---|---|---|---|---|---|---|---|---|---|---|
| **Active coping:** |  |  |  |  |  |  |  |  |  |  |
| Workload × active coping | .003 | .007 | .011 | 1099.732∗∗∗ | .894 | .868 | .080 | .048 | 82.209∗∗∗ | 10 |
| Job insecurity × active coping | .011 | .007 | .036 | 871.594∗∗∗ | .917 | .896 | .071 | .040 | 30.760∗∗∗ | 10 |
| Role conflict × active coping | .007 | .006 | .024 | 1232.564∗∗∗ | .888 | .860 | .085 | .050 | 17.594 | 10 |
| Role ambiguity × active coping | .006 | .008 | .016 | 1034.238∗∗∗ | .903 | .879 | .077 | .045 | 29.625∗∗∗ | 10 |
| **Planning:** |  |  |  |  |  |  |  |  |  |  |
| Workload × planning | .007 | .006 | .026 | 1056.502∗∗∗ | .898 | .873 | .078 | .046 | 19.927∗ | 10 |
| Job insecurity × planning | .021b | .007 | .073 | 888.747∗∗∗ | .915 | .894 | .071 | .040 | 33.602∗∗∗ | 10 |
| Role conflict × planning | .012 | .005 | .052 | 1259.622∗∗∗ | .886 | .858 | .086 | .051 | 32.901∗∗∗ | 10 |
| Role ambiguity × planning | .012 | .007 | .037 | 1071.857∗∗∗ | .899 | .874 | .079 | .047 | 58.995∗∗∗ | 10 |
| **Seeking social support for instrumental reasons:** |  |  |  |  |  |  |  |  |  |  |
| Workload × seeking social support for instrumental reasons | .007 | .005 | .028 | 1058.567∗∗∗ | .898 | .873 | .078 | .047 | 15.851 | 10 |
| Job insecurity × seeking social support for instrumental reasons | .008 | .005 | .036 | 876.077∗∗∗ | .916 | .895 | .071 | .039 | 9.203 | 10 |
| Role conflict × seeking social support for instrumental reasons | .008 | .004 | .039 | 1250.407∗∗∗ | .886 | .858 | .085 | .050 | 13.735 | 10 |
| Role ambiguity × seeking social support for instrumental reasons | .010 | .006 | .037 | 1052.848∗∗∗ | .900 | .876 | .078 | .047 | 36.766∗∗∗ | 10 |
| **Focus on and venting of emotions:** |  |  |  |  |  |  |  |  |  |  |
| Workload × focus on and venting of emotions | .002 | .005 | .007 | 1055.240∗∗∗ | .900 | .875 | .078 | .047 | 35.610∗∗∗ | 10 |
| Job insecurity × focus on and venting of emotions | .021∗∗∗ | .005 | .089 | 885.721∗∗∗ | .916 | .896 | .071 | .040 | 42.092∗∗∗ | 10 |
| Role conflict × focus on and venting of emotions | .023∗∗∗ | .005 | .104 | 1254.362∗∗∗ | .888 | .860 | .085 | .051 | 37.986∗∗∗ | 10 |
| Role ambiguity × focus on and venting of emotions | .032∗∗∗ | .008 | .101 | 1022.669∗∗∗ | .904 | .881 | .077 | .045 | 22.121∗ | 10 |
| **Mental disengagement:** |  |  |  |  |  |  |  |  |  |  |
| Workload × mental disengagement | .005 | .006 | .022 | 1038.441∗∗∗ | .901 | .877 | .077 | .046 | 17.059 | 10 |
| Job insecurity × mental disengagement | .017 | .005 | .074 | 858.832∗∗∗ | .919 | .899 | .070 | .039 | 12.695 | 10 |
| Role conflict × mental disengagement | .018∗∗∗ | .005 | .085 | 1241.448∗∗∗ | .889 | .862 | .085 | .050 | 27.217∗∗ | 10 |
| Role ambiguity × mental disengagement | .023b | .007 | .079 | 1047.318∗∗∗ | .902 | .878 | .078 | .046 | 44.578∗∗∗ | 10 |
| **Behavioural disengagement:** |  |  |  |  |  |  |  |  |  |  |
| Workload × behavioural disengagement | .012 | .006 | .042 | 1044.847∗∗∗ | .901 | .876 | .078 | .046 | 11.138 | 10 |
| Job insecurity × behavioural disengagement | .020∗∗∗ | .006 | .075 | 949.500∗∗∗ | .911 | .889 | .074 | .044 | 96.242∗∗∗ | 10 |
| Role conflict × behavioural disengagement | .027∗∗∗ | .005 | .110 | 1323.539∗∗∗ | .883 | .854 | .088 | .055 | 100.071∗∗∗ | 10 |
| Role ambiguity × behavioural disengagement | .024∗∗∗ | .007 | .074 | 1074.327∗∗∗ | .900 | .876 | .079 | .049 | 63.595∗∗∗ | 10 |
| **Seeking social support for emotional reasons:** |  |  |  |  |  |  |  |  |  |  |
| Workload × seeking social support for emotional reasons | .001 | .005 | .003 | 1062.705∗∗∗ | .898 | .873 | .078 | .047 | 25.580∗∗ | 10 |
| Job insecurity × seeking social support for emotional reasons | .009 | .004 | .047 | 866.678∗∗∗ | .917 | .897 | .070 | .039 | 8.944 | 10 |
| Role conflict × seeking social support for emotional reasons | .013 | .005 | .057 | 1246.383∗∗∗ | .887 | .859 | .085 | .049 | 18.30 | 10 |
| Role ambiguity × seeking social support for emotional reasons | .018∗∗∗ | .005 | .074 | 1034.948∗∗∗ | .902 | .878 | .077 | .046 | 22.219∗ | 10 |

Note. UPC: unstandardized path coefficient; SE: standard error; SPC: standardized path coefficient; the model comparison compares the fit of the model with the interaction term against the model without the interaction term; ∗p < .05; ∗∗p < .01; ∗∗∗p < .001; bp < .002.
Figure 1. Plots of the significant interaction effects between work stressors and coping strategies in the prediction of exposure to workplace bullying.

Our second hypothesis was rejected, as problem-focused coping strategies did not buffer the association between the work stressors (i.e., workload, job insecurity, role conflict, and role ambiguity) and exposure to workplace bullying. Although for some interactions the models with the interaction term fitted the data significantly better, the UPCs were not significant (p > .002). Notably, employees using “planning” strategies when experiencing job insecurity were more likely to be exposed to bullying (see Figure 1).

As the demographic variable of gender (0 = female; 1 = male) was positively correlated with exposure to workplace bullying (Table 1), we reran all 28 pairwise models, also controlling for gender. However, these analyses did not alter our conclusions. Age was not associated with exposure to workplace bullying and was therefore not included in this analysis.
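Interaction plots of the kind shown in Figure 1 can be reproduced by evaluating the fitted regression surface at low (−1 SD) and high (+1 SD) levels of the coping strategy. A sketch follows; the interaction coefficient and standard deviations are taken from Tables 1 and 3, but the intercept and main-effect coefficients are hypothetical placeholders, since the main effects are not reported here:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_simple_slopes(b0, b1, b2, b3, sd_x, sd_z, x_label, z_label):
    """Plot predicted exposure to bullying at +/-1 SD of the moderator."""
    x = np.linspace(-sd_x, sd_x, 50)  # centred work stressor score
    for z, style, label in [(-sd_z, "--", f"low {z_label}"),
                            (+sd_z, "-", f"high {z_label}")]:
        y = b0 + b1 * x + b2 * z + b3 * x * z  # regression surface slice
        plt.plot(x, y, style, label=label)
    plt.xlabel(x_label)
    plt.ylabel("Exposure to workplace bullying")
    plt.legend()
    plt.show()

# b3 = .032 is the role ambiguity x venting UPC from Table 3; sd_x and sd_z are
# the Table 1 SDs; b0, b1, and b2 are hypothetical placeholder values.
plot_simple_slopes(b0=1.48, b1=0.20, b2=0.10, b3=0.032,
                   sd_x=0.73, sd_z=0.80,
                   x_label="Role ambiguity (centred)",
                   z_label="venting of emotions")
```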
## 4. Discussion
To our knowledge, this is the first study that investigates the moderating role of problem- and emotion-focused coping strategies in the association between work stressors and exposure to workplace bullying.

The results provided partial support for our first hypothesis on the amplifying effects of emotion-focused coping strategies in the association between work stressors and exposure to workplace bullying. The strengths of all the interaction effects were of similar size and rather small, based on the magnitude of the UPCs observed. First, most interaction effects were found for “focus on and venting of emotions” and “behavioural disengagement.” When experiencing job insecurity, role conflict, or role ambiguity, employees using these emotion-focused coping strategies were more likely to be exposed to bullying, in comparison with employees not using these strategies. Second, two interaction effects were found for “mental disengagement.” Employees with the tendency to use “mental disengagement” in the case of role conflict or role ambiguity were more likely to be exposed to bullying. Finally, one interaction effect was found for “seeking social support for emotional reasons.” Employees with the tendency to use “seeking social support for emotional reasons” in the case of role ambiguity were more likely to be exposed to workplace bullying.

From an empirical perspective, these results align with previous studies on coping and strain outcomes. For example, a longitudinal study showed that emotion-focused coping strategies amplified the negative impact of role conflict on emotional exhaustion [46]. Moreover, Chen and Kao [47] found evidence for emotion-focused coping strategies as an amplifier in the association between job hassles and burnout. From a theoretical perspective, it seems that applying emotion-focused coping strategies in combination with specific work stressors (i.e., job insecurity, role conflict, or role ambiguity) makes employees more vulnerable to bullying. According to the Three-Way Model of Workplace Bullying and to Social Interactionism, employees may unknowingly breach habits and values within their organisation, making them “easy” targets for workplace bullying [23, 27].

Notably, these results contradict recent suggestions in the work stress literature differentiating work stressors in terms of job hindrances and job challenges. In the literature, job hindrances (i.e., role conflict, role overload, and job insecurity) are defined as work stressors that are uncontrollable obstacles that hinder optimal functioning [48, 49]. Job challenges (i.e., workload) are work stressors that require some energy but are nonetheless stimulating and help in achieving goals [48, 49]. The challenges-hindrances literature assumes that emotion-focused coping strategies are not helpful in reducing the potential negative impact of job challenges: as job challenges are perceived as controllable and may be helpful in achieving goals, using problem-focused coping strategies would be more beneficial [49, 50]. In contrast, as job hindrances are uncontrollable, emotion-focused coping strategies are more appropriate for reducing their negative impact, while problem-focused coping strategies are assumed to increase it [49, 50]. Our findings, however, show that emotion-focused coping strategies amplify rather than buffer the association between job hindrances (i.e., job insecurity, role conflict, and role ambiguity) and exposure to workplace bullying. They thus contradict recent arguments in the work stress literature but are in line with the well-established view of the Three-Way Model of Workplace Bullying [23] and Lazarus and Folkman [10].

Contrary to our expectations, no interaction effects between workload and the investigated emotion-focused coping strategies were found. Thus, this finding contradicts recent developments in the work stress literature arguing that emotion-focused coping strategies would be problematic in dealing with job challenges [49, 50]. Future research should investigate a wider range of coping strategies that would be relevant for workload, such as cognitive reframing [51]. Cognitive reframing might be a more efficient coping strategy than the other investigated coping strategies, as it may, for example, influence the way employees perceive workload. By applying cognitive reframing as a coping strategy, the situation may become less stressful: it may change the perception of the initial stressors in a way that may reduce the perceived workload [52].

Our second hypothesis was rejected: we found no evidence for the buffering role of problem-focused coping strategies. Moreover, in contrast to our expectations, “planning” (i.e., a problem-focused coping strategy) amplified rather than buffered the association between job insecurity and exposure to workplace bullying. Employees using “planning” to deal with job insecurity were more likely to be exposed to bullying. Although unexpected, this finding aligns with previous results showing that problem-focused coping in combination with job insecurity is associated with negative outcomes in terms of low job satisfaction and high turnover intention [53]. Our results extend those findings to being exposed to workplace bullying. From a theoretical perspective, our findings can be explained through the work of Folkman et al. [19], who state that the efficiency of coping strategies depends on the source of the stressor. Problem-focused coping strategies are more efficient when the source of the stressor is clear or controllable [10]. In the case of job insecurity, the source of the uncertain environment is unclear, and employees often are not able to control or handle the economic status of their company [53]. This also aligns with the challenges-hindrances literature arguing that problem-focused coping strategies are less effective and thus increase the negative impact of job hindrances (i.e., job insecurity) on strain outcomes (i.e., workplace bullying), as described earlier [49, 50]. As the efficiency of a coping strategy may depend on how well it fits with a particular stressor [51], further research is needed to investigate specific combinations of work stressors and coping strategies to determine which strategies are more appropriate to prevent exposure to workplace bullying.
### 4.1. Limitations and Paths for Future Research
Some limitations should be considered in interpreting the findings of this study. First, this study has a cross-sectional research design. Consequently, the conclusions do not allow us to determine the direction of the predicted associations. However, our research model was based on multiple previous longitudinal studies that already identified causal (cross-lagged) relationships from work stressors to exposure to workplace bullying rather than the other way around [54, 55]. Moreover, cross-sectional data might be appropriate for investigating interaction effects, because the moderator is not part of a causal sequence but qualifies an association between variables [56]. Nevertheless, we advise future studies to use a longitudinal design to replicate our findings and investigate the moderating role of coping strategies in the lagged relationship from work stressors to exposure to workplace bullying. Notably, as workplace bullying can also be considered a social stressor (e.g., [57]), it may be interesting to investigate the moderating role of coping strategies in the lagged relationship from exposure to workplace bullying to strain. This aligns with Lazarus and Folkman [10], who equally suggest that coping strategies may influence the impact of workplace bullying on its outcomes. As mentioned above, the authors state that the efficiency of coping strategies depends on the controllability of the perceived stressor (i.e., workplace bullying) [10]. Moreover, Lazarus and Folkman [10] propose that problem-focused coping strategies are efficient when the stressor is perceived as controllable, while emotion-focused coping strategies are expected to be efficient when the stressor is perceived as uncontrollable [58]. Exposure to workplace bullying is typically defined as uncontrollable: we thus expect that emotion-focused coping strategies would reduce this association, while problem-focused coping strategies would amplify it [59]. This theoretical reasoning aligns with recent findings from previous studies investigating these hypotheses (e.g., [59]). However, it would be interesting to examine whether the proposed associations between work stressors and coping strategies are the same for employees exposed to workplace bullying as compared to employees not exposed to bullying.

Second, due to the use of self-reported measures, common method bias may have inflated the associations between our study variables [60]. However, self-reported measures are appropriate in this study because we aimed to investigate the way employees (a) perceived work stressors, (b) preferred the use of certain coping strategies, and (c) perceived or experienced acts of workplace bullying. Additionally, self-reported measures are dominantly used in research on workplace bullying [61]. We attempted to reduce the risk of common method bias by emphasising the voluntary nature of this study and the anonymous treatment of the study results and by demonstrating the construct validity of the study scales in a series of CFAs. Nevertheless, future research should consider using multisource data to avoid problems with common method bias.

Third, as we used pairwise tests and the same relationships were tested repeatedly, a Bonferroni correction with p < .002 (instead of p < .05 or p < .01) was used. This correction may have led to conservative conclusions: several hypotheses were rejected at the .002 level but could be accepted at the .05 level (e.g., the interaction between workload and behavioural disengagement) or at the .01 level (e.g., the interaction between workload and seeking support for emotional reasons). Nevertheless, by applying a Bonferroni correction, we reduced the risk of Type I errors [62]. However, because of the relatively large sample size, this is much less of an issue in this study [63].

Fourth, our study sample did not represent all sectors. For example, employees working in the education sector and construction industry were not included in our sample. Furthermore, employees working in the health care sector were overrepresented (75%). Therefore, researchers should be careful about generalising our conclusions to employees working in all sectors. However, we do not believe that the sample composition affected our results or that using a more representative sample would have led to different results [64]. Previous research found no differences regarding exposure to workplace bullying between health care workers and employees working in other sectors [65].

Fifth, it would be interesting to investigate the moderating role of coping strategies in the association between other antecedents and exposure to workplace bullying. For example, a prospective study showed that mental distress predicts exposure to workplace bullying, showing that individual characteristics may make employees more vulnerable to bullying [66]. Following the results of our study, it would be interesting to also investigate the moderating role of problem- and emotion-focused coping strategies (i.e., individual-related factors) in the association between mental distress and exposure to workplace bullying, to examine whether individual factors may also be a risk factor for becoming bullied.

Finally, this study focused on targets of exposure to workplace bullying. However, future studies should investigate the moderating role of coping strategies in the association between work stressors and workplace bullying from the perspective of the perpetrator. Indeed, high levels of work stressors in combination with inefficient coping strategies may produce irritation and hostility, which may result in demonstrating negative acts towards coworkers. This view aligns with the Frustration-Aggression hypothesis [67]: when dealing with frustrations and the accompanying negative emotions, employees may act out these frustrations through negative actions [67]. This process can be amplified by the use of inefficient coping mechanisms, because these employees do not reduce the antecedent conditions that cause their frustrations. As a result, they become more frustrated and demonstrate negative acts towards other colleagues. Future research is needed to explore this hypothesis.
## 5. Conclusion
This study investigated the moderating role of employees’ problem- and emotion-focused coping strategies in the association between work stressors and exposure to workplace bullying. As expected, some emotion-focused coping strategies amplified the association between work stressors and exposure to bullying. However, we found no evidence for the buffering role of problem-focused coping strategies in the association between work stressors and being bullied. Based on our results, we advise organisations to implement interventions that focus on making employees aware of the possible amplifying effects of emotion-focused coping strategies when they are experiencing job insecurity, role conflict, and/or role ambiguity. We advise future research to investigate specific combinations of different (types of) work stressors and coping strategies to determine which coping strategies are efficient in preventing workplace bullying.
---
*Source: 1019529-2017-11-15.xml* | 2017 |
# Ocean Modeling Analysis and Modeling Based on Deep Learning
**Authors:** Ming Hui Niu; Joung Hyung Cho
**Journal:** Mobile Information Systems
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1019564
---
## Abstract
The ocean comprises an uninterrupted body of salt water confined within a vast basin on the earth’s surface. It is the largest ecosystem on earth, with rich and diverse biological resources. Organisms that reside in salty water are referred to as “marine life”; plants, animals, and microorganisms including archaea and bacteria are examples of these. Marine life is not only a biological resource but also an economic one: industries that imitate marine life, such as toy manufacturing, have emerged in the market. The modeling design of marine life has improved over time, and the concept of modeling aesthetics has been incorporated. The identification of marine life images is challenging due to the complexity of the maritime environment, and existing marine life models have several flaws. The rise of deep learning has brought new ideas for addressing these weaknesses, and the advantages of convolutional neural networks have contributed to several deep learning-based concepts. This research analyses marine modeling by using the benefits of convolutional neural networks, so that people can better understand marine life modeling. The experimental results indicate that the proposed approach achieves good results in marine life detection and that the modeling effect of the deep learning-based ocean modeling analysis is good.
---
## Body
## 1. Introduction
The ocean is an uninterrupted body of salt water confined in a massive basin on the earth’s surface. The primary oceans and their peripheral seas cover almost 71% of the earth’s surface, with an average depth of 3,688 meters (12,100 feet) [1]. The ocean is the largest ecosystem on earth, with rich and diverse biological resources. The plants, animals, and other species that survive in the salt water of the sea or ocean are referred to as marine life. At its most basic, marine life influences the nature of our world [2]. Most of the oxygen we breathe comes from marine species. Marine life shapes and protects shorelines, and some marine creatures help to generate new land. Most living species began in saltwater environments. Strong marine ecosystems are vital for civilization because they generate services such as food security, animal feed, natural resources for medications, construction materials made from coral rock and sand, and natural defense against threats including coastal erosion and inundation [3].

The marine economy has become a new growth point for national economies worldwide, and China has also begun to execute its marine ranching project and strongly supports it as an emerging strategic industry [4]. It is critical for human civilization to understand how to develop marine biological resources in an effective and sensible manner. Marine biological image detection technology has been widely employed in marine biodiversity monitoring, ecosystem health assessment, and intelligent aquatic fishing since the introduction of marine pastures. In practical applications, the detection of marine biological images is made harder by the complexity of the marine environment [5]. With the fast advancement of computer vision technology, researchers from all over the world have steadily applied deep learning to the identification of marine objects, providing a new direction for the detection of marine life images, as well as new concepts and directions for marine life modeling [6].

Life evolved in the ocean, which occupies the majority of the earth’s surface and shapes human activities as well as human living space. Protecting the ocean is the same as protecting humans [7]. Based on deep learning, this study analyses marine life modeling and builds a model to better comprehend and appreciate marine life and the beauty of natural life.
### 1.1. Modeling of Marine Creatures
Before the invention of photography, scientists relied on the skillful hands of painters to translate their ocean discoveries onto paper. The product is a set of scientific illustrations of marine life that are surprisingly lifelike and occasionally humorous [8]. With the advancement of science, the human capacity to explore the world has been continuously enhanced, and the art of natural illustration is constantly improving. In the nineteenth century, artists were fundamental members of the scientific community, contributing to the expression and distribution of the knowledge gained by scientists in nature [9]. The twenty-first century has been called the ocean century. As terrestrial resources have dwindled, humans have steadily shifted their attention to the ocean, which provides abundant natural resources, and the world’s attention to the ocean has expanded rapidly [10]. However, there are many different species of marine life, and capturing them with people and equipment is a tough and tedious process of acquiring pictures and extracting their information. The underwater world is very appealing, and the mysterious appearance of marine life attracts our attention. With the advancement of science and technology, the appearance of marine life is displayed in front of humans through high-precision instruments. The introduction of marine life modeling design has transformed the information in photographs into finished goods, providing us with more knowledge of marine life, allowing us to gain a deeper understanding of the subject, and stimulating the development of dolls, toys, and other businesses. Figure 1 shows a picture of marine life, and Figure 2 shows a charming dolphin keychain pendant that has been used in the real world.

Figure 1. Picture of marine life.

Figure 2. Cute dolphin keychain pendant.
### 1.2. Modeling Aesthetics
The Chinese Encyclopedia of Fine Arts defines plastic arts as “the art of visual static space images created with certain materials and means, generally including architecture, sculpture, painting, arts and crafts, design, calligraphy, seal cutting, and other types” [11]. Modern plastic art is also known as visual art, since the creative picture generated by plastic art is visible and relies on vision to be created and admired. The term plastic art comes from the German “bildende Kunst”; in a narrow sense, the English term “plastic art” refers to sculpture. The German literary theorist Lessing first used this concept: in 1766, Lessing’s masterpiece of art criticism, Laocoon, was published, in which painting and poetry were distinguished [12]. Art has been the principal object of study in aesthetics since its inception as a separate subject in the mid-eighteenth century. Aesthetics can be separated into several categories depending on the art forms studied, such as music aesthetics and architectural aesthetics. As a result, plastic aesthetics, according to the author, is theoretical research on plastic art forms including painting, architecture, and sculpture. Concepts of plastic aesthetics are regularly supplemented with fresh content as the categories of plastic arts develop [13]. Plastic aesthetics is an essential aspect of aesthetics as well as a focus of current aesthetic study; it can be found in various forms in contemporary visual culture and exerted a significant influence on the artistic culture of the twentieth century.
### 1.3. Deep Learning
Deep learning is an important branch of machine learning and, as a relatively new machine learning approach, one of the current research hotspots in the field of artificial intelligence [14]. Deep learning performs layer-by-layer feature transformations on the original data, mapping the representation of a sample in its original space into a new feature space and learning a hierarchical feature representation that is more conducive to classification or feature visualization [15]. Deep learning has become one of the research hotspots and mainstream growth areas in the field of artificial intelligence in recent years, owing to the fast development of ultra-large-scale computers, big data, smart chips, and other technologies [16].

Deep learning is highly appreciated by academia and industry, and its rapid growth is inextricably linked to rapid advancements in computer hardware (major improvements in computing power) and software (widespread usage of open-source software). On the one hand, the training phase of deep learning requires high-density parallel processing of large amounts of data, which traditional central processing units (CPUs) struggle to perform; therefore, new processors are constantly being designed and manufactured. The most representative processors include the Nvidia and AMD series of graphics processing units (GPUs), Google’s Tensor Processing Units (TPUs), and Huawei’s Ascend processors. Open-source software, on the other hand, has emerged as the primary engine of deep learning research in recent years, with widely used programming languages and efficient algorithm programming frameworks serving as supporting elements. The main programming languages suitable for deep learning are Python, Julia, MATLAB, and C++. Python, created by Guido van Rossum of the Netherlands in the early 1990s, is the most popular deep learning programming language, with high simplicity, readability, and scalability.

Deep neural networks are the most common type of deep learning today, and the deep convolutional neural network (CNN) is one of the most well-known and commonly utilized architectures. Deep convolutional neural networks have demonstrated outstanding results in a variety of applications in recent years [17].

The remainder of the study is organized as follows: Section 2 presents the ocean modeling analysis and modeling based on the convolutional neural network in deep learning, Section 3 presents the experiment and application analysis, and the conclusion is given in Section 4.
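As a toy illustration of the layer-by-layer feature transformation described in Section 1.3, the following sketch (assuming PyTorch, the framework used in the experiments of Section 3; the layer sizes are arbitrary choices for illustration) stacks two nonlinear layers so that each layer maps the previous representation into a new feature space:

```python
import torch
import torch.nn as nn

# Each Linear + ReLU pair maps the previous representation into a new
# feature space; stacking such layers yields a hierarchical representation.
feature_extractor = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),  # raw input -> first-level features
    nn.Linear(256, 64), nn.ReLU(),   # first-level -> higher-level features
)

x = torch.randn(8, 784)              # a batch of 8 raw input vectors
features = feature_extractor(x)      # hierarchical feature representation
print(features.shape)                # torch.Size([8, 64])
```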
## 2. Ocean Modeling Analysis and Modeling Based on the Convolutional Neural Network in Deep Learning
This section is composed of two subsections: convolutional neural networks and the You Only Look Once version 3 (YOLOv3) network structure.
### 2.1. Convolutional Neural Networks
The convolutional neural network is a kind of neural network with a deep structure and convolution calculations, characterized by weight sharing, local connections, and convolution-pooling operations [18]. These features can effectively reduce the number of training parameters and the complexity of the network, making the model robust and fault-tolerant. Because of these properties, convolutional neural networks perform much better than fully connected neural networks in various signal and information processing tasks [19].

The convolutional neural network contains four kinds of modules: the convolutional layer, the pooling layer, the activation function, and the fully connected layer. Convolution is an efficient method to extract image features. The convolutional layer is the core layer of the convolutional neural network and accounts for a significant amount of computation. The convolution kernel (filter), stride, and padding are the convolutional layer parameters; padding lets edge pixels enter multiple convolution windows, preventing edge information from being lost. There are many methods of pooling, such as max pooling and mean pooling, with max pooling the most commonly used in convolutional neural networks. The pooling process is shown in Figure 3. The Sigmoid or Tanh functions were utilized in early convolutional neural networks, and subsequently the rectified linear unit (ReLU) function was introduced, as shown in Figure 4. In addition, the exponential linear unit (ELU) function and the MaxOut function are also often used. The fully connected layer is the classifier of the convolutional neural network, usually placed at the end of the network; it can also be implemented by convolution operations. Each node in the fully connected layer is linked to every node in the preceding layer in order to learn model parameters, perform feature fitting, and synthesize the previous layer’s output features; this layer therefore holds the largest share of weight parameters in the network.

Figure 3. Pooling process.

Figure 4. Three activation functions.
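To make the four kinds of modules concrete, here is a minimal PyTorch sketch combining a convolutional layer, a ReLU activation, max pooling, and a fully connected classifier; the layer sizes and the four-class output are illustrative assumptions, not the network used in the experiments:

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        # Convolutional layer: 3x3 kernel, stride 1, padding 1, so that
        # edge pixels are visited by several windows and not lost.
        self.conv = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1)
        self.act = nn.ReLU()         # activation function
        self.pool = nn.MaxPool2d(2)  # max pooling, halves height and width
        # Fully connected classifier at the end of the network.
        self.fc = nn.Linear(16 * 16 * 16, num_classes)

    def forward(self, x):                      # x: (N, 3, 32, 32)
        x = self.pool(self.act(self.conv(x)))  # -> (N, 16, 16, 16)
        return self.fc(x.flatten(1))           # -> (N, num_classes)

logits = TinyCNN()(torch.randn(2, 3, 32, 32))
print(logits.shape)  # torch.Size([2, 4])
```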
### 2.2. YOLOv3 Network Structure
The network structure used in this experiment is an improvement on YOLOv3. YOLOv3 is a member of the YOLO series of convolutional detection networks [20]. It improves the network structure of YOLOv2 and introduces a residual structure, and it performs detection by prediction at multiple feature scales, finally obtaining a good detection effect. In addition, compared with convolutional neural networks based on candidate regions, YOLOv3 has a more concise network structure that is simple to modify. In this research, an enhanced structure suited for marine biological identification is proposed based on the original YOLOv3 network model. The structure of each layer of the YOLOv3 network is shown in Figure 5.

Figure 5. YOLOv3 network structure.

The overall structure of the YOLOv3 network can be divided into two parts: a feature extraction backbone and a multiscale fusion detection branch. YOLOv3 fuses the feature maps produced by the feature extraction network at multiple scales, and detection is then performed directly by convolution.
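The upsample-and-concatenate pattern behind YOLOv3’s multiscale fusion can be sketched as follows (a simplified, assumed illustration of one fusion step, not the authors’ improved network; channel sizes are arbitrary, and the 27 output channels correspond to 3 anchors × (4 box offsets + 1 objectness + 4 class scores) for a four-class dataset):

```python
import torch
import torch.nn as nn

deep = torch.randn(1, 256, 13, 13)     # low-resolution, semantically deep map
shallow = torch.randn(1, 128, 26, 26)  # higher-resolution, shallower map

# Upsample the deep map and concatenate it with the shallow map along
# the channel dimension: multiscale feature fusion.
upsample = nn.Upsample(scale_factor=2, mode="nearest")
fused = torch.cat([upsample(deep), shallow], dim=1)  # (1, 384, 26, 26)

# Detection is then performed directly by convolution on the fused map.
head = nn.Conv2d(384, 27, kernel_size=1)
pred = head(fused)
print(pred.shape)  # torch.Size([1, 27, 26, 26])
```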
## 3. Experiment and Application Analysis
The experimental implementation in this study uses the PyTorch framework and runs on a server with an Intel(R) Xeon(R) CPU at 2.40 GHz, 64 GB of memory, an NVIDIA Tesla T4 GPU, and the Ubuntu 16.04 operating system, on which the improved YOLOv3 network is trained. The iterative convergence of the improved YOLOv3 network is shown in Figure 6. The datasets used in this work were provided by the National Underwater Robot Competition and contain a total of 8220 images covering four types of marine organisms: sea cucumber (holothurian), sea urchin (echinus), scallop, and starfish. The dataset was labeled and saved as XML files using LabelImg and then split into 6580 training images and 1640 testing images, an 8:2 ratio.

Figure 6. Convergence effect of the improved YOLOv3 over iterations.

On the marine biology dataset, the experiments contrast the detection accuracy of the original YOLOv3 network with that of the enhanced network; the detection results in Table 1 indicate that the improved YOLOv3 performs better. Several algorithms were also selected for comparison with the improved YOLOv3 (IYOLOv3); as given in Table 2, the IYOLOv3 algorithm obtains good results in the identification of marine creatures. These results show that the modeling effect of deep learning-based ocean modeling analysis is good.
Table 1. Comparison of different network detection results.

| Network model | Iteration times | Detection accuracy |
| --- | --- | --- |
| YOLOv3_improved | 10000 | 0.7432 |
| YOLOv3 | 10000 | 0.7116 |
| VGG-SSD | 10000 | 0.7208 |
Table 2. Performance comparison of different algorithms.

| Algorithm | Holothurian AP (%) | Echinus AP (%) | Scallop AP (%) | Starfish AP (%) | mAP (%) | FPS |
| --- | --- | --- | --- | --- | --- | --- |
| Faster R-CNN | 69.03 | 87.86 | 70.24 | 82.11 | 77.35 | 7 |
| SSD | 62.91 | 80.23 | 61.32 | 80.03 | 73.12 | 9 |
| YOLOv3 | 72.61 | 88.01 | 69.36 | 81.69 | 78.09 | 13 |
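For reference, a minimal sketch of the 8:2 train/test split described above, assuming a flat directory of images with LabelImg XML annotations stored alongside them (the directory layout and file naming are assumptions):

```python
import random
from pathlib import Path

# Hypothetical layout: dataset/xxx.jpg with dataset/xxx.xml from LabelImg.
images = sorted(Path("dataset").glob("*.jpg"))
random.seed(0)
random.shuffle(images)

split = int(0.8 * len(images))  # 8:2 ratio, roughly the 6580/1640 split of 8220 images
train, test = images[:split], images[split:]
print(len(train), len(test))
```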
## 4. Conclusion
Due to the complexity of the maritime environment and various shortcomings of marine life models, identifying photos of marine life is difficult. In this experiment, a convolutional neural network technique based on deep learning is used to create and evaluate marine life modeling, and the produced model realizes the network’s deployment and application. Original convolutional neural network models, which are often selected in this setting, achieve better accuracy but consume more computational resources, and in current research and development the size of convolutional network models is gradually shrinking. In this study, a network topology with fewer parameters and lower model complexity is therefore chosen, and a multiscale feature fusion and data augmentation technique for marine creatures is implemented to decrease the amount of computation and the resulting delay. The approach has a positive influence and promotional value.
---
*Source: 1019564-2022-07-30.xml* | 1019564-2022-07-30_1019564-2022-07-30.md | 26,165 | Ocean Modeling Analysis and Modeling Based on Deep Learning | Ming Hui Niu; Joung Hyung Cho | Mobile Information Systems
(2022) | Computer Science | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2022/1019564 | 1019564-2022-07-30.xml | ---
## Abstract
The ocean comprises an uninterrupted body of salt water confined within a vast basin on the earth’s surface. The ocean is the largest ecosystem on earth, with rich and diverse biological resources. Organisms that reside in salt water are referred to as marine life; plants, animals, and microorganisms including archaea and bacteria are examples. Marine life is not only a biological resource but also an economic one, and industries imitating marine life, such as toys, have emerged in the market. The modeling design of marine life has improved over time, and the concept of modeling aesthetics has been incorporated. The identification of marine life images is challenging due to the complexity of the maritime environment, and there are several flaws in marine life models. The rise of deep learning has brought new ideas for addressing the weaknesses in marine life modeling, and the advantages of convolutional neural networks have contributed several deep learning-based concepts. This research analyses marine modeling by using the benefits of convolutional neural networks, so that people can better understand marine life modeling. The experimental results indicate that the proposed approach achieves good results in marine life detection and that the modeling effect of ocean modeling analysis based on deep learning is good.
---
*Source: 1019564-2022-07-30.xml* | 2022 |
# Adaptive Parallel Simultaneous Stabilization of a Class of Nonlinear Descriptor Systems via Dissipative Matrix Method
**Authors:** Liying Sun; Renming Yang
**Journal:** Mathematical Problems in Engineering
(2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1019569
---
## Abstract
This paper investigates the adaptive parallel simultaneous stabilization and robust adaptive parallel simultaneous stabilization problems for a class of nonlinear descriptor systems via the dissipative matrix method. Firstly, under an output feedback law, two nonlinear descriptor systems are transformed into two nonlinear differential-algebraic systems by nonsingular transformations, and a sufficient condition for the two resulting closed-loop systems to be impulse-free is given. Then, the two systems are combined to generate an augmented dissipative Hamiltonian differential-algebraic system by using the system-augmentation technique. Based on the dissipative system, an adaptive parallel simultaneous stabilization controller and a robust adaptive parallel simultaneous stabilization controller are designed for the two systems. Furthermore, the case of more than two nonlinear descriptor systems is investigated. Finally, an illustrative example is studied by using the results proposed in this paper, and simulations show that the adaptive parallel simultaneous stabilization controllers obtained in this paper work very well.
---
## Body
## 1. Introduction
In practical control designs, a commonly encountered problem is to design feedback controller(s) to stabilize a given family of parallel systems. It is straightforward to consider each system individually and design a stabilization controller for each system. However, a more economical approach is to design a single controller, which may take measurements/signals from all members of the family, to stabilize all the systems simultaneously [1, 2]. In this way, the controller implementation cost is greatly reduced. This control is referred to as parallel simultaneous stabilization. It is noted that this kind of stabilization is different from the traditional simultaneous stabilization problem [3, 4], which is concerned with designing a control law such that any individual system within the collection of systems can be stabilized by it; in other words, the resulting closed-loop system, which consists of an individual system and its corresponding state or output feedback controller based on that control law, is asymptotically stable. The traditional simultaneous stabilization problem is one of the important research topics in the area of robust control and has received considerable attention in the past few decades [3–8].

The descriptor system is a natural representation of dynamic systems and describes a larger class of systems than the normal system model [9–16]. In the last three decades, many nice results have been obtained for the controller design of linear descriptor systems; see [9, 10, 13, 14] and the references therein. In general, it is not an easy task to design a controller for nonlinear descriptor systems (NDSs), and accordingly there are fewer works on NDSs apart from several special case studies [11, 12, 15, 16]; in particular, it is more difficult to design a parallel simultaneous stabilization controller for a class of nonlinear descriptor systems, and pertinent results for this case were proposed in [1]. For nonlinear differential-algebraic systems, an $H_\infty$ controller was designed in [15] based on the condition for the existence of an $H_\infty$ controller for nonlinear systems, while the stabilization and robust stabilization of such systems were considered via the feedback linearization approach in [11] and the Hamiltonian function method in [12], respectively. In [16], based on the linear matrix inequality method, the generalized absolute stability was studied for linear descriptor systems with feedback-connected nonlinearities. Using a nonlinear performance index for the nominal system, a robust adaptive control scheme was presented in [17] for a class of nonlinear uncertain descriptor systems. For the case in which the singular matrix satisfies $E_i=M_i\,\mathrm{diag}\{I_r,0\}\,M_i^T$ with $M_i$ an orthogonal matrix, the parallel simultaneous stabilization and robust adaptive parallel simultaneous stabilization problems were studied in [1, 18], respectively, for two or a family of nonlinear descriptor systems via the Hamiltonian function method.
It should be pointed out that there are, to the best of the authors’ knowledge, few works on the robust adaptive parallel simultaneous stabilization of NDSs [18].

In this paper, motivated by the Hamiltonian function method [2, 19–29], we apply the structural properties of dissipative matrices to investigate the adaptive parallel simultaneous stabilization and robust adaptive parallel simultaneous stabilization problems for a class of NDSs via an output feedback law [30, 31], and we propose a new approach, called the dissipative matrix method, to study NDSs. Firstly, under an output feedback law, two NDSs are transformed into two nonlinear differential-algebraic systems by nonsingular transformations, and a sufficient condition for the two closed-loop systems to be impulse-free is given. Then, the two systems are combined to generate an augmented dissipative Hamiltonian differential-algebraic system by using the system-augmentation technique. Based on the dissipative system, an adaptive parallel simultaneous stabilization controller and a robust adaptive parallel simultaneous stabilization controller are designed for two NDSs in which the singular matrix satisfies $E_i\ge 0$ (or $E_i\le 0$). Furthermore, the case of more than two NDSs is investigated. Finally, an illustrative example is studied by using the results proposed in this paper, and simulations show that the adaptive parallel simultaneous stabilization controllers obtained in this paper work very well.

The paper is organized as follows. In Section 2, we study the adaptive parallel simultaneous stabilization of two NDSs based on an augmented dissipative Hamiltonian form. Section 3 presents the robust adaptive parallel simultaneous stabilization controller for two NDSs with external disturbances and investigates the case of more than two NDSs. In Section 4, an illustrative example is provided, which is followed by the conclusion in Section 5.
## 2. Adaptive Parallel Simultaneous Stabilization of Two NDSs
This section investigates the adaptive parallel simultaneous stabilization problem for two NDSs via the dissipative matrix method. Firstly, based on suitable output feedback, two NDSs are transformed into two nonlinear differential-algebraic systems by new coordinate transformations; then the two systems are combined to generate an augmented dissipative Hamiltonian differential-algebraic system by using the system-augmentation technique, based on which an adaptive parallel simultaneous stabilization controller is designed for the two systems.

Consider the following two NDSs:

$$E_1\dot{x}=f_1(x,p_1)+g_1(x)u,\quad E_1x(0)=E_1x_0,\quad f_1(0,p_1)=f_{p_1}(p_1),\quad f_1(0,0)=0,\quad y=g_1^T(x)x, \tag{1}$$

$$E_2\dot{\xi}=f_2(\xi,p_2)+g_2(\xi)u,\quad E_2\xi(0)=E_2\xi_0,\quad f_2(0,p_2)=f_{p_2}(p_2),\quad f_2(0,0)=0,\quad \eta=g_2^T(\xi)\xi, \tag{2}$$

where $x=[\tilde{x}_1,\tilde{x}_2,\cdots,\tilde{x}_n]^T$, $\xi=[\tilde{\xi}_1,\tilde{\xi}_2,\cdots,\tilde{\xi}_n]^T\in\mathbb{R}^n$ and $y,\eta\in\mathbb{R}^m$ are the states and outputs of the two systems, respectively; $u\in\mathbb{R}^m$ is the control input; $p_i\in\mathbb{R}^s$ is an unknown parameter perturbation vector and is assumed to be small enough to keep the dissipative structure unchanged, i.e., if $R(x)>0$, then $R(x,p_i)>0$; $f_i(x,p_i)\in\mathbb{R}^n$ are sufficiently smooth vector fields; $g_1(x),g_2(\xi)\in\mathbb{R}^{n\times m}$; and $E_i\in\mathbb{R}^{n\times n}$ with $0<\operatorname{rank}(E_i)=r<n$ and $E_i\ge 0$ or $E_i\le 0$, $i=1,2$. Without loss of generality, we discuss $E_i\ge 0$, $i=1,2$.

Definition 1 (see [32]).
A control law $u=u(x)$ is called an admissible control law if, for any initial condition $Ex_0$, the resulting closed-loop descriptor system has no impulsive solution.

Lemma 2 (see [33]).
If a vector function $h(x)$ with $h(0)=0$ $(x\in\mathbb{R}^n)$ has continuous $n$th-order partial derivatives, then $h(x)$ can be expressed as

$$h(x)=a_1(x)x_1+\cdots+a_n(x)x_n, \tag{3}$$

where $a_i(x)$, $i=1,2,\cdots,n$, are vector functions. For instance, $h(x)=[\,x_1x_2,\ \sin x_1\,]^T$ admits $a_1(x)=[\,x_2,\ \sin x_1/x_1\,]^T$ (extended continuously at $x_1=0$) and $a_2(x)=0$.

According to Lemma 2, systems (1) and (2) can be transformed into the following form:

$$E_1\dot{x}=A_1(x,p_1)\alpha_1(x,p_1)+g_1(x)u,\quad y=g_1^T(x)x, \tag{4}$$

$$E_2\dot{\xi}=A_2(\xi,p_2)\alpha_2(\xi,p_2)+g_2(\xi)u,\quad \eta=g_2^T(\xi)\xi, \tag{5}$$

where the structural matrix $A_i(x,p_i)\in\mathbb{R}^{n\times n}$ and $\alpha_i(x,p_i)\in\mathbb{R}^n$ is some vector of $x$ and $p_i$ satisfying $\alpha_i(x,0)=x$, $i=1,2$.

To study the adaptive parallel simultaneous stabilization problem of systems (4) and (5), the following assumptions are given:
(A1) $\operatorname{rank}\left[E_i,\ g_i(x)\right]=\operatorname{rank}(E_i)$, $\forall x\in\mathbb{R}^n$, $i=1,2$;
(A2) assume there exists $\Phi\in\mathbb{R}^{l\times m}$ such that

$$A_i(x,p_i)\left[\alpha_i(x,p_i)-x\right]=g_i(x)\Phi^T\theta,\quad \forall x\in\mathbb{R}^n,\ i=1,2, \tag{6}$$

where $\theta\in\mathbb{R}^l$ is an unknown constant vector related to $p_i$.

Assumption (A1) implies that the fast subsystems of the descriptor systems (1) and (2) contain no control $u$. Assumption (A2) is the so-called matched condition; in most cases, we can find $\Phi$ and $\theta$ such that (6) holds.

Under assumption (A2), systems (4) and (5) become

$$E_1\dot{x}=A_1(x,p_1)x+g_1(x)u+g_1(x)\Phi^T\theta,\quad y=g_1^T(x)x, \tag{7}$$

$$E_2\dot{\xi}=A_2(\xi,p_2)\xi+g_2(\xi)u+g_2(\xi)\Phi^T\theta,\quad \eta=g_2^T(\xi)\xi. \tag{8}$$

Definition 3.
System (4) is called (strictly) dissipative if the structural matrix $A(x)$ is (strictly) dissipative, i.e., $A(x)$ can be expressed as $A(x)=J(x)-R(x)$, where $J(x)$ is skew-symmetric and $R(x)\ge 0$ ($R(x)>0$); system (4) is called feedback (strictly) dissipative if there exists a suitable state feedback $u(x)=\alpha(x)+v$ such that the resulting closed-loop descriptor system is (strictly) dissipative.
Remark 4.

If $E_1\le 0$, then system (7) can be rewritten as

$$E_1'\dot{x}=A_1'(x,p_1)x+g_1'(x)u+g_1'(x)\Phi^T\theta,\quad y'=g_1'^T(x)x, \tag{9}$$

where $E_1'=-E_1\ge 0$, $A_1'(x,p_1)=-A_1(x,p_1)$, $g_1'(x)=-g_1(x)$, and $y'=-y$.

We can always express $A_i(x,p_i)$ as $A_i(x,p_i)=J_i(x,p_i)-R_{i0}(x,p_i)$, where $J_i(x,p_i)=\frac{1}{2}\left(A_i(x,p_i)-A_i^T(x,p_i)\right)$ is skew-symmetric and $R_{i0}(x,p_i)=-\frac{1}{2}\left(A_i(x,p_i)+A_i^T(x,p_i)\right)$ is symmetric, $i=1,2$. In order to investigate the adaptive parallel simultaneous stabilization of systems (4) and (5), we design an output feedback law such that the symmetric part of the structural matrix of the closed-loop system can be transformed into a positive definite one. Based on this, we have the following result.
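As a quick numerical illustration of Definition 3 and the decomposition above, the following sketch (plain NumPy, with an arbitrary example matrix; not part of the original paper) splits a structural matrix $A$ into its skew-symmetric part $J$ and symmetric part $-R_0$ and tests whether $R_0$ is positive (semi)definite:

```python
import numpy as np

def dissipative_split(A):
    """Split A = J - R0 with J skew-symmetric and R0 symmetric."""
    J = 0.5 * (A - A.T)
    R0 = -0.5 * (A + A.T)
    return J, R0

A = np.array([[-2.0, 1.0],
              [-3.0, -1.0]])   # arbitrary example structural matrix
J, R0 = dissipative_split(A)

assert np.allclose(J, -J.T)    # J is skew-symmetric by construction
eigs = np.linalg.eigvalsh(R0)  # R0 is symmetric, so eigvalsh applies
print("dissipative:", bool(np.all(eigs >= 0)),
      "strictly:", bool(np.all(eigs > 0)))  # True True for this A
```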
Lemma 5.

Assume that there exists a symmetric matrix $K\in\mathbb{R}^{m\times m}$ such that

$$-\frac{1}{2}\left(A_1(x,p_1)+A_1^T(x,p_1)\right)+K_{11}(x,x)>0,\qquad -\frac{1}{2}\left(A_2(\xi,p_2)+A_2^T(\xi,p_2)\right)-K_{22}(\xi,\xi)>0, \tag{10}$$

where $K_{ij}(x,\xi)=g_i(x)Kg_j^T(\xi)$, $i,j=1,2$. Then, under the following adaptive output feedback law

$$u=-K(y-\eta)-\Phi^T\hat{\theta}+v,\qquad \dot{\hat{\theta}}=Q\Phi(y+\eta), \tag{11}$$

systems (4) and (5) can be expressed in the following forms:

$$\begin{aligned} E_1\dot{x}&=\left[J_1(x,p_1)-R_1(x,p_1)\right]x+g_1(x)Kg_2^T(\xi)\xi+g_1(x)v+g_1(x)\Phi^T(\theta-\hat{\theta}),\\ \dot{\hat{\theta}}&=Q\Phi\left[g_1^T(x)x+g_2^T(\xi)\xi\right],\\ y&=g_1^T(x)x, \end{aligned} \tag{12}$$

$$\begin{aligned} E_2\dot{\xi}&=\left[J_2(\xi,p_2)-R_2(\xi,p_2)\right]\xi-g_2(\xi)Kg_1^T(x)x+g_2(\xi)v+g_2(\xi)\Phi^T(\theta-\hat{\theta}),\\ \dot{\hat{\theta}}&=Q\Phi\left[g_1^T(x)x+g_2^T(\xi)\xi\right],\\ \eta&=g_2^T(\xi)\xi, \end{aligned} \tag{13}$$

where $J_i(x,p_i)$ is skew-symmetric, $R_i(x,p_i)\in\mathbb{R}^{n\times n}$ is positive definite, $i=1,2$, $\hat{\theta}$ is an estimate of $\theta$, $Q>0$ is the adaptive gain constant matrix, and $v$ is a new reference input.
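Before the proof, here is a minimal sketch of how the adaptive law (11) could be stepped in discrete time (NumPy, forward-Euler integration of $\hat{\theta}$; the dimensions and gain values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def adaptive_output_feedback(y, eta, theta_hat, K, Phi, Q, dt, v=0.0):
    """One step of law (11): u = -K (y - eta) - Phi^T theta_hat + v,
    with forward-Euler integration of theta_hat' = Q Phi (y + eta)."""
    u = -K @ (y - eta) - Phi.T @ theta_hat + v
    theta_hat = theta_hat + dt * (Q @ Phi @ (y + eta))
    return u, theta_hat

m, l = 2, 3                # m outputs/inputs, l unknown parameters (assumed)
K = np.eye(m)              # symmetric gain, assumed to satisfy (10)
Phi = np.ones((l, m))      # matched-condition matrix from (A2) (assumed)
Q = 0.5 * np.eye(l)        # adaptive gain matrix, Q > 0

u, theta_hat = adaptive_output_feedback(
    y=np.array([0.1, -0.2]), eta=np.array([0.0, 0.1]),
    theta_hat=np.zeros(l), K=K, Phi=Phi, Q=Q, dt=0.01)
print(u, theta_hat)
```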
Proof.

Substituting (11) into systems (7) and (8), respectively, we obtain systems (12) and (13), where $R_1(x,p_1)=-\frac{1}{2}\left(A_1(x,p_1)+A_1^T(x,p_1)\right)+g_1(x)Kg_1^T(x)$ and $R_2(\xi,p_2)=-\frac{1}{2}\left(A_2(\xi,p_2)+A_2^T(\xi,p_2)\right)-g_2(\xi)Kg_2^T(\xi)$. According to (10), we know that $R_i(x,p_i)>0$. The proof is completed.

Since $E_i\ge 0$ and $0<\operatorname{rank}(E_i)=r<n$, there exists a nonsingular matrix $M_i\in\mathbb{R}^{n\times n}$ such that

$$M_i^TE_iM_i=\begin{bmatrix}I_r&0\\0&0\end{bmatrix},\quad i=1,2. \tag{14}$$

Denote

$$\begin{gathered} x=M_i\bar{x},\quad \bar{x}=\begin{bmatrix}x_1\\x_2\end{bmatrix},\quad M_i=\begin{bmatrix}M_{i11}&M_{i12}\\M_{i21}&M_{i22}\end{bmatrix},\quad M_i^Tg_i(x)=\begin{bmatrix}\tilde{g}_{i1}(x)\\\tilde{g}_{i2}(x)\end{bmatrix}=\begin{bmatrix}\bar{g}_{i1}(\bar{x})\\\bar{g}_{i2}(\bar{x})\end{bmatrix},\\ M_i^TJ_i(x,p_i)M_i=\begin{bmatrix}\tilde{J}_{i11}(x,p_i)&\tilde{J}_{i12}(x,p_i)\\-\tilde{J}_{i12}^T(x,p_i)&\tilde{J}_{i22}(x,p_i)\end{bmatrix}=\begin{bmatrix}\bar{J}_{i11}(\bar{x},p_i)&\bar{J}_{i12}(\bar{x},p_i)\\-\bar{J}_{i12}^T(\bar{x},p_i)&\bar{J}_{i22}(\bar{x},p_i)\end{bmatrix},\\ M_i^TR_i(x,p_i)M_i=\begin{bmatrix}\tilde{R}_{i11}(x,p_i)&\tilde{R}_{i12}(x,p_i)\\\tilde{R}_{i12}^T(x,p_i)&\tilde{R}_{i22}(x,p_i)\end{bmatrix}=\begin{bmatrix}\bar{R}_{i11}(\bar{x},p_i)&\bar{R}_{i12}(\bar{x},p_i)\\\bar{R}_{i12}^T(\bar{x},p_i)&\bar{R}_{i22}(\bar{x},p_i)\end{bmatrix},\\ \nabla_xH_i(x)=\frac{\partial H_i(x)}{\partial x},\quad i=1,2, \end{gathered} \tag{15}$$

where $x_1\in\mathbb{R}^r$, $x_2\in\mathbb{R}^{n-r}$, $\tilde{J}_{i11}(x,p_i)=\bar{J}_{i11}(\bar{x},p_i)$ and $\tilde{J}_{i22}(x,p_i)=\bar{J}_{i22}(\bar{x},p_i)$ are skew-symmetric matrices, $\bar{R}_{i11}(\bar{x},p_i)=\tilde{R}_{i11}(x,p_i)>0$, and

$$\bar{R}_{i22}(\bar{x},p_i)=\tilde{R}_{i22}(x,p_i)=\begin{bmatrix}M_{i12}^T&M_{i22}^T\end{bmatrix}R_i(x,p_i)\begin{bmatrix}M_{i12}\\M_{i22}\end{bmatrix},$$

which implies that $\bar{R}_{i22}(\bar{x},p_i)=\tilde{R}_{i22}(x,p_i)>0$, $i=1,2$.
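The matrix $M_i$ in (14) can be constructed explicitly for a given symmetric $E_i\ge 0$; a NumPy sketch via eigendecomposition (the example matrix $E$ is an assumption for illustration):

```python
import numpy as np

def congruence_normalizer(E, tol=1e-10):
    """Return nonsingular M and r with M.T @ E @ M = diag(I_r, 0)
    for a symmetric positive semidefinite E."""
    w, V = np.linalg.eigh(E)          # E = V diag(w) V.T, w ascending
    order = np.argsort(w)[::-1]       # put the positive eigenvalues first
    w, V = w[order], V[:, order]
    r = int(np.sum(w > tol))          # r = rank(E)
    scale = np.ones_like(w)
    scale[:r] = 1.0 / np.sqrt(w[:r])  # rescale range directions to 1
    return V * scale, r               # M = V @ diag(scale)

E = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])       # example singular E >= 0 with rank 2
M, r = congruence_normalizer(E)
print(r)                              # 2
print(np.round(M.T @ E @ M, 8))       # diag(1, 1, 0)
```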
ThatRi(x,pi)>0 is a sufficient not necessary condition of R~i22(x,pi)>0. In this paper, R~i22(x,pi)>0 can guarantee that the closed-loop descriptor systems (12) and (13) have no impulsive solution. Therefore, (10) is a sufficient condition of systems (12) and (13) to be impulse-free.From (A1), we have(16)rankEi,gix=rankMiTEi,gixMi00I=rankIr0g~i1x00g~i2x=rankIr00g~i2x=rankEi=r,that is, g¯i2(x¯)=g~i2(x)=0. Thus, according to (15) and assumption (A1), systems (12) and (13) can be transformed into the following differential-algebraic systems:(17)x˙1=J¯111x¯,p1-R¯111x¯,p1x1+J¯112x¯,p1-R¯112x¯,p1x2+g¯11x¯Kg¯21Tξ¯ξ1+g¯11x¯v+g¯11x¯ΦTθ-θ^,0=-J¯112Tx¯,p1+R¯112Tx¯,p1x1+J¯122x¯,p1-R¯122x¯,p1x2≕φx1,x2,p1,θ^˙=QΦg¯11Tx¯x1+g¯21Tξ¯ξ1,y=g¯11Tx¯x1,(18)ξ˙1=J¯211ξ¯,p2-R¯211ξ¯,p2ξ1+J¯212ξ¯,p2-R¯212ξ¯,p2ξ2-g¯21ξ¯Kg¯11Tx¯x1+g¯21ξ¯v+g¯21ξ¯ΦTθ-θ^,0=-J¯212Tξ¯,p2+R¯212Tξ¯,p2ξ1+J¯222ξ¯,p2-R¯222ξ¯,p2ξ2,θ^˙=QΦg¯11Tx¯x1+g¯21Tξ¯ξ1,η=g¯21Tξ¯ξ1.SinceJ¯i22(x¯,pi)=-J¯i22T(x¯,pi) and R¯i22(x¯,pi)>0, we know that J¯i22(x¯,pi)-R¯i22(x¯,pi) is invertible [34], i=1,2. Therefore, systems (17) and (18) can be expressed in the following forms:(19)x˙1=J11x¯,p1-R11x¯,p1x1+g¯11x¯Kg¯21Tξ¯ξ1+g¯11x¯v+g¯11x¯ΦTθ-θ^,0=-J¯112Tx¯,p1+R¯112Tx¯,p1x1+J¯122x¯,p1-R¯122x¯,p1x2,θ^˙=QΦg¯11Tx¯x1+g¯21Tξ¯ξ1,y=g¯11Tx¯x1,(20)ξ˙1=J21ξ¯,p2-R21ξ¯,p2ξ1-g¯21ξ¯Kg¯11Tx¯x1+g¯21ξ¯v+g¯21ξ¯ΦTθ-θ^,0=-J¯212Tξ¯,p2+R¯212Tξ¯,p2ξ1+J¯222ξ¯,p2-R¯222ξ¯,p2ξ2,θ^˙=QΦg¯11Tx¯x1+g¯21Tξ¯ξ1,η=g¯21Tξ¯ξ1,where Ji1(x¯,pi)-Ri1(x¯,pi)=J¯i11(x¯,pi)-R¯i11(x¯,pi)+(J¯i12(x¯,pi)-R¯i12(x¯,pi))(J¯i22(x¯,pi)-R¯i22(x¯,pi))-1 · (J¯i12T(x¯,pi)+R¯i12T(x¯,pi)), i=1,2. Ji1(x¯,pi) is skew-symmetric, and Ri1(x¯,pi) is positive definite, because(21)NJ¯i11-R¯i11J¯i12-R¯i12-J¯i12T+R¯i12TJ¯i22-R¯i22NT=Ji1-Ri10∗J¯i22-R¯i22,where(22)N=I-J¯i12-R¯i12J¯i22-R¯i22-10I.With assumptions (A1) and (A2), we have the following result.Theorem 7.
Consider systems (1) and (2) with their equivalent forms (4) and (5), and assume that assumptions (A1) and (A2) hold. If there exist a symmetric matrix K∈Rm×m and a matrix Φ∈Rl×m such that (10) and (6) hold, respectively, then the admissible adaptive parallel controller (11) (with v=0) can simultaneously stabilize systems (1) and (2). Proof.
If assumptions (A1) and (A2) hold, then systems (4) and (5) can be transformed into systems (19) and (20) by the adaptive feedback law (11), which are of index one at the equilibrium point 0 ( system (12) is said to have index one at the equilibrium point 0 if ∂φ(x1,x2,p1)/∂x2 in (17) is nonsingular in a neighborhood of 0); i.e., systems (19) and (20) are impulse-free. According to the implicit function theorem, there exist continuous functions qi(·) such that x2=q1(x1),ξ2=q2(ξ1),qi(0)=0. Thus, systems (19) and (20) can be rewritten as (v=0)(23)X˙=JX,p-RX,p∂HX∂X,0=-J¯112Tx¯,p1+R¯112Tx¯,p1x1+J¯122x¯,p1-R¯122x¯,p1x2,0=-J¯212Tξ¯,p2+R¯212Tξ¯,p2ξ1+J¯222ξ¯,p2-R¯222ξ¯,p2ξ2,where(24)X=x1ξ1θ^,p=p1p2,RX,p=R11x1,q1x1,p1000R21ξ1,q2ξ1,p20000,JX,p=J11x1,q1x1,p1g¯11x1,q1x1Kg¯21Tξ1,q2ξ1-g¯11x1,q1x1ΦTQ-g¯11x1,q1x1Kg¯21Tξ1,q2ξ1TJ21ξ1,q2ξ1,p2-g¯21ξ1,q2ξ1ΦTQg¯11x1,q1x1ΦTQTg¯21ξ1,q2ξ1ΦTQT0,HX=12x1Tx1+ξ1Tξ1+12θ-θ^TQ-1θ-θ^.Obviously, J(X,p)=-JT(X,p),R(X,p)≥0,H(X)≥0. Therefore, system (23) is a dissipative Hamiltonian system. Choosing V(X)=H(X), then H(X) has a local minimum at X0=(0T,0T,θ^0T)T. Then, based on system (23) we have(25)V˙X=∂THX∂XX˙=∂THX∂XJX,p-RX,p∂HX∂X=-∂THX∂XRX,p∂HX∂X=-x1TR11x1,q1x1,p1x1-ξ1TR21ξ1,q2ξ1,p2ξ1≤0.Thus, system (23) converges to the largest invariant set contained in(26)X:V˙X=0⊂X:R111/2x1,q1x1,p1x1=0,R211/2ξ1,q2ξ1,p2ξ1=0,∀t≥0≔S.From systems (19) and (20), we know that both R111/2(x1,q1(x1),p1) and R211/2(ξ1,q2(ξ1),p2) are nonsingular, which implies that R111/2(x1,q1(x1),p1)x1=0⇒x1=0 and R211/2(ξ1,q2(ξ1),p2)ξ1=0⇒ξ1=0. That is, the largest invariant set only contains one point, i.e., S={[0T,0T,θ^0T]T}, with which it is easy to see that x1→0 and ξ1→0, as t→∞. Moreover, according to systems (19) and (20), it is clear that x2→0 and ξ2→0, as t→∞. Thus, x=M1x¯→0,ξ=M2ξ¯→0, as t→∞. Therefore, under the admissible adaptive parallel control law (11), systems (1) and (2) can be simultaneously stabilized.
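The proof rests on the normalizing transform (14). As a hedged side note (ours, not the paper's; the paper does not fix a construction), one concrete way to build such an M for a given E=E^T≥0 is via the eigendecomposition of E; the following Python sketch does exactly that and checks the identity M^T E M = diag(I_r, 0) on a randomly generated rank-2 matrix:

```python
import numpy as np

def normalizing_transform(E, tol=1e-9):
    """Return a nonsingular M with M^T E M = diag(I_r, 0) for E = E^T >= 0,
    the transform used in (14). A sketch; any valid M works equally well."""
    lam, U = np.linalg.eigh(E)              # E = U diag(lam) U^T
    order = np.argsort(lam)[::-1]           # positive eigenvalues first
    lam, U = lam[order], U[:, order]
    scale = np.where(lam > tol, 1.0 / np.sqrt(np.maximum(lam, tol)), 1.0)
    return U * scale                        # M = U diag(scale)

B = np.random.default_rng(0).standard_normal((4, 2))
E = B @ B.T                                 # a random PSD matrix of rank 2
M = normalizing_transform(E)
print(np.round(M.T @ E @ M, 8))             # diag(1, 1, 0, 0)
```

The eigendecomposition route is chosen here only because it is numerically convenient; the theory above requires nothing more than the existence of some M satisfying (14).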
## 3. Robust Adaptive Parallel Simultaneous Stabilization of Two NDSs and More Than Two NDSs
In this section, we investigate the robust adaptive parallel simultaneous stabilization problem of two NDSs with external disturbances and parameter perturbations, and we discuss the case of more than two NDSs. Firstly, for a given disturbance attenuation level γ>0, we design an adaptive parallel L2 disturbance attenuation output feedback law under which the L2 gain (from w to z) of the closed-loop system is less than γ. Then, we show that the two systems are simultaneously asymptotically stable when w=0. To design the robust adaptive parallel simultaneous stabilization controller, the following lemma is first recalled. Lemma 8 (see [34]).
Consider a dissipative Hamiltonian system as follows:(27)x˙=Jx-Rx∇H+g1xu+g2xw,z=hxg1Tx∇H,where x∈Rn is the state, u∈Rm is the control input, w∈Rq is the disturbance, J(x) is skew-symmetric, R(x)⩾0, H(x) has a strict local minimum at the system’s equilibrium, z is the penalty function, and h(x) is a weighting matrix. Given a disturbance attenuation level γ>0, if(28)Rx+12γ2g1xg1Tx-g2xg2Tx≥0,then an L2 disturbance attenuation controller of system (27) can be given as(29)u=-12hTxhx+12γ2Img1Tx∇H,and the γ-dissipation inequality(30)H˙+∇THRx+12γ2g1xg1Tx-g2xg2Tx∇H≤12γ2w2-z2holds along the trajectories of the closed-loop system consisting of (27) and (29).Now, we consider the following NDSs (1) and (2) with external disturbances:(31)E1x˙=f1x,p1+g1xu+d1w,E1x0=E1x0,f10,p1=fp1p1,f10,0=0,y=g1Txx,(32)E2ξ˙=f2ξ,p2+g2ξu+d2w,E2ξ0=E2ξ0,f20,p2=fp2p2,f20,0=0,η=g2Tξξ,where w∈Rq is the disturbance, di(x)∈Rn×q, i=1,2, other variables are the same as those in systems (1) and (2), and (33)MiTdix=d~i1xd~i2x=d¯i1x¯d¯i2x¯.Given a disturbance attenuation levelγ>0, choose(34)z=Λy+ηas the penalty function, where Λ∈Rs×m is a weighting matrix.To design the adaptive parallelL2 disturbance attenuation output feedback control law for systems (31) and (32), the following assumption is given:(A3)
rank[Ei, di(x)] = rank(Ei), ∀x∈Rn, i=1,2. Assumption (A3) implies that the fast subsystems of the descriptor systems (31) and (32) are not affected by the disturbance. Similar to (A1), from (A3) we obtain that d̃i2(x)=d̄i2(x̄)=0. Based on Section 2, systems (31) and (32) can be transformed into the following forms: (35) E1ẋ=A1(x,p1)α1(x,p1)+g1(x)u+d1(x)w, y=g1^T(x)x; (36) E2ξ̇=A2(ξ,p2)α2(ξ,p2)+g2(ξ)u+d2(ξ)w, η=g2^T(ξ)ξ. Next, we design an adaptive parallel L2 disturbance attenuation controller for systems (31) and (32). Theorem 9.
Consider systems (31) and (32) with their equivalent forms (35) and (36), the penalty function (34), and the disturbance attenuation level γ>0. Assume that assumptions (A1)–(A3) hold for systems (35) and (36). If (1)
there exists a symmetric matrix K∈Rm×m such that (10) holds;(2)
gi=di, i=1,2,
then the following admissible adaptive parallel feedback law (37) u=−K(y−η)−((1/2)Λ^TΛ+(1/(2γ²))Im)(y+η)−Φ^Tθ^, θ^˙=QΦ(y+η), can simultaneously stabilize systems (31) and (32). A code sketch of evaluating this law is given below.
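For concreteness, here is a hedged sketch (ours; the function name and signature are not from the paper) of one evaluation of the law (37); y and η are the two measured outputs, and θ^ must be integrated alongside the plants:

```python
import numpy as np

def parallel_l2_law(y, eta, theta_hat, K, Lam, gamma, Phi, Q):
    """One evaluation of the robust adaptive law (37).
    Returns the shared input u and the adaptation rate d(theta_hat)/dt."""
    m = y.shape[0]
    damping = 0.5 * Lam.T @ Lam + 0.5 / gamma**2 * np.eye(m)
    u = -K @ (y - eta) - damping @ (y + eta) - Phi.T @ theta_hat
    theta_hat_dot = Q @ (Phi @ (y + eta))
    return u, theta_hat_dot
```

With Λ=0 and γ→∞ the damping term vanishes and (37) collapses to the disturbance-free law (11) with v=0.

Proof.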
Rewrite (37) as follows: (38) u=−K(y−η)−Φ^Tθ^+v, θ^˙=QΦ(y+η), v=−((1/2)Λ^TΛ+(1/(2γ²))Im)(y+η).
Substituting the first part of (38) into systems (35) and (36), according to the proof of Theorem 7 and assumption (A2), we know that systems (35) and (36) are impulse controllable and can be expressed as the following dissipative Hamiltonian form:(39)X˙=JX,p-RX,p∂HX∂X+GXv+DXw,0=-J¯112Tx¯,p1+R¯112Tx¯,p1x1+J¯122x¯,p1-R¯122x¯,p1x2,0=-J¯212Tξ¯,p2+R¯212Tξ¯,p2ξ1+J¯222ξ¯,p2-R¯222ξ¯,p2ξ2,and(40)z=ΛGTX∂HX∂X,where X, J(X,p), R(X,p), and H(X) are given in (23), GX=g¯11Tx1,q1x1g¯21Tξ1,q2ξ10T and DX=d¯11Tx1,q1x1d¯21Tξ1,q2ξ10T.
Because gi=di, i=1,2, it is easy to show that (41) R(X,p)+(1/(2γ²))(G(X)G^T(X)−D(X)D^T(X))=R(X,p)≥0.
Thus, system (39) with the penalty function (40) satisfies all the conditions of Lemma 8. From Lemma 8, an L2 disturbance attenuation controller of system (39) can be designed as(42)v=-12ΛTΛ+12γ2Imy+η,which is the second part of (38), and, furthermore, the γ-dissipation inequality(43)H˙+∂TH∂XRX,p∂H∂X≤12γ2w2-z2holds along the trajectories of the closed-loop system consisting of (39) and (42).
Therefore, the feedback law (37) is an L2 disturbance attenuation controller of systems (31) and (32). According to [34], the L2 gain from w to z is less than γ. On the other hand, because (∂TH/∂X)R(X,p)(∂H/∂X)=x1TR11(x1,q1(x1),p1)x1+ξ1TR21(ξ1,q2(ξ1),p2)ξ1>0, from (43), we know that system (39) is asymptotically stable when w=0; that is, x1→0 and ξ1→0 (as t→∞). Moreover, it is clear that x2=q1(x1)→0, ξ2=q2(ξ1)→0 (as t→∞). Therefore, x=M1x¯→0 and ξ=M2ξ¯→0 (as t→∞). Thus, the admissible adaptive parallel control law (37) can simultaneously stabilize systems (31) and (32).Theorem 10.
Consider systems (31) and (32) with their equivalent forms (35) and (36), the penalty function (34), and the disturbance attenuation level γ>0. Assume that assumptions (A1)–(A3) hold for systems (35) and (36). If (1)
there exists a symmetric matrix K∈Rm×m such that (10) holds, and (44) −(1/2)(A1(x,p1)+A1^T(x,p1))+K11(x,x)+(1/(2γ²))(g1(x)g1^T(x)−d1(x)d1^T(x))>0, −(1/2)(A2(ξ,p2)+A2^T(ξ,p2))−K22(ξ,ξ)+(1/(2γ²))(g2(ξ)g2^T(ξ)−d2(ξ)d2^T(ξ))>0, where Kij(x,ξ)=gi(x)K gj^T(ξ), i,j=1,2;(2)
g1g2^T=0 and d1d2^T=0,
then the admissible adaptive parallel L2 disturbance attenuation controller (37) can simultaneously stabilize systems (31) and (32). Proof.
From the proof of Theorem9, we know that under the controller (37), systems (35) and (36) are impulse controllable and can be expressed as (39). From condition (2), it can be seen that (45)M1Tg1g2TM2=g¯11x¯0g¯21Tx¯0=g¯11x¯g¯21Tx¯000=0,that is, g¯11(x¯)g¯21T(x¯)=0, and in a similar way, we can obtain d¯11(x¯)d¯21T(x¯)=0. Moreover, according to condition (1), we have(46)M1T-12A1x,p1+A1Tx,p1+g1xKg1TxM1+12γ2M1Tg1xg1Tx-d1xd1TxM1=R¯111x¯,p1R¯112x¯,p1R¯112Tx¯,p1R¯122x¯,p1+12γ2g¯11x¯g¯11Tx¯-d¯11x¯d¯11Tx¯000>0.Thus,(47)R¯111x¯,p1+12γ2g¯11x¯g¯11Tx¯-d¯11x¯d¯11Tx¯≔R¯111x¯,p1+Cx¯>0.Since(48)NJ¯111-R¯111-CJ¯112-R¯112-J¯112T+R¯112TJ¯122-R¯122NT=J11-R11-C0∗J¯22-R¯22, where J¯111 is skew-symmetric and N is the same as that in (22), we have(49)R^1x¯,p1≔R11x¯,p1+12γ2g¯11x¯g¯11Tx¯-d¯11x¯d¯11Tx¯>0.In a similar way,(50)R^2ξ¯,p2≔R21ξ¯,p2+12γ2g¯21ξ¯g¯21Tξ¯-d¯21ξ¯d¯21Tξ¯>0.Therefore,(51)RX,p+12γ2GXGTX-DXDTX=R^1x1,q1x1,p1000R^2ξ1,q2ξ1,p20000≥0.Thus, system (39) with the penalty function (40) satisfies all the conditions of Lemma 8. From Lemma 8, an adaptive parallel L2 disturbance attenuation controller of system (39) can be designed as (42), and, furthermore, the γ-dissipation inequality(52)H˙+∂TH∂XRX,p+12γ2GXGTX-DXDTX∂H∂X≤12γ2w2-z2holds along the trajectories of the closed-loop system consisting of (39) and (42). Therefore, according to the proof of Theorem 9, the admissible controller (37) can simultaneously stabilize systems (31) and (32).Remark 11.
We can utilize the results obtained on adaptive parallel simultaneous stabilization and robust adaptive parallel simultaneous stabilization problems for two NDSs to investigate the same problems of more than two NDSs.Consider the followingN NDSs:(53)Eix˙i=fixi,pi+gixiu+dixiw,Eixi0=Eix0i,fi0,pi=fpipi,fi0,0=0,yi=giTxixi,i=1,2,⋯,N,where xi∈Rni, u∈Rm, w∈Rq, and yi∈Rm are the states, control input, external disturbances, and outputs of the N systems, respectively; pi is an unknown parameter perturbation vector and is assumed to be small enough to keep the dissipative structure unchanged; gi(xi)∈Rni×m, 0≤Ei∈Rni×ni, and 0< rank(Ei)=ri<ni, i=1,2,⋯,N.Given a disturbance attenuation levelγ>0, choose(54)z=Λ∑i=1Nyi,i=1,2,⋯,Nas the penalty function, where Λ∈Rs×m is a weighting matrix.Similar to Section2, we obtain the following forms:(55)Eix˙i=Aixi,piαixi,pi+gixiu+dixiw,yi=giTxixi,where αi(xi,pi)∈Rni is some vector of xi and pi satisfying αi(xi,0)=xi,i=1,2,⋯,N.Assume that(i1,i2,⋯,iN) is an arbitrary permutation of {1,2,⋯,N} and that L is a positive integer satisfying 1⩽L⩽N-1. Let T1=ni1+⋯+niL and T2=niL+1+⋯+niN.Now, we divide theN systems into two sets as follows:(56)EaX˙a=AaXa,paΓaXa,pa+GaXau+DaXaw,Ya=GaTXaXa,(57)EbX˙b=AbXb,pbΓbXb,pb+GbXbu+DbXbw,Yb=GbTXbXb,where Xa=[(xi1)T,⋯,(xiL)T]T∈RT1, Xb=[(xiL+1)T,⋯,(xiN)T]T∈RT2, pa=[pi1T,⋯,piLT]T, pb=[piL+1T,⋯,piNT]T,(58)Ea=diagEi1,⋯,EiL,Eb=diagEiL+1,⋯,EiN,AaXa,pa=diagAi1xi1,pi1,⋯,AiLxiL,piL,AbXb,pb=diagAiL+1xiL+1,piL+1,⋯,AiNxiN,piN,ΓaXa,pa=diagαi1xi1,pi1,⋯,αiLxiL,piL,ΓbXb,pb=diagαiL+1xiL+1,piL+1,⋯,αiNxiN,piN,Ya=yi1+⋯+yiL,Yb=yiL+1+⋯+yiN,GaXa=gi1Txi1,⋯,giLTxiLT,GbXb=giL+1TxiL+1,⋯,giNTxiNT,DaXa=di1Txi1,⋯,diLTxiLT,DbXb=diL+1TxiL+1,⋯,diNTxiNT.According to Section2, (56), (57), and Theorems 9 and 10, we can easily obtain an adaptive parallel simultaneous stabilization controller (w=0) and a robust adaptive parallel simultaneous stabilization controller of systems (53).Theorem 12.
Consider systems (53) (w=0) with their equivalent forms (55) (w=0), and assume that assumptions (A1) and (A2) hold (i=1,2,⋯,N). If there exist a symmetric matrix K∈Rm×m, a permutation (i1,i2,⋯,iN) of {1,2,⋯,N}, and a positive integer L (1⩽L⩽N−1) such that (59) Ra(Xa,pa)≔−(1/2)(Aa(Xa,pa)+Aa^T(Xa,pa))+Kaa(Xa,Xa)>0 and Rb(Xb,pb)≔−(1/2)(Ab(Xb,pb)+Ab^T(Xb,pb))−Kbb(Xb,Xb)>0, where (60) Kij(Xi,Xj)=Gi(Xi)K Gj^T(Xj), i,j=a,b, then the adaptive control law (61) u=−K(yi1+⋯+yiL−yiL+1−⋯−yiN)−Φ^Tθ^+v, θ^˙=QΦ∑i=1N yi, can simultaneously stabilize the N systems given by (53) (w=0), where v is a new reference input and θ^ and Q are the same as those in (11). Theorem 13.
Consider systems (53), the penalty function (54), and the disturbance attenuation level γ>0. Assume that assumptions (A1)–(A3) (i=1,2,⋯,N) hold. If (1)
there exist a symmetric matrix K∈Rm×m, a permutation (i1,i2,⋯,iN) of {1,2,⋯,N}, and a positive integer L (1⩽L⩽N−1) such that (59) holds;(2)
gi=di, i=1,2,⋯,N,
then the following robust adaptive parallel controller (62) u=−K(yi1+⋯+yiL−yiL+1−⋯−yiN)−((1/2)Λ^TΛ+(1/(2γ²))Im)∑i=1N yi−Φ^Tθ^, θ^˙=QΦ∑i=1N yi, can simultaneously stabilize the N systems given by (53). The block bookkeeping behind the grouping (56)–(58) is sketched below.
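As a side illustration (ours, with made-up toy data; `aggregate` is a hypothetical helper, not from the paper), the grouping (56)–(58) is pure bookkeeping: the selected subsystems are stacked block-diagonally, and their input matrices are stacked row-wise because all N systems share the same input u:

```python
import numpy as np
from scipy.linalg import block_diag

def aggregate(E_list, A_list, g_list):
    """Form the grouped data of (58): block-diagonal E and A and the
    row-stacked input matrix G = [g_{i1}^T, ..., g_{iL}^T]^T."""
    return block_diag(*E_list), block_diag(*A_list), np.vstack(g_list)

# two toy subsystems sharing one input channel (made-up data)
E = [np.diag([1.0, 0.0]), np.diag([2.0, 3.0, 0.0])]
A = [np.array([[-1.0, 0.5], [0.0, -2.0]]), -np.eye(3)]
g = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0], [0.0]])]
Ea, Aa, Ga = aggregate(E, A, g)
print(Ea.shape, Aa.shape, Ga.shape)   # (5, 5) (5, 5) (5, 1)
```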
## 4. An Illustrative Example
In the following, we give an illustrative example to show how to apply Theorem 9 to the robust adaptive parallel simultaneous stabilization of two NDSs. Example 14.
Consider the following two NDSs:(63)E1x˙=f1x,p+g1xu+d1w,E1x0=E1x0,f10,p=f1,pp,f10,0=0,y=g1Txx,(64)E2ξ˙=f2ξ,p+g2ξu+d2w,E2ξ0=E2ξ0,f20,p=f2,pp,f20,0=0,η=g2Tξξ,where x=[x~1,x~2,x~3]T∈R3,ξ=[ξ~1,ξ~2,ξ~3]T∈R3,u∈R2,w∈R2,(65)E1=000050001,E2=200020000,f1x,p=-x~1+2x~2-2x~1-x~23-x~2-2x~3-2p2x~3+2p,f2ξ,p=ξ~13-2ξ~1-ξ~2-p-2ξ~3-ξ~1-ξ~2-p2ξ~1-2ξ~3,g1x=d1x=002x~2-20,g2ξ=d2ξ=1ξ~11000.Choose the penalty functionz=Λ(y+η), where Λ is a weighting matrix.Noticing thatf1(0,0)=f2(0,0)=0, we obtain α1(x,p)=(x~1,x~2,x~3+p)T, α2(ξ,p)=(ξ~1,ξ~2+p,ξ~3)T, and (66)A1x,p=-120-2-1-x~22-2002,A2ξ,p=ξ~12-2-1-2-1-1020-2. It is easy to check that assumption (A2) is satisfied, where Φ=[-1,0] and θ=p. According to Theorem 9, we obtain the following forms of systems (63) and (64) by the output feedback u=-K(y-η)+v:(67)E1x˙=J1x,p-R1x,px+g1xKg2Tξξ+g1xv+d1xw+g1xΦTθ-θ^,y=g1Txx,(68)E2ξ˙=J2ξ,p-R2ξ,pξ-g2ξKg1Txx+g2ξv+d2ξw+g2ξΦTθ-θ^,η=g2Tξξ,where(69)K=0.800-1,J1x,p=020-20-1010,R1x,p=10004.2-2.20-2.21.2,J2ξ,p=00-2000200,R2ξ,p=1.20.200.20.20002.SinceE1≥0 and E2≥0, we can give nonsingular matrices (70)M1=0010550100,M2=22000220001.Moreover, it is clear that (A1) and (A3) are also satisfied. Thus, all the conditions of Theorem 9 hold. Therefore, an admissible adaptive parallel simultaneous stabilization controller of systems (63) and (64) can be designed as(71)u=-Ky-η-12ΛTΛ+12γ2Imy+η-ΦTθ^,θ^˙=QΦy+η.In order to test the effectiveness of the controller (71), we carry out some numerical simulations with the following choices: initial condition: E1x(0)=[0,-5,2]T, E2ξ(0)=[2,-4,0]T, θ^0=-0.5; parameter: γ=1, p=0.5, Q=1, and weighting matrix Λ=I2. To test the robustness of the controller with respect to external disturbances, we add a square-wave disturbance of amplitude [2,-4]T to the systems in the time duration [1s~2s]. The responses of the states, control signal, and θ^ are shown in Figures 1–3, respectively.Figure 1
Response of the state x.
Figure 2: Response of the state ξ.
Figure 3: The control u and the estimate θ^.
It can be observed from Figures 1–3 that the states quickly converge to the origin after the disturbance is removed. The simulation results show that the controller (71) is very effective in simultaneously stabilizing the two systems and has strong robustness against external disturbances and parameter perturbations. A small numerical sanity check of condition (10) for this example is given below.
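As a hedged cross-check (our own computation, not part of the paper), the closed-loop damping matrices of Lemma 5 can be evaluated directly from the data in (65)-(66) and the K of (69); the state-dependent terms cancel, reproducing the constant R1 and R2 reported in (69), and both are positive definite, so condition (10) holds:

```python
import numpy as np

K = np.array([[0.8, 0.0], [0.0, -1.0]])

def A1(x2):        # structural matrix A1(x, p) from (66); depends only on x~2
    return np.array([[-1.0, 2.0, 0.0],
                     [-2.0, -1.0 - x2**2, -2.0],
                     [0.0, 0.0, 2.0]])

def A2(xi1):       # structural matrix A2(xi, p) from (66)
    return np.array([[xi1**2 - 2.0, -1.0, -2.0],
                     [-1.0, -1.0, 0.0],
                     [2.0, 0.0, -2.0]])

g1 = lambda x2: np.array([[0.0, 0.0], [2.0, x2], [-2.0, 0.0]])    # g1 = d1, (65)
g2 = lambda xi1: np.array([[1.0, xi1], [1.0, 0.0], [0.0, 0.0]])   # g2 = d2, (65)

for s in np.linspace(-3.0, 3.0, 13):       # sample the state dependence
    R1 = -0.5 * (A1(s) + A1(s).T) + g1(s) @ K @ g1(s).T
    R2 = -0.5 * (A2(s) + A2(s).T) - g2(s) @ K @ g2(s).T
    assert np.all(np.linalg.eigvalsh(R1) > 0) and np.all(np.linalg.eigvalsh(R2) > 0)

M1 = np.array([[0.0, 0.0, 1.0], [0.0, 5**-0.5, 0.0], [1.0, 0.0, 0.0]])
print(np.round(M1.T @ np.diag([0.0, 5.0, 1.0]) @ M1, 12))   # diag(1, 1, 0), as in (14)
print("condition (10) holds at all sampled states")
```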
## 5. Conclusion
This paper has investigated the (robust) adaptive parallel simultaneous stabilization problems of a class of nonlinear descriptor systems via the dissipative matrix method. Firstly, under a suitable output feedback law, two nonlinear descriptor systems have been transformed into two equivalent nonlinear differential-algebraic systems by nonsingular transformations, and a sufficient condition for the two closed-loop systems to be impulse-free has been given. Then, the two systems have been combined to generate an augmented dissipative Hamiltonian differential-algebraic system, based on which an adaptive parallel simultaneous stabilization controller has been designed for the two systems via the Hamiltonian function method. When there are external disturbances in the two systems, a robust adaptive parallel simultaneous stabilization controller has been presented. Finally, the case of more than two nonlinear descriptor systems has also been investigated in this paper.
---
*Source: 1019569-2018-08-30.xml*
# A Note on Inclusion Intervals of Matrix Singular Values
**Authors:** Shu-Yu Cui; Gui-Xian Tian
**Journal:** Journal of Applied Mathematics
(2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/101957
---
## Abstract
We establish an inclusion relation between two known inclusion intervals of matrix singular values in some special case. In addition, based on the use of positive scale vectors, a known inclusion interval of matrix singular values is also improved.
---
## Body
## 1. Introduction
The set of all n-by-n complex matrices is denoted by ℂn×n. Let A=(aij)∈ℂn×n. Denote the Hermitian adjoint of the matrix A by A*. Then the singular values of A are the eigenvalues of (AA*)1/2. It is well known that matrix singular values play a key role in theory and practice. The location of singular values is very important in numerical analysis and many other applied fields. For more on singular values, readers may refer to [1–9] and the references therein. Let N={1,2,…,n}. For a given matrix A=(aij)∈ℂn×n, we denote the deleted absolute row sums and column sums of A by (1.1) ri=∑j≠i|aij|, ci=∑j≠i|aji|, i∈N,
respectively. On the basis of ri and ci, Geršgorin's disk theorem, Brauer's theorem, and Brualdi's theorem provide elegant inclusion regions for the eigenvalues of A (see [10–12]). Recently, some authors have made efforts to establish analogues of these theorems for matrix singular values, for example, as follows. Theorem A (Geršgorin-type [8]).
Let A=(aij)∈ℂn×n. Then all singular values of A are contained in
(1.2) G(A)≡⋃i=1,…,n Bi, with Bi={z≥0 : |z−ai|≤si},
where si=max{ri,ci} and ai=|aii| for each i∈N.Theorem B (Brauer-type [5]).
Let A=(aij)∈ℂn×n. Then all singular values of A are contained in
(1.3) B(A)≡⋃i≠j {z≥0 : |z−ai||z−aj|≤sisj}. Let S denote a nonempty subset of N, and let S̅=N∖S denote its complement in N. For a given matrix A=(aij)∈ℂn×n with n≥2, define partial absolute deleted row sums and column sums as follows: (1.4) riS(A)=∑j∈S∖{i}|aij|, riS̅(A)=∑j∈S̅∖{i}|aij|; ciS(A)=∑j∈S∖{i}|aji|, ciS̅(A)=∑j∈S̅∖{i}|aji|.
Thus, one splits each row sum ri and each column sum ci from (1.1) into two parts, depending on S and S̅, that is,(1.5)ri=riS(A)+riS̅(A),ci=ciS(A)+ciS̅(A).
Define, for each i∈S, j∈S̅, (1.6) GiS(A)={z≥0 : |z−ai|≤siS}, GjS̅(A)={z≥0 : |z−aj|≤sjS̅}, VijS(A)={z≥0 : (|z−ai|−siS)(|z−aj|−sjS̅)≤siS̅sjS},
where(1.7)siS=max{riS(A),ciS(A)},siS̅=max{riS̅(A),ciS̅(A)}.
For convenience, we will sometimes use riS (ciS, riS̅, ciS̅) to denote riS(A) (ciS(A), riS̅(A), ciS̅(A), resp.) unless a confusion is caused.Theorem C (modified Brauer-type [7]).
Let A=(aij)∈ℂn×n with n≥2. Then all singular values of A are contained in
(1.8)σ(A)⊆GVS(A)≡GS(A)∪VS(A),
where
(1.9) GS(A)=(⋃i∈S GiS(A))∪(⋃j∈S̅ GjS̅(A)), VS(A)=⋃i∈S,j∈S̅ VijS(A). A simple analysis shows that Theorem B improves Theorem A. On the other hand, Theorem C reduces to Theorem A if S=∅ or S̅=∅ (see Remark 2.3 in [7]). Now it is natural to ask whether or not an inclusion relation exists between Theorem B and Theorem C. In this note, we establish an inclusion relation between the inclusion interval of Theorem B and that of Theorem C in a particular situation. In addition, based on the use of positive scale vectors and their intersections, the inclusion interval of matrix singular values in Theorem C is also improved. (A short computational illustration of the Geršgorin-type set of Theorem A follows.)
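As a hedged aside (our own sketch; `gershgorin_type_interval` is a hypothetical helper, not from the note), Theorem A is straightforward to evaluate numerically: compute ai and si and merge the resulting intervals. Applied to the matrix (2.14) of Example 2.6 below, it returns [0, 4], matching G(A) there:

```python
import numpy as np

def gershgorin_type_interval(A):
    """Merged union of the intervals B_i = {z >= 0 : |z - a_i| <= s_i} of Theorem A."""
    A = np.abs(np.asarray(A, dtype=complex))
    a = np.diag(A)
    off = A - np.diag(a)
    s = np.maximum(off.sum(axis=1), off.sum(axis=0))
    pieces = sorted([max(0.0, ai - si), ai + si] for ai, si in zip(a, s))
    merged = [pieces[0]]
    for lo, hi in pieces[1:]:
        if lo <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], hi)
        else:
            merged.append([lo, hi])
    return merged

print(gershgorin_type_interval([[1, 2, 0], [1, 2, 1], [1, 0, 3]]))  # [[0.0, 4.0]]
```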
## 2. Main Results
In this section, we establish an inclusion relation between the inclusion interval of Theorem B and that of Theorem C in a particular situation. We first remark that Theorem B and Theorem C are incomparable, as the following example shows. Example 2.1.
Consider the following matrix: (2.1) A=(1, 0.1, 0.1, 0; 0, 2, 0, 0.1; 1, 0, 3, 0.1; 0, 1, 0, 4). Let S={1} and S̅={2,3,4}. Applying Theorem C, one gets (2.2) G1S(A)={z≥0 : |z−1|≤0}={1}, G2S̅(A)={z≥0 : |z−2|≤1}=[1,3], G3S̅(A)={z≥0 : |z−3|≤0.1}=[2.9,3.1], G4S̅(A)={z≥0 : |z−4|≤1}=[3,5], V12S(A)={z≥0 : (|z−1|)(|z−2|−1)≤0.1}=[0.6838,3.0488], V13S(A)={z≥0 : (|z−1|)(|z−3|−0.1)≤1}=[0.5707,3.5000], V14S(A)={z≥0 : (|z−1|)(|z−4|−1)≤0}={1}∪[3,5].
Hence, the inclusion interval of σ(A) is [0.5707,5].Now applying Theorem B, one gets(2.3){z≥0:|z-1||z-2|≤1.1}=[0.3381,2.6619],{z≥0:|z-1||z-3|≤1.1}=[0.5509,3.4491],{z≥0:|z-1||z-4|≤1}=[0.6972,1.3820]∪[3.6180,4.3028],{z≥0:|z-2||z-3|≤1.21}=[1.2917,3.7083],{z≥0:|z-2||z-4|≤1.1}=[1.5509,4.4491],{z≥0:|z-3||z-4|≤1.1}=[2.3381,4.6619].
Therefore, the inclusion interval of σ(A) is [0.3381,4.6619]. Example 2.1 shows that Theorem B and Theorem C are incomparable in the general case, but Theorem C may be better than Theorem B whenever the set S is chosen suitably, as the next example shows; a numerical cross-check of the Theorem B interval just obtained is sketched first.
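As a hedged cross-check (ours, not part of the note; `brauer_mask` is a made-up helper), the Brauer-type set B(A) of Theorem B can simply be traced on a fine grid; for the matrix (2.1) this reproduces the endpoints 0.3381 and 4.6619 computed above:

```python
import numpy as np

def brauer_mask(A, grid):
    """Indicator of the Brauer-type set B(A) of Theorem B on a 1-D grid z >= 0."""
    A = np.abs(np.asarray(A, dtype=float))
    a = np.diag(A)
    s = np.maximum(A.sum(axis=1) - a, A.sum(axis=0) - a)
    mask = np.zeros(grid.shape, dtype=bool)
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):
            mask |= np.abs(grid - a[i]) * np.abs(grid - a[j]) <= s[i] * s[j]
    return mask

A = [[1, 0.1, 0.1, 0], [0, 2, 0, 0.1], [1, 0, 3, 0.1], [0, 1, 0, 4]]
z = np.linspace(0.0, 6.0, 600001)
inside = brauer_mask(A, z)
print(round(z[inside].min(), 4), round(z[inside].max(), 4))  # 0.3381 4.6619
```

Example 2.2.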
Take S={1,2} and S̅={3,4} in Example 2.1. Applying Theorem C, one gets
(2.4)G1S(A)={z≥0:|z-1|≤0.1}=[0.9,1.1],G2S(A)={z≥0:|z-2|≤0.1}=[1.9,2.1],G3S̅(A)={z≥0:|z-3|≤0.1}=[2.9,3.1],G4S̅(A)={z≥0:|z-4|≤0.1}=[3.9,4.1],V13S(A)={z≥0:(|z-1|-0.1)(|z-3|-0.1)≤1}=[0.4858,3.5142],V23S(A)={z≥0:(|z-2|-0.1)(|z-3|-0.1)≤1}=[1.2820,3.7180],V14S(A)={z≥0:(|z-1|-0.1)(|z-4|-0.1)≤1}=[0.5972,1.5202]∪[3.4798,4.4028],V24S(A)={z≥0:(|z-2|-0.1)(|z-4|-0.1)≤1}=[1.4858,4.5142].
Hence, the inclusion interval of σ(A) is [0.4858,4.5142]. However, applying Theorem B, the inclusion interval of σ(A) is [0.3381,4.6619] (see Example 2.1). Example 2.2 shows that Theorem C is an improvement on Theorem B in some cases, but Theorem C is computationally involved. To simplify the calculations, we may consider the special case in which the set S is a singleton, that is, Si={i} for some i∈N. In this case, the associated sets from (1.6) become: (2.5) GiSi(A)={z≥0 : |z−ai|≤0}, GjS̅i(A)={z≥0 : |z−aj|≤sjS̅i}, (2.6) VijSi(A)={z≥0 : (|z−ai|)(|z−aj|−sjS̅i)≤si⋅max{|aij|,|aji|}}.
By a simple analysis, 𝒢iSi(A) and 𝒢jS̅i(A) are necessarily contained in 𝒱ijSi(A) for any j≠i, so from (1.8) we can simply write, for any i∈N, (2.7) σ(A)⊆𝒱Si(A)≡⋃j∈N∖{i}𝒱ijSi(A).
This shows that 𝒱Si(A) is determined by (n−1) sets 𝒱ijSi(A). The associated Geršgorin-type set G(A) from (1.2) is determined by n sets Bi (i∈N), and the associated Brauer-type set B(A) from (1.3) is determined by n(n−1)/2 sets. A computational sketch of these singleton sets is given below.
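As a hedged illustration (our own code; `singleton_set_mask` is a made-up name), the singleton sets (2.6)-(2.7) are easy to trace on a grid; on the matrix (2.14) of Example 2.6 below, this reproduces the three intervals [0,4.5616], [0,4.7321], and [0,4.6180] reported there:

```python
import numpy as np

def singleton_set_mask(A, i, grid):
    """Grid indicator of V^{S_i}(A) from (2.6)-(2.7) for the singleton S_i = {i}."""
    A = np.abs(np.asarray(A, dtype=float))
    a = np.diag(A)
    r = A.sum(axis=1) - a
    c = A.sum(axis=0) - a
    s = np.maximum(r, c)
    mask = np.zeros(grid.shape, dtype=bool)
    for j in range(len(a)):
        if j == i:
            continue
        sj_bar = max(r[j] - A[j, i], c[j] - A[i, j])   # s_j^{S-bar_i}
        mask |= np.abs(grid - a[i]) * (np.abs(grid - a[j]) - sj_bar) \
                <= s[i] * max(A[i, j], A[j, i])
    return mask

A = np.array([[1, 2, 0], [1, 2, 1], [1, 0, 3.0]])      # the matrix (2.14) below
z = np.linspace(0.0, 6.0, 600001)
for i in range(3):
    inside = singleton_set_mask(A, i, z)
    print(i + 1, round(z[inside].max(), 4))   # 4.5616, 4.7321, 4.618, as in Example 2.6
```

The following corollary is an immediate consequence of Theorem C. Corollary 2.3.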
Let A=(aij)∈ℂn×n with n≥2. Then all singular values of A are contained in
(2.8)σ(A)⊆V(A)≡⋂i∈NVSi(A).Proof.
From (2.7), we get the required result. Notice that 𝒱S1(A)=𝒱S2(A)=B(A) whenever n=2. Next, we will assume that n≥3. It is interesting to establish the relations between 𝒱Si(A) and G(A), as well as between 𝒱(A) and B(A). Definition 2.4 (see [9]).
A=(aij)∈ℂn×n is called a matrix with property 𝒜𝒮 (absolute symmetry) if |aij|=|aji| for any i,j∈N. Note that a matrix A with property 𝒜𝒮 is called a matrix with property B in [9]. Theorem 2.5.
Let A=(aij)∈ℂn×n with n≥3. If A is a matrix with property 𝒜𝒮, then, for each i∈N, (2.9) 𝒱Si(A)⊆G(A), 𝒱(A)⊆B(A). Proof.
Fix some i∈N and consider any z∈𝒱Si(A). Then, from (2.7), there exists a j∈N∖{i} such that z∈𝒱ijSi(A); that is, from (2.6),
(2.10)(|z-ai|)(|z-aj|-sjS̅i)≤siS̅imax{|aij|,|aji|}=si⋅|aij|,
where the last equality holds because A has the property 𝒜𝒮 (i.e., |aij|=|aji| for any i,j∈N).
Now assume that z∉G(A); then |z−ak|>sk for each k∈N, implying that |z−ai|>si≥0 and |z−aj|>sj≥0 for the above i,j∈N. Thus, the left-hand side of (2.10) satisfies
(2.11)(|z-ai|)(|z-aj|-sjS̅i)>si(sj-sjS̅i)=si⋅|aij|,
which contradicts the inequality (2.10). Hence, z∈𝒱Si(A) implies z∈G(A), that is, 𝒱Si(A)⊆G(A).
Next, we will show that 𝒱(A)⊆B(A). Since 𝒱Si(A)⊆G(A) for any i∈N, it follows from (2.8) that 𝒱(A)⊆G(A). Now consider any z∈𝒱(A), so that z∈𝒱Si(A) for each i∈N. Hence, for each i∈N, there exists a j∈N∖{i} such that z∈𝒱ijSi(A); that is, the inequality (2.10) holds. Since 𝒱(A)⊆G(A), there exists a k∈N such that |z−ak|≤sk. For this index k, there exists an l∈N∖{k} such that z∈𝒱klSk(A), that is,
(2.12)(|z-ak|)(|z-al|-slS̅k)≤skS̅kmax{|akl|,|alk|}=sk⋅|akl|.
Hence,
(2.13)|z-ak||z-al|≤|z-ak|slS̅k+sk⋅|akl|≤sk(slS̅k+|akl|)=sksl,
which implies z∈B(A). Since this is true for any z∈𝒱(A), we conclude that 𝒱(A)⊆B(A). This completes our proof. Remark that the condition “the matrix A has the property 𝒜𝒮” is necessary in Theorem 2.5, as the following example shows. Example 2.6.
Consider the following matrix: (2.14) A=(1, 2, 0; 1, 2, 1; 1, 0, 3).
Let Si={1}, Si={2}, and Si={3}. From (2.7), we get that the inclusion intervals of σ(A) are [0,4.5616], [0,4.7321], and [0,4.6180], respectively. Hence, applying Corollary 2.3, we have σ(A)⊆[0,4.5616]. However, applying Theorem A and Theorem B, we get σ(A)⊆G(A)=B(A)=[0,4], which implies that Theorem 2.5 fails if the condition “the matrix A has the property 𝒜𝒮” is omitted. In the following, we will give a new inclusion interval for matrix singular values, which improves that of Theorem C. The proof of this result is based on the use of scaling techniques. It is well known that scaling techniques play important roles in improving inclusion intervals for matrix singular values. For example, using positive scale vectors and their intersections, Qi [8] and Li et al. [6] obtained two new inclusion intervals (see Theorem 4 in [8] and Theorem 2.2 in [6], resp.), which improve those of Theorems A and B, respectively. Recently, Tian et al. [9], using this technique, also obtained a new inclusion interval (see Theorem 2.4 in [9]), which is an improvement on those of Theorem 2.2 in [6] and Theorem B. Theorem 2.7.
Let A=(aij)∈ℂn×n with n≥2, and let k=(k1,k2,…,kn)T be any vector with positive components. Then Theorem C remains true if one replaces the definitions of siS(A) and siS̅(A) by
(2.15)SiS(A)=max{RiS,CiS},SiS̅(A)=max{RiS̅,CiS̅},
where
(2.16) RiS=(1/ki)∑j∈S∖{i}|aij|kj, RiS̅=(1/ki)∑j∈S̅∖{i}|aij|kj; CiS=(1/ki)∑j∈S∖{i}|aji|kj, CiS̅=(1/ki)∑j∈S̅∖{i}|aji|kj.
Suppose that σ is any singular value of A. Then there exist two nonzero vectors x=(x1,x2,…,xn)T and y=(y1,y2,…,yn)T such that
(2.17) Ax=σy, A*y=σx,
(see Problem 5 of Section 7.3 in [11]).
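(As a hedged aside not in the note: the pair (2.17) is exactly what a numerical SVD returns, which makes it easy to check for any concrete A.)

```python
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 3.0]])         # any concrete matrix will do
U, S, Vh = np.linalg.svd(A)
x, y, sigma = Vh[0].conj(), U[:, 0], S[0]      # singular pair for the value sigma
print(np.allclose(A @ x, sigma * y))           # Ax  = sigma * y  -> True
print(np.allclose(A.conj().T @ y, sigma * x))  # A*y = sigma * x  -> True
```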
The fundamental equation (2.17) implies that, for each i∈N,
(2.18)σxi-a̅iiyi=∑j∈S∖{i}a̅jiyj+∑j∈S̅∖{i}a̅jiyj,σyi-aiixi=∑j∈S∖{i}aijxj+∑j∈S̅∖{i}aijxj.
Let xi=kix̂i, yi=kiŷi for each i∈N. Then the fundamental equations (2.18) become, for each i∈N,
(2.19)σx̂i-a̅iiŷi=1ki∑j∈S∖{i}a̅jikjŷj+1ki∑j∈S̅∖{i}a̅jikjŷj,σŷi-aiix̂i=1ki∑j∈S∖{i}aijkjx̂j+1ki∑j∈S̅∖{i}aijkjx̂j.
Denote zi=max{|x̂i|,|ŷi|} for each i∈N. Now, using the same technique as in the proof of Theorem 2.2 in [7], one gets the required result. Remarks 2.
Write the inclusion intervals in Theorem 2.7 as 𝔊𝔜S(A). Since k=(k1,k2,…,kn)T may be any vector with positive components, all singular values of A are contained in
(2.20)σ(A)⊆⋂k>0GYS(A).
Obviously, Theorem 2.7 reduces to Theorem C whenever k=(1,1,…,1)T, which implies that
(2.21)⋂k>0GYS(A)⊆GVS(A).
Hence, the inclusion interval (2.20) is an improvement on that of (1.8).
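As a final hedged sketch (ours; `scaled_sums` is a made-up helper), the scaled partial sums (2.16) are a one-line generalization of (1.4), and setting k=(1,…,1)T recovers Theorem C's sums, which is exactly the reduction noted above:

```python
import numpy as np

def scaled_sums(A, k, S):
    """The scaled partial sums R_i^S and C_i^S of (2.16) for a positive vector k.
    With k = (1, ..., 1) they reduce to r_i^S and c_i^S of (1.4)."""
    A = np.abs(np.asarray(A, dtype=float))
    k = np.asarray(k, dtype=float)
    n = A.shape[0]
    R, C = np.zeros(n), np.zeros(n)
    for i in range(n):
        idx = [j for j in S if j != i]
        R[i] = A[i, idx] @ k[idx] / k[i]
        C[i] = k[idx] @ A[idx, i] / k[i]
    return R, C

A = [[1, 0.1, 0.1, 0], [0, 2, 0, 0.1], [1, 0, 3, 0.1], [0, 1, 0, 4]]
S = [0, 1]                                   # S = {1, 2} of Example 2.2, 0-based
print(scaled_sums(A, np.ones(4), S))         # recovers r_i^S, c_i^S of (1.4)
print(scaled_sums(A, np.array([1.0, 1, 2, 2]), S))   # one hypothetical rescaling
```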
---
*Source: 101957-2012-06-06.xml*
## Abstract
We establish an inclusion relation between two known inclusion intervals of matrix singular values in some special case. In addition, based on the use of positive scale vectors, a known inclusion interval of matrix singular values is also improved.
---
## Body
## 1. Introduction
The set of alln-by-n complex matrices is denoted by ℂn×n. Let A=(aij)∈ℂn×n. Denote the Hermitian adjoint of matrix A by A*. Then the singular values of A are the eigenvalues of (AA*)1/2. It is well known that matrix singular values play a very key role in theory and practice. The location of singular values is very important in numerical analysis and many other applied fields. For more review about singular values, readers may refer to [1–9] and the references therein.LetN={1,2,…,n}. For a given matrix A=(aij)∈ℂn×n, we denote the deleted absolute row sums and column sums of A by(1.1)ri=∑j=1,≠in|aij|,ci=∑j=1,≠in|aji|,i∈N,
respectively. On the basis of ri and ci, the Geršgorin’s disk theorem, Brauer’s theorem and Brualdi’s theorem provide some elegant inclusion regions of the eigenvalues of A (see [10–12]). Recently, some authors have made efforts to establish analogues to these theorems for matrix singular values, for example, as follows.Theorem A (Geršgorin-type [8]).
LetA=(aij)∈ℂn×n. Then all singular values of A are contained in
(1.2)G(A)≡⋃i=1nBi,withBi={z≥0:|z-ai|≤si},
where si=max{ri,ci} and ai=|aii| for each i∈N.Theorem B (Brauer-type [5]).
LetA=(aij)∈ℂn×n. Then all singular values of A are contained in
(1.3)B(A)≡⋃i,j=1,i≠jn{z≥0:|z-ai||z-aj|≤sisj}.LetS denote a nonempty subset of N, and let S̅=N∖S denote its complement in N. For a given matrix A=(aij)∈ℂn×n with n≥2, define partial absolute deleted row sums and column sums as follows:(1.4)riS(A)=∑j∈S∖{i}|aij|,riS̅(A)=∑j∈S̅∖{i}|aij|;ciS(A)=∑j∈S∖{i}|aji|,ciS̅(A)=∑j∈S̅∖{i}|aji|.
Thus, one splits each row sum ri and each column sum ci from (1.1) into two parts, depending on S and S̅, that is,(1.5)ri=riS(A)+riS̅(A),ci=ciS(A)+ciS̅(A).
Define, for each i∈S, j∈S̅,(1.6)GiS(A)={z≥0:|z-ai|≤siS},GjS̅(A)={z≥0:|z-aj|≤sjS̅},VijS(A)={z≥0:(|z-ai|-siS)(|z-aj|-sjS̅)≤siS̅sjS,
where(1.7)siS=max{riS(A),ciS(A)},siS̅=max{riS̅(A),ciS̅(A)}.
For convenience, we will sometimes use riS (ciS, riS̅, ciS̅) to denote riS(A) (ciS(A), riS̅(A), ciS̅(A), resp.) unless a confusion is caused.Theorem C (modified Brauer-type [7]).
LetA=(aij)∈ℂn×n with n≥2. Then all singular values of A are contained in
(1.8)σ(A)⊆GVS(A)≡GS(A)∪VS(A),
where
(1.9)GS(A)=(⋃i∈SGiS(A))∪(⋃j∈S̅GjS̅(A)),VS(A)=⋃i∈S,j∈S̅VijS(A).A simple analysis shows that Theorem B improves Theorem A. On the other hand, Theorem C reduces to Theorem A ifS=∅ or S̅=∅ (see Remark 2.3 in [7]).Now it is natural to ask whether there exists an inclusion relation between Theorem B and Theorem C or not. In this note, we establish an inclusion relation between the inclusion interval of Theorem B and that of Theorem C in a particular situation. In addition, based on the use of positive scale vectors and their intersections, the inclusion interval of matrix singular values in Theorem C is also improved.
## 2. Main Results
In this section, we will establish an inclusion relation between the inclusion interval of Theorem B and that of Theorem C in a particular situation. We firstly remark that Theorem B and Theorem C are incomparable, for example, as follows.Example 2.1.
Consider the following matrix:(2.1)A=(10.10.100200.11030.10104).LetS={1} and S̅={2,3,4}. Applying Theorem C, one gets(2.2)G1S(A)={z≥0:|z-1|≤0}={1},G2S̅(A)={z≥0:|z-2|≤1}=[1,3],G3S̅(A)={z≥0:|z-3|≤0.1}=[2.9,3.1],G4S̅(A)={z≥0:|z-4|≤1}=[3,5],V12S(A)={z≥0:(|z-1|)(|z-2|-1)≤0.1}=[0.6838,3.0488],V13S(A)={z≥0:(|z-1|)(|z-3|-0.1)≤1}=[0.5707,3.5000],V14S(A)={z≥0:(|z-1|)(|z-4|-1)≤0}={1}∪[3,5].
Hence, the inclusion interval of σ(A) is [0.5707,5].Now applying Theorem B, one gets(2.3){z≥0:|z-1||z-2|≤1.1}=[0.3381,2.6619],{z≥0:|z-1||z-3|≤1.1}=[0.5509,3.4491],{z≥0:|z-1||z-4|≤1}=[0.6972,1.3820]∪[3.6180,4.3028],{z≥0:|z-2||z-3|≤1.21}=[1.2917,3.7083],{z≥0:|z-2||z-4|≤1.1}=[1.5509,4.4491],{z≥0:|z-3||z-4|≤1.1}=[2.3381,4.6619].
Therefore, the inclusion interval of σ(A) is [0.3381,4.6619].Example2.1 shows that Theorem B and Theorem C are incomparable in the general case, but Theorem C may be better than Theorem B whenever the set S is chosen suitably, for example, as follows.Example 2.2.
TakeS={1,2} and S̅={3,4} in Example 2.1. Applying Theorem C, one gets
(2.4)G1S(A)={z≥0:|z-1|≤0.1}=[0.9,1.1],G2S(A)={z≥0:|z-2|≤0.1}=[1.9,2.1],G3S̅(A)={z≥0:|z-3|≤0.1}=[2.9,3.1],G4S̅(A)={z≥0:|z-4|≤0.1}=[3.9,4.1],V13S(A)={z≥0:(|z-1|-0.1)(|z-3|-0.1)≤1}=[0.4858,3.5142],V23S(A)={z≥0:(|z-2|-0.1)(|z-3|-0.1)≤1}=[1.2820,3.7180],V14S(A)={z≥0:(|z-1|-0.1)(|z-4|-0.1)≤1}=[0.5972,1.5202]∪[3.4798,4.4028],V24S(A)={z≥0:(|z-2|-0.1)(|z-4|-0.1)≤1}=[1.4858,4.5142].
Hence, the inclusion interval of σ(A) is [0.4858,4.5142]. However, applying Theorem B, we get that the inclusion interval of σ(A) is [0.3381,4.6619] (see Example 2.1).Example2.2 shows that Theorem C is an improvement on Theorem B in some cases, but Theorem C is complex in calculation. In order to simplify our calculations, we may consider the following special case that the set S is a singleton, that is, Si={i} for some i∈N. In this case, the associated sets from (1.6) may be defined as the following sets:(2.5)GiSi(A)={z≥0:|z-ai|≤0},GjS̅i(A)={z≥0:|z-aj|≤sjS̅i},(2.6)VijSi(A)={z≥0:(|z-ai|)(|z-aj|-sjS̅i)≤simax{|aij|,|aji|}}.
By a simple analysis, 𝒢iSi(A) and 𝒢jS̅i(A) are necessarily contained in 𝒱ijSi(A) for any j≠i, we can simply write from (1.8) that, for any i∈N,(2.7)σ(A)⊆VSi(A)≡⋃j∈N∖{i}VijSi(A).
This shows that 𝒱Si(A) is determined by (n-1) sets 𝒱ijSi(A). The associated Geršgorin-type set G(A) from (1.2) is determined by n sets Bi(i∈N) and the associated Brauer-type set B(A) from (1.3) is determined by n(n-1)/2 sets. The following corollary is an immediate consequence of Theorem C.Corollary 2.3.
LetA=(aij)∈ℂn×n with n≥2. Then all singular values of A are contained in
(2.8)σ(A)⊆V(A)≡⋂i∈NVSi(A).Proof.
From (2.7), we get the required result.Notice that𝒱S1(A)=𝒱S2(A)=B(A) whenever n=2. Next, we will assume that n≥3. It is interesting to establish their relations between 𝒱Si(A) and G(A), as well as between 𝒱(A) and B(A).Definition 2.4 (see [9]).
A=(aij)∈ℂn×nis called a matrix with property𝒜𝒮(absolute symmetry) if|aij|=|aji|for anyi,j∈N.Note that a matrixA with property 𝒜𝒮 is said as A with property B in [9].Theorem 2.5.
LetA=(aij)∈ℂn×n with n≥3. If A is a matrix with property 𝒜𝒮, then for each i∈N(2.9)VSi(A)⊆G(A),V(A)⊆B(A).Proof.
Fix somei∈N and consider any z∈𝒱Si(A). Then from (2.7), there exists a j∈N∖{i} such that z∈𝒱ijSi(A), that is, from (2.6),
(2.10)(|z-ai|)(|z-aj|-sjS̅i)≤siS̅imax{|aij|,|aji|}=si⋅|aij|,
where the last equality holds as A has the property 𝒜𝒮(i.e., |aij|=|aji| for any i, j∈N).
Now assume thatz∉G(A), then |z-ak|>sk for each k∈N, implying that |z-ai|>si≥0 and |z-aj|>sj≥0 for above i,j∈N. Thus, the left part of (2.10) satisfies
(2.11)(|z-ai|)(|z-aj|-sjS̅i)>si(sj-sjS̅i)=si⋅|aij|,
which contradicts the inequality (2.10). Hence, z∈𝒱Si(A) implies z∈G(A), that is, 𝒱Si(A)⊆G(A).
Next, we will show that𝒱(A)⊆B(A). Since 𝒱Si(A)⊆G(A) for any i∈N, then, from (2.8), we get 𝒱(A)⊆G(A). Now consider any z∈𝒱(A), so that z∈𝒱Si(A) for each i∈N. Hence, for each i∈N, there exists a j∈N∖{i} such that z∈𝒱ijSi(A), that is, the inequality (2.10) holds. Since 𝒱(A)⊆G(A), there exists a k∈N such that |z-ak|≤sk. For this index k, there exists a l∈N∖{k} such that z∈𝒱klSk(A), that is,
(2.12)(|z-ak|)(|z-al|-slS̅k)≤skS̅kmax{|akl|,|alk|}=sk⋅|akl|.
Hence,
(2.13)|z-ak||z-al|≤|z-ak|slS̅k+sk⋅|akl|≤sk(slS̅k+|akl|)=sksl,
which implies z∈B(A). Since this is true for any z∈𝒱(A). Then 𝒱(A)⊆B(A). This completes our proof.Remark that the condition “the matrixA has the property 𝒜𝒮” is necessary in Theorem 2.5, for example, as follows.Example 2.6.
Consider the following matrix:(2.14)A=(120121103).
Let Si={1}, Si={2}, and Si={3}. From (2.7), we get that the inclusion intervals of σ(A) are [0,4.5616], [0,4.7321] and [0,4.6180], respectively. Hence, applying Corollary 2.3, we have σ(A)⊆[0,4.5616]. However, applying Theorem A and Theorem B, we get σ(A)⊆G(A)=B(A)=[0,4], which implies Theorem 2.5 is failling if the condition “the matrix A has the property 𝒜𝒮” is omitted.In the following, we will give a new inclusion interval for matrix singular values, which improves that of Theorem C. The proof of this result is based on the use of scaling techniques. It is well known that scaling techniques pay important roles in improving inclusion intervals for matrix singular values. For example, using positive scale vectors and their intersections, Qi [8] and Li et al. [6] obtained two new inclusion intervals (see Theorem 4 in [8] and Theorem 2.2 in [6], resp.), which improve these of Theorems A and B, respectively. Recently, Tian et al. [9], using this techniques, also obtained a new inclusion interval (see Theorem 2.4 in [9]), which is an improvement on these of Theorem 2.2 in [6] and Theorem B.Theorem 2.7.
LetA=(aij)∈ℂn×n with n≥2 and k=(k1,k2,…,kn)T be any vector with positive components. Then Theorem C remains true if one replaces the definition of siS(A) and siS̅(A) by
(2.15)SiS(A)=max{RiS,CiS},SiS̅(A)=max{RiS̅,CiS̅},
where
(2.16)RiS=1ki∑j∈S∖{i}|aij|kj,RiS̅=1ki∑j∈S̅∖{i}|aij|kj;CiS=1ki∑j∈S∖{i}|aji|kj,CiS̅=1ki∑j∈S̅∖{i}|aji|kj.Proof.
Suppose thatσ is any singular value of A. Then there exist two nonzero vectors x=(x1,x2,…,xn)T and y=(y1,y2,…,yn)T such that
(2.17)Ax=σy,A*y=σx,
(see Problem 5 of Section 7.3 in [11]).
The fundamental equation (2.17) implies that, for each i∈N,
(2.18)σxi-a̅iiyi=∑j∈S∖{i}a̅jiyj+∑j∈S̅∖{i}a̅jiyj,σyi-aiixi=∑j∈S∖{i}aijxj+∑j∈S̅∖{i}aijxj.
Letxi=kix̂i, yi=kiŷi for each i∈N. Then our fundamental equation (2.18) and become into, for each i∈N,
(2.19)σx̂i-a̅iiŷi=1ki∑j∈S∖{i}a̅jikjŷj+1ki∑j∈S̅∖{i}a̅jikjŷj,σŷi-aiix̂i=1ki∑j∈S∖{i}aijkjx̂j+1ki∑j∈S̅∖{i}aijkjx̂j.
Denotezi=max{|x̂i|,|ŷi|} for each i∈N. Now using the similar technique as the proof of Theorem 2.2 in [7], one gets the required result.Remarks 2.
Write the inclusion intervals in Theorem2.7 as 𝔊𝔜S(A). Since k=(k1,k2,…,kn)T is any vector with positive components, then all singular values of A are contained in
(2.20)σ(A)⊆⋂k>0GYS(A).
Obviously, Theorem 2.7 reduces to Theorem C whenever k=(1,1,…,1)T, which implies that
(2.21)⋂k>0GYS(A)⊆GVS(A).
Hence, the inclusion interval (2.20) is an improvement on that of (1.8).
---
*Source: 101957-2012-06-06.xml* | 2012 |
# Metallurgical Mechanism and Optical Properties of CuSnZnSSe Powders Using a 2-Step Sintering Process
**Authors:** Tai-Hsiang Liu; Fei-Yi Hung; Truan-Sheng Lui; Kuan-Jen Chen
**Journal:** Journal of Nanomaterials
(2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/101958
---
## Abstract
Cu2SnZn(S + Se)4 is an excellent absorber material for solar cells. This study obtained Cu2SnZn(S + Se)4 powders through solid state reaction by the ball milling and sintering processes from elemental Cu, Zn, Sn, S, and Se without using either polluting chemicals or expensive vacuum facilities. Ratios of S/(S + Se) in CuSnZnSSe were controlled from 0 to 1. The results showed that the 2-step sintering process (400°C for 12 hrs and then 700°C for 1 hr) was able to stabilize the composition and structure of the CuSnZnSSe powders. The crystallized intensity of the CuSnZnS matrix decreased with increasing Se content. Raising the Se content restrained the SnS phase and reduced the resistance of the absorber layer. In addition, Raman data confirmed that Se caused a Raman shift in the CuSnZnSSe matrix and enhanced the optical properties of the CuSnZnSSe powders. For the interface of the CuSnZnSSe film and the Mo substrate, Mo could diffuse into the CuSnZnSSe matrix after 200°C annealing. The interface thermal diffusion of CuSnZnSSe/ZnS improved the stacking effects and enhanced the structural stability.
---
## Body
## 1. Introduction
The development of CZTS (Cu2Zn1Sn1S4) has been a subject of focus in recent years [1, 2]. Due to the lower cost of the Zn and Sn elements compared with In and Ga in the CIGS system, CZTS is considered a potential substitute for CIGS in the future. In the literature [3–5], CZTS thin film has been formed in many ways, such as cosputtering [3], electroplated deposition [4], and pulsed laser deposition (PLD) [5]. However, the manufacturing cost is high, so development has been slow.In this research, we used mechanical milling of solid powders to synthesize CZTSSe powders, a low-cost route that yields a stable structure. Cu, Zn, Sn, and S have been used to form CZTS powders, but the low boiling point of S [6] makes it hard to control the composition of CZTS when the S vaporizes at higher temperatures. The boiling point of Se is higher than that of S, and Se can stabilize the CZTS powders. Therefore, this research controlled Cu, Zn, and Sn at 2 : 1 : 1 at.% and then mixed S and Se in different ratios to combine with the Cu, Zn, and Sn precursor to form the Cu2SnZn(S + Se)4 powders. During mixing, a 2-step sintering process was performed (400°C for 12 hrs controlled the concentrations of S and Sn; 700°C for 1 hr controlled the concentration of Se) to adjust the ratios of x = S/(S + Se). The 2-step sintering process is not only a continuous method but also offers metallurgical efficiency [7, 8], which helps to homogenize the compound powders. This study used the 2-step sintering process without using either polluting chemicals or expensive vacuum facilities to investigate the metallurgical mechanism of the CZTSSe powders. In addition, the morphology, crystalline structure, and optical properties of the CZTSSe powders were measured to examine the effect of Se addition. The effect of 200°C annealing on the interface diffusion of the ZnS/CZTSSe/Mo structure was also explored in the CZTSSe system.
## 2. Experimental Procedure
The Cu2SnZn(S + Se)4 powders were synthesized using pure Cu, Zn, Sn, S, and Se powders. The atomic ratio of Cu : Zn : Sn : (S + Se) was 2 : 1 : 1 : 4. The atomic ratio of Cu : Zn : Sn was fixed. Five atomic ratios containing pure S, S : Se = 3 : 1, S : Se = 1 : 1, S : Se = 1 : 3, and pure Se were mixed to obtain 5 types of Cu2SnZn(S + Se)4 powders. The ratio value was defined as x = S/(S + Se).The powders were milled for 1 hr in the designed molar ratio inside a crucible and then sintered in a furnace at 400°C for 12 hours (1st-step sintering). During this 400°C sintering, S, Se, and Sn turned to the liquid state and combined with Cu and Zn to form compounds. After this, the Cu2SnZn(S + Se)4 powders were sintered at 700°C for 1 hour (2nd-step sintering). The residual S and Se were vaporized from the Cu2SnZn(S + Se)4 powders. Finally, the powders were cooled to room temperature, and the measurement of crystallization and optical properties was performed.The morphology and crystalline structure of the powders were observed using SEM (Hitachi SU8000), TEM (JEOL JEM-1400), and XRD (Bruker AXS Gmbh, Karlsruhe, Germany). In addition, the compositions of the powders were determined using ICP (HEWLETT PACKARD 4500, JP) and EDS. The Raman spectra, reflection patterns, and resistance of the CZTSSe powders were measured to understand the contributions of the S and Se ratios [6, 9]. Each analysis datum is the average of 4 test results.In addition, the S : Se = 1 : 1 powder was deposited by thermal evaporation and combined with a ZnS film (obtained by an aqueous solution method) and a Mo substrate to form the CZTSSe/Mo specimen and the CZTSSe/ZnS/glass specimen (Figure 1). The interface diffusion mechanisms of the ZnS/CZTSSe/Mo structure were examined by TEM (JEOL JEM-1400) with EDS before and after 200°C annealing to explore the interface characteristics.Figure 1
Interface I and interface II of ZnS/CZTSSe/Mo structure.
## 3. Results and Discussion
The SEM morphologies of the five CZTSSe powders after the 2-step sintering process are shown in Figure 2. The powders were particle-like, and agglomeration was not obvious after mechanical milling. EDS analysis showed that the S/(S + Se) ratio of the powders complied with the designed proportion, and the average particle size of the powders was 160~220 nm. The powders could be applied for coating of devices, and their morphologies were similar to the powders in the literature [4]. In addition, the CZTSSe powders were examined by XRD to identify the phase structure (Figure 3). It was found that the diffraction peak angle of the CZTSSe powders reduced slightly with increasing Se content. The main reason is that the atomic radius of Se is larger than that of S [10]. Thus, Se atoms replacing S would cause the lattice to expand. According to diffraction theory, nλ = 2d sin θ, we have good grounds for thinking that the addition of Se increased the value of d and then reduced the value of θ in the CZTS system.
Figure 2
Morphology of CZTSSe powders. (a) CZTSe (Se: 100%, x = 0), (b) CZTSSe (S: 25% + Se: 75%, x = 0.25), (c) CZTSSe (S: 50% + Se: 50%, x = 0.5), (d) CZTSSe (S: 75% + Se: 25%, x = 0.75), and (e) CZTS (S: 100%, x = 1).
Figure 3
XRD of five CZTSSe powders.Notably, the combination of S and Se in the Cu-Zn-Sn matrix requires a stable sintering process. If the powders are only given the 1st-step sintering (without the 2nd step), the CZTS (S = 100%) will not only have the CZTS main diffraction planes but also have the SnS phase (Figure 4(a)). We attempted to extend our observation to the CZTSSe (S = 50%, Se = 50%) system (only 1st-step sintering, Figure 4(b)). The XRD patterns proved clearly that some pure Se phases remained in the CZTSSe matrix, but no SnS phase was found. It is clear that both the addition of Se and the 2-step sintering process are able to improve the crystallization of the CZTSSe system.
Figure 4
(a) XRD of CZTS (S: 100%), (b) XRD of CZTSSe (S: 50%, Se: 50%).
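As a quick illustration of the Bragg-shift argument above, the following sketch (our own; the d-spacings are assumed for illustration, not measured values from this study) evaluates nλ = 2d sin θ for a fixed Cu Kα wavelength and shows that a larger lattice spacing d indeed gives a smaller diffraction angle.

```python
import math

# Bragg's law n*lambda = 2*d*sin(theta): a larger spacing d (Se-expanded
# lattice) gives a smaller diffraction angle theta. The two d values below
# are hypothetical, chosen only to mimic a Se-induced lattice expansion.
lam = 1.5406   # Cu K-alpha wavelength in angstroms
n = 1          # diffraction order

for d in (3.13, 3.22):   # assumed spacings (angstroms) before/after adding Se
    theta = math.degrees(math.asin(n * lam / (2.0 * d)))
    print(f"d = {d:.2f} A -> 2-theta = {2 * theta:.2f} deg")
```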
The CZTSSe powders with a 2-step sintering process were compressed into ingots, and their electrical resistance was then measured using a 4-point probe analyzer. Figure 5 shows the electrical properties of the CZTSSe powders, and the CZTS (S = 100%) powder has the highest electrical resistance. The electrical resistances of the CZTSSe (S : Se = 1 : 1, x = 0.5) powder and the CZTSe (Se = 100%) powder were similar. Notably, the two CZTSSe powders with ratios S : Se = 3 : 1 (x = 0.75) and S : Se = 1 : 3 (x = 0.25) had the lowest electrical resistance. These electrical properties were closely related to the chemical composition and the phase structure. It is clear that adding Se can reduce the electrical resistance of CZTSSe powders. For the S : Se = 3 : 1 (x = 0.75) powder, an excess of S combined with Sn to form the SnS phase [10]. For the S : Se = 1 : 3 (x = 0.25) powder, some residual Se could not enter the matrix. For this reason, their electrical resistance was lower than that of the other powders. Recent reports [11, 12] claim that the electrical resistance of CZTS powder systems has still not been explored. We have experience in the electrical measurements of powders [13] and can confirm that the SnS phase and Se in the CZTSSe powders are the main phases affecting the electrical properties.
Figure 5
Resistance of five CZTSSe powders.The CZTSSe powders were subjected to Raman spectroscopy to observe their Raman shift characteristics. Figure 6 shows that Se addition caused a Raman shift in the CZTSSe powders (from 334.8 to 323.8 cm−1), and the shift increased with increasing Se content. Notably, a CZTSe (Se = 100%) peak was not found at 323~335 cm−1, but a ZnSe peak was found at 240.8 cm−1. In a word, adding Se affected the Raman results, and the CZTSe (Se = 100%) powder revealed a different Raman spectrum from the CZTSSe powders. The two main reasons are as follows: (1) adding Se prevented Sn from binding with S to form the SnS phase, which causes structural defects, and (2) some Se would inflate the lattice and cause a Raman shift in the CZTSSe powders. In short, the random distribution of S and Se atoms in the lattice resulted in fluctuations in the masses and force constants in the neighborhood [14, 15]. Because the electrical and optical properties of the CZTSSe (S : Se = 1 : 1) powders were improved, CZTSSe (S : Se = 1 : 1) was selected for TEM analysis.Figure 6
Raman of five CZTSSe powders.Figure 7 shows the TEM observations of the CZTSSe (S : Se = 1 : 1) powder. The CZTSSe powder was agglomerated, and the single particle size was about 160~220 nm. According to the EDS results and comparison with the literature [11, 12], the ratio S : Se = 13 : 15 (Figure 7(a)) approached the atomic ratio of 1 : 1. In addition, a bright field image (Figure 7(a)) and a dark field image (Figure 7(b)) reveal that the overlapping of powders and Se was uniform in the matrix. Figure 7(c) shows that the CZTSSe powder had a tetragonal structure which grew in the direction of the C-axis.
Figure 7
TEM observations of CZTSSe powders (S : Se = 13 : 17 at atomic ratio). (a) Bright field image with EDS data, (b) dark field image, and (c) SAED of CZTSSe powders.
Figure 8 shows the reflection percentage of the CZTSSe powders. We can be fairly certain that the CZTS (S = 100%) powder had the highest reflection percentage. As Se was added, the reflection percentage decreased. Judging from the above, for continuous-wavelength light, the absorption of the CZTSSe powder was better than that of the CZTS powder with pure sulfur. From the present data and a previous paper, it is clear that adding Se increases the absorption edge (nm) in the S-Se mixed system and then raises the reflection percentage. Therefore, when the wavelength is higher than the absorption edge, the absorption of CZTSe or CZTSSe is higher than that of the CZTS powder with pure sulfur. Figure 8 shows that the wavelength of the absorption edge of the CZTS powder was about 300 nm; thus, the reflection (R%) decreased significantly below 300 nm in wavelength.
Figure 8
Absorption-reflection detection of different ratios in CZTSSe (CZTSxSe1−x, x = 1.0, 0.5, 0).The CZTSSe (S : Se = 1 : 1) powders were deposited on a Mo substrate by thermal evaporation. Both as-deposited and annealed CZTSSe/Mo structures were examined by TEM [16–19]. According to Figure 1, interface I was observed in Figures 9 and 10. In fact, Mo atoms had diffused into the CZTSSe matrix due to the thermal diffusion induced by thermal evaporation, and the concentration of Mo in the surface of the CZTSSe film was about 1.5 at.%. After annealing, the concentration of Mo increased in the CZTSSe film, and the zone near the Mo substrate changed from a network structure into a continuous structure (EDS2~EDS3). No doubt the CZTSSe film was contaminated by Mo atoms, yet it still had a tetragonal structure (see the pattern of Figure 10). The same observation applies to interface II of the CZTSSe/ZnS/glass structure (Figure 1). In Figure 11, the thermally evaporated CZTSSe film (S : Se = 1 : 1) was deposited on a ZnS film. The CZTSSe film presented a stacking morphology, which is associated with the lower thermal conductivity of the ZnS/glass substrate. After annealing, the crystallization of the CZTSSe film was improved, enhancing the structural stability (Figure 12).
Figure 9
Interface observation of CZTSSe/Mo structure before annealing.Figure 10
Interface characteristic of CZTSSe/Mo structure after annealing.Figure 11
Interface observation of CZTSSe/ZnS structure before annealing.Figure 12
Crystallization of CZTSSe/ZnS structure after annealing.In the past, most laboratories focused on solar cell design for power performance. In fact, the interfaces of the structure significantly affect the power performance. The results of this study provide the interfacial properties relevant to solar cell design and assist in understanding the relationship between power performance and materials.
## 4. Conclusion
Adding Se stabilized the CZTSSe phase structure. It not only improved the electrical properties but also caused obvious peak shifts in the Raman spectrum. In addition, the absorption of the CZTSSe powder was higher than that of the CZTS powder.The five-element CZTSSe powder matrix was a tetragonal crystal. Both the addition of Se and the 2-step sintering process were able to improve the crystallization. After annealing, the CZTSSe/Mo structure showed an obvious thermal diffusion of Mo atoms, and the stacking of the CZTSSe/ZnS structure was improved. These effects can improve the design and application of solar cells.
---
*Source: 101958-2014-06-05.xml*
# An Improved Traveling-Wave-Based Fault Location Method with Compensating the Dispersion Effect of Traveling Wave in Wavelet Domain
**Authors:** Huibin Jia
**Journal:** Mathematical Problems in Engineering
(2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1019591
---
## Abstract
The fault-generated transient traveling waves are wide-band signals which cover the whole frequency range. When the frequency characteristic of the line parameters is considered, different frequency components of the traveling wave have different attenuation values and wave velocities, which is defined as the dispersion effect of the traveling wave. Because of the dispersion effect, the rise or fall time of the wavefront becomes longer, which decreases the singularity of the traveling wave and makes it difficult to determine the arrival time and velocity of the traveling wave. Furthermore, the dispersion effect seriously affects the accuracy and reliability of fault location. In this paper, a novel double-ended fault location method is proposed that compensates the dispersion effect of the traveling wave in the wavelet domain. From the propagation theory of the traveling wave, a correction function is established within a certain limited band to compensate the dispersion effect of the traveling wave. Based on the determined arrival time and velocity of the traveling wave, the fault distance can be calculated precisely by utilizing the proposed method. The simulation experiments have been carried out in ATP/EMTP software, and the simulation results demonstrate that, compared with the traditional traveling-wave fault location methods, the proposed method can significantly improve the accuracy of fault location. Moreover, the proposed method is insensitive to different fault conditions, and it adapts well to both transposed and untransposed transmission lines.
---
## Body
## 1. Introduction
Power systems have grown rapidly over the last few decades, and the number and length of transmission lines have increased. Transmission lines are exposed in the field and, especially in mountainous and hilly terrains, they are prone to failure. In this scenario, a fast and accurate fault location technique is essential to reduce the restoration time of power systems, which is important with respect to technical and economic issues. Therefore, the study and development of fault location have been motivated since the 1950s [1]. The traveling-wave-based fault location method has been proposed by many researchers because it is insensitive to load flow, transition resistance, wiring ways, and series compensation.The traveling-wave-based fault location methods for transmission lines can generally be classified as single- and double-ended methods in terms of their different ways of obtaining the fault information. For many years, the single-ended traveling-wave-based methods were recognized by utilities as a good way to overcome the drawbacks of impedance-based approaches [2]. However, these methods commonly have problems distinguishing between traveling waves reflected from the fault point and from power system terminals, which decreases the reliability of the fault location method [3]. Therefore, double-ended traveling-wave-based methods have been reported for overcoming these drawbacks. The double-ended method employs the data from the two ends of transmission lines; these data are synchronized by using GPS [4]. Moreover, the double-ended method has higher reliability and accuracy than the single-ended one.For the traveling-wave-based fault location method, the accuracy of fault location lies in the arrival time and velocity of the traveling wave. Several methods have been proposed to determine the arrival time of the traveling wave [5–10]. Wavelet transform has strong time-frequency analysis capability, which can effectively improve the accuracy of singular value detection. Therefore, it was first used for determining the arrival time of the traveling wave, which is represented by the wavelet modulus maxima point [5]. Hence, several researchers have exploited the continuous wavelet transform to extract the arrival time of the traveling wave [6–10]. As a matter of fact, it is easier to extract the singular value point when the mother wavelet is similar to the traveling wave. Therefore, the mother wavelet has been extracted from the fault transient signal to improve the accuracy of fault location in distribution networks [11, 12]. All the methods described above determine the arrival time of the wavefront with a singular value detection algorithm, without considering that different frequency components of the traveling wave have different arrival times.On the other hand, the wave velocity also directly affects the accuracy of fault location. Some researchers have proposed methods which are insensitive to the velocity of the traveling wave [13–15]. The method proposed in [13] exploited three-terminal synchronized transient data, which increases the cost of fault location and decreases its reliability. The method proposed in [14] is excessively dependent on the first consecutive transient wavefronts of the traveling wave, which makes it difficult to distinguish the wavefront of the reflected traveling wave.
The method in [15] needs a dedicated communication system to locate the faults, but unfortunately the latency of the dedicated communication system is uncertain, which increases the error of fault location. Currently, the speed of light is selected as the wave velocity in most traveling-wave-based fault location methods. A few researchers have made significant attempts to determine the wave velocity. Reference [16] calculated the wave velocity with the arrival time of the initial wave at the nonfaulted phase when an out-of-area fault happens. Reference [17] calculated the wave velocity with the high-frequency signal generated by autoreclosure. There is, however, a certain difference between the fault signal and the analyzed high-frequency signal. Many experts have discussed and attempted the problem of determining the wave velocity, but no satisfactory solution exists yet. Moreover, these methods also have not considered that different frequency components have different propagation velocities.When a fault happens, the generated traveling wave contains a lot of frequency components, which can be viewed as very important fault information. Because of the frequency-dependent parameters of the transmission line, each component has a different velocity and attenuation. This phenomenon is defined as the traveling-wave dispersion effect [18–20], which distorts the wavefront of the traveling wave and lengthens the fall or rise time of the wavefront. Therefore, it is difficult to accurately determine the arrival time and velocity of the traveling wave. In [18–20], the authors only analyzed the influence of the dispersion effect on traveling-wave-based fault location and did not provide a complete solution. In this paper, a correction method for the dispersion effect of the traveling wave is proposed. The correction method can shorten the fall or rise time of the wavefront and enhance the singularity of the traveling wave. Thus, the accuracy and reliability of fault location can be improved significantly.The paper is organized as follows. Section 2 introduces the principle of traveling-wave-based fault location. Section 3 analyzes the dispersion effect of the traveling wave. Section 4 proposes a correction method for the dispersion effect of the traveling wave in detail. Section 5 shows the analysis of simulations and results. Finally, Section 6 is the conclusion.
## 2. Principle of Traveling-Wave-Based Fault Location
### 2.1. Traveling-Wave Theory
When a fault happens on a transmission line, the generated voltage and current surges will travel towards both ends of the line. This is equivalent to generating a virtual voltage source at the fault point, as shown in Figure 1. The traveling wave is driven by the voltage source and travels towards both ends of the line. Its velocity is close to the velocity of light.Figure 1
Traveling-wave theory.Refraction and reflection will occur at the fault point, the ends of lines, and other discontinuity points. Thus, the generated reflected and refracted waves will propagate along the transmission line, as shown in Figure 2. The wavefront is detected when the initial traveling wave first reaches the M-side and N-side, denoted as M1 and N1. Then, the signal travels from the M-side (N-side) of the line back to the fault point F. The refracted wavefront N1 will reach the M-side, marked as M2, and the reflected wavefront M1 will go back to the M-side again, marked as M3. M1 and N1 are used for double-ended fault location in this paper.Figure 2
Reflection and refraction of traveling wave.
### 2.2. Double-Ended Traveling-Wave-Based Method
In the double-ended traveling-wave-based method, the distance between the fault point and the measuring points at the M and N terminals can be obtained according to
$$l_M=\frac{1}{2}\left[v\left(t_M-t_N\right)+L\right],\tag{1}$$
$$l_N=\frac{1}{2}\left[v\left(t_N-t_M\right)+L\right],\tag{2}$$
where $l_M$ and $l_N$ are the distances from the measuring points at the M and N terminals to the fault point, respectively, $t_M$ and $t_N$ are the arrival times of the initial wavefronts at the two terminals, $L$ is the total length of the transmission line, and $v$ is the propagation velocity of the traveling wave.
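A minimal Python sketch of (1)-(2) follows; the numbers are illustrative, not from the paper's simulations.

```python
def locate_fault(t_m, t_n, line_length_km, v_km_s):
    """Double-ended location, eqs. (1)-(2): distances from terminals M and N."""
    l_m = 0.5 * (v_km_s * (t_m - t_n) + line_length_km)
    return l_m, line_length_km - l_m

# Example: 300 km line, wave speed ~2.96e5 km/s, fault 100 km from terminal M;
# t_m and t_n are the GPS-synchronized arrival times of the initial wavefronts.
v, length = 2.96e5, 300.0
t_m, t_n = 100.0 / v, 200.0 / v
print(locate_fault(t_m, t_n, length, v))   # -> approximately (100.0, 200.0)
```

Note that the result is only as good as the assumed velocity $v$ and the detected arrival times, which is precisely what the dispersion correction developed later in the paper targets.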
### 2.3. Phase-to-Modal Transformation
There is an electromagnetic coupling effect between transmission lines. Therefore, by using a modal transformation matrix, we can decompose the traveling wave between phases into several independent modes. In the case of transposed transmission lines, the parameters of the transmission line are balanced, and the modal transformation can be defined as constant and real [21], such as the Karrenbauer transformation and the Clarke transformation. In this paper, the Karrenbauer transformation is adopted for decoupling the transposed transmission lines. The Karrenbauer transformation is as follows:
$$\begin{pmatrix}U_0\\ U_\alpha\\ U_\beta\end{pmatrix}=\frac{1}{3}\begin{pmatrix}1&1&1\\ -1&0&1\\ 0&-1&1\end{pmatrix}\begin{pmatrix}U_A\\ U_B\\ U_C\end{pmatrix},\tag{3}$$
where $U_A$, $U_B$, and $U_C$ are the phase voltages, $U_0$ is the ground mode component, and $U_\alpha$ and $U_\beta$ are two independent mode components. Each independent mode component has a different velocity and attenuation.For untransposed transmission lines, the phase-to-modal transformation matrix is frequency-dependent and unsymmetrical [22]. Fortunately, it has been shown that the frequency effect on the modal transformation matrix can be neglected when the frequency is high enough [23]. Therefore, the transformation matrix can be approximated as a real and frequency-independent one. In the case of untransposed transmission lines, the Karrenbauer transformation is also adopted in this paper.
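The decoupling in (3) is a single matrix-vector product. The sketch below (ours) applies the matrix exactly as transcribed above, so the exact sign convention should be treated as an assumption of this transcription.

```python
import numpy as np

# Karrenbauer-type phase-to-modal transformation from (3); the sign
# convention follows the transcription above and is an assumption.
K = np.array([[1.0, 1.0, 1.0],
              [-1.0, 0.0, 1.0],
              [0.0, -1.0, 1.0]]) / 3.0

u_phase = np.array([10.0, -4.0, -6.0])  # illustrative phase voltages U_A, U_B, U_C
u0, u_alpha, u_beta = K @ u_phase       # ground mode and two aerial (line) modes
print(u0, u_alpha, u_beta)
```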
### 2.4. Wavelet Transform
Wavelet transform (WT) is a common mathematical tool for digital signal processing. WT has been widely applied in lots of fields, such as time series analysis, speech processing, digital image processing, and power system transient analysis [24]. The continuous wavelet transform (CWT) of a signal $f(t)$ is the integral of the product between $f(t)$ and the daughter wavelet. The daughter wavelet is constructed from a mother wavelet, which is dilated with a scale parameter $a$ and translated by $b$, as defined in (5). Hence,
$$\mathrm{CWT}_f(a,b)=\int_{-\infty}^{\infty}f(t)\,\Psi_{a,b}^{*}(t)\,dt,\tag{4}$$
$$\Psi_{a,b}(t)=\frac{1}{\sqrt{a}}\,\Psi\!\left(\frac{t-b}{a}\right),\tag{5}$$
where $\Psi_{a,b}(t)$ is the daughter wavelet and $*$ is the complex conjugate operation; $a$ is the scale factor and $b$ is the translation factor. Wavelet transform has a time-frequency resolution: a short window is used for high frequencies, while a long window is used for low frequencies. Sharp signal transitions create large-amplitude wavelet coefficients. Large wavelet coefficients can detect and measure short high-frequency variations because they have narrow time localization at high frequency. Therefore, WT is very attractive for the analysis of transient signals [25].Wavelet transform can be regarded as a filter. The filtering form of the wavelet transform can be written as follows:
$$g(t)=\int_{-\infty}^{\infty}f(\tau)\,h(t-\tau)\,d\tau,\tag{6}$$
$$G(\omega)=F(\omega)\cdot H(\omega),\tag{7}$$
where $h(t)$ is the pulse response of the filter and $H(\omega)$ is the frequency response of the filter. Comparing (4) and (6), $h(t)$ can be described as
$$h(t)=a^{-1/2}\,\overline{\psi\!\left(-\frac{t}{a}\right)}.\tag{8}$$
$H(\omega)$ is then represented as
$$H(\omega)=\sqrt{a}\,\overline{\Psi(a\omega)}.\tag{9}$$
From the analysis above, the wavelet transform can be calculated by using the Fast Fourier Transform (FFT) and the Inverse Fast Fourier Transform (IFFT). Therefore, the wavelet transform of the traveling wave can be computed by the following equation:
$$\mathrm{CWT}_a=\mathrm{IFFT}\left(f(\omega)\,H(\omega)\right),\tag{10}$$
where $f(\omega)$ is the FFT of the traveling wave and $\mathrm{CWT}_a$ is the wavelet transform at the scale $a$, which is used for fault location. In this paper, the continuous wavelet transform is utilized to extract the wavefront of the traveling wave and determine its arrival time.
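As a rough sketch of how (10) can be evaluated in practice, the code below computes one CWT scale by frequency-domain filtering. The analytic Morlet mother wavelet is our choice for illustration; this passage of the paper does not fix a particular mother wavelet.

```python
import numpy as np

def cwt_one_scale(x, fs, a, w0=6.0):
    """One CWT scale via eq. (10): IFFT(FFT(signal) * H(w)), Morlet wavelet."""
    n = len(x)
    w = 2.0 * np.pi * np.fft.fftfreq(n, 1.0 / fs)          # angular frequencies
    # Frequency response of the dilated wavelet, cf. eq. (9); the analytic
    # Morlet spectrum is a Gaussian centered at w0/a, kept for w > 0 only.
    H = np.sqrt(abs(a)) * np.pi ** -0.25 * np.exp(-0.5 * (a * w - w0) ** 2) * (w > 0)
    return np.fft.ifft(np.fft.fft(x) * H)

fs = 1e6                                  # 1 MHz sampling rate (illustrative)
t = np.arange(2048) / fs
x = (t > 1.0e-3).astype(float)            # step-like "wavefront" at 1 ms
coeffs = cwt_one_scale(x, fs, a=16.0 / fs)
print("modulus maximum near sample", int(np.abs(coeffs).argmax()))  # ~1000
```

The modulus maximum of the coefficients localizes the sharp transition, which is exactly how the arrival time of the wavefront is picked in this class of methods.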
## 3. Analysis of Traveling-Wave Dispersion Effect
### 3.1. Analysis of Dispersion Effect in Frequency Domain
The traveling wave has a lot of frequency components. As a matter of fact, the distributed parameters of the transmission line are frequency-dependent, which results in different frequency components having different velocities and attenuation values. This phenomenon is defined as the dispersion effect of traveling wave [18].There are two important parameters for the traveling-wave dispersion effect. One is the propagation coefficient, and the other is the phase velocity. The propagation coefficient of the $m$-modal component is calculated as follows:
$$\gamma_m(\omega)=\sqrt{\left(R_m+j\omega L_m\right)\left(G_m+j\omega C_m\right)},\tag{11}$$
where $R_m$, $L_m$, $G_m$, and $C_m$ are the $m$-modal resistance, inductance, conductance, and capacitance per unit length, respectively, which are frequency-dependent because of the influence of the skin effect [26]. These distributed parameters can be calculated by using the Carson formulation according to the geometrical parameters of the transmission line. It can be observed from Figure 3 that the modal resistance increases with the increase of frequency, and the modal inductance decreases with the increase of frequency.Figure 3
Frequency-dependent parameters of transmission line.
(a)
Resistance varied with frequency (b)
Reactance varied with frequency (c)
Attenuation coefficient varied with frequency (d)
Propagation velocity varied with frequencyThe propagation coefficient is also described as follows:(12)γmω=αmω+jβmω,where αm is the attenuation coefficient and βm is the phase coefficient. They are also frequency-dependent parameters. The phase velocity can be calculated by phase coefficient βm, which is defined as(13)Vpm=ωβm.The modal attenuation coefficient and phase velocity, which vary with frequencies, are shown in Figures3(c) and 3(d). The frequency ranges from 1 Hz to 1 MHz.It can be seen from Figure3 that the ground-modal parameters are more affected by the frequency than the line-modal parameters. The line-modal component is widely used for the traveling-wave-based fault location. Nevertheless, the characteristics of frequency-dependent parameters should not be ignored for improving the accuracy of fault location. For line-modal parameters, attenuation coefficients and propagation velocities increase along with the increase of frequency. For attenuation coefficients, the change of the low frequency is less than that of high frequency; for propagation velocities, the change of the low frequency is greater than that of the high frequency. The characteristic of frequency-dependent parameters makes the rise or fall time of wavefront spread out, which decreases the singularity of transient traveling wave.
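The sketch below evaluates (11)-(13) over the 1 Hz to 1 MHz range of Figure 3. The per-kilometre parameter values and the square-root skin-effect law are illustrative assumptions, not the Carson-formula results used in the paper:

```python
import numpy as np

f = np.logspace(0, 6, 400)                 # 1 Hz to 1 MHz, as in Figure 3
w = 2.0 * np.pi * f

# Illustrative per-km line-modal parameters (assumed values):
R = 0.03 * (1.0 + np.sqrt(f) / 50.0)       # ohm/km, grows with sqrt(f) (skin effect)
L, G, C = 0.95e-3, 2e-8, 12e-9             # H/km, S/km, F/km

gamma = np.sqrt((R + 1j * w * L) * (G + 1j * w * C))   # eq. (11)
alpha, beta = gamma.real, gamma.imag                   # eq. (12): Np/km, rad/km
v_p = w / beta                                         # eq. (13): km/s

# Both attenuation and phase velocity increase with frequency for the line mode:
print(f"v_p(50 Hz)   ~ {np.interp(50.0, f, v_p):.4g} km/s")
print(f"v_p(100 kHz) ~ {np.interp(1e5, f, v_p):.4g} km/s")
```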
### 3.2. Analysis of Dispersion Effect in Time Domain
In the frequency domain, both the attenuation and the velocity of the traveling wave increase with frequency. In the time domain, the different frequency components therefore arrive at different times: the high-frequency components reach the measuring point first, while the low-frequency components are delayed much more. Consequently, the wavefronts detected at the measuring points are not ideal step signals; their rise or fall times lengthen, as can be seen from Figure 4. In the figure, the signal at the fault point is an ideal step. At the measuring point 150 kilometers away from the fault point, the fall time of the transient traveling wave is longer than at the fault point, and at 300 kilometers it is longer still. The fall time keeps growing with the distance between the measuring point and the fault point, so the singularity of the transient traveling wave gradually decreases. In summary, the dispersion effect distorts the wavefront, which makes it more difficult to detect; it therefore decreases the accuracy and reliability of fault location.

Figure 4: α-mode wavefronts at different measuring points.
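The wavefront flattening of Figure 4 can be reproduced qualitatively by propagating an ideal step through $e^{-\gamma(\omega)L}$ in the frequency domain. The line parameters below are the same assumed values as in the previous sketch:

```python
import numpy as np

fs = 1_000_000                            # 1 MHz sampling, as in Section 5.1
n = 2 ** 14
t = np.arange(n) / fs
step = np.where(t >= 1e-3, -1.0, 0.0)     # ideal falling step at the fault point

freqs = np.fft.rfftfreq(n, d=1.0 / fs)
w = 2.0 * np.pi * freqs

# Illustrative frequency-dependent line-modal parameters per km (assumed):
R = 0.03 * (1.0 + np.sqrt(freqs) / 50.0)
L, G, C = 0.95e-3, 2e-8, 12e-9
gamma = np.sqrt((R + 1j * w * L) * (G + 1j * w * C))     # eq. (11)

for dist_km in (150, 300):
    spec = np.fft.rfft(step) * np.exp(-gamma * dist_km)  # propagate the surge
    wave = np.fft.irfft(spec, n)
    lo, hi = 0.1 * wave.min(), 0.9 * wave.min()
    t10 = t[np.argmax(wave <= lo)]        # 10% crossing of the wavefront
    t90 = t[np.argmax(wave <= hi)]        # 90% crossing
    print(f"{dist_km} km: 10-90% fall time ~ {(t90 - t10) * 1e6:.1f} us")
```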
## 4. Correction Method of Traveling-Wave Dispersion Effect
### 4.1. Correction Method of Traveling-Wave Dispersion Effect
The propagation of forward and reverse traveling waves is shown in Figure 5. The directions "forward" and "reverse" are relative: a component that leaves a terminal is defined as "forward," and a component that enters a terminal from the line is defined as "reverse." The length of the transmission line between terminals M and N is l.

Figure 5: The propagation of forward and reverse traveling waves.

According to the propagation equation of a single-conductor line, the voltages $U_M$ and $U_N$ at terminals M and N are

$$U_N=F_N+B_N=F_N+F_M e^{-\gamma(\omega)l},\qquad U_M=F_M+B_M=F_M+F_N e^{-\gamma(\omega)l},\tag{14}$$

where $e^{-\gamma(\omega)l}$ is the propagation function. Comparing the terms in (14), each reverse traveling wave is

$$B_M=F_N e^{-\gamma(\omega)l},\qquad B_N=F_M e^{-\gamma(\omega)l}.\tag{15}$$

From signal and system theory, the reverse traveling wave $B_M$ is the frequency response of the forward traveling wave $F_N$; similarly, $B_N$ is the frequency response of $F_M$. For a lossy transmission line, $\gamma(\omega)$ varies with frequency; it distorts the wavefront of the traveling wave and decreases its singularity. In other words, $\gamma(\omega)$ causes the dispersion effect of traveling wave.

Likewise, the initial surge used by the double-ended fault location method is the frequency response of the signal at the fault point. Suppose the distance between the fault point and the measuring point is $L$ and the transient traveling wave at the fault point is $f(\omega)$ in the frequency domain, an ideal step signal in the time domain as shown in Figure 4. From the analysis above, the traveling wave at the measuring point can be described as $f(\omega)e^{-\gamma(\omega)L}$, that is, the fault signal distorted by $e^{-\gamma(\omega)L}$. Multiplying the measured traveling wave by a correction function $A(\omega)$ should recover the fault-point signal with a constant delay:

$$f(\omega)e^{-\gamma(\omega)L}A(\omega)=f(\omega)e^{-j\omega\tau},\tag{16}$$

$$\tau=\frac{L}{\upsilon},\tag{17}$$

where $\upsilon$ is set to the wave velocity at 50 Hz. $f(\omega)e^{-j\omega\tau}$ is the ideal signal for traveling-wave-based fault location because it is an ideal step with perfect singularity and a constant delay. Consequently, the correction function in the frequency domain is

$$A(\omega)=e^{\gamma(\omega)L-j\omega\tau}.\tag{18}$$

For fault location, the received traveling wave at the measuring point is multiplied by $A(\omega)$ in the frequency domain, which yields the ideal signal $f(\omega)e^{-j\omega\tau}$. In other words, after the correction all frequency components have the same velocity and attenuation as the 50 Hz component. When a fault happens on the transmission line, the forward or reverse traveling wave travels towards the measuring points, and the distorted wavefront of the initial traveling wave is corrected by the correction function: all frequency components arrive with the same constant delay and constant attenuation, which makes it easy to determine the arrival time and wave velocity accurately. The proposed correction method therefore significantly enhances the singularity of the received transient traveling wave.
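A minimal sketch of the correction (16)-(18) follows. The band limit f_max is an assumed implementation detail: since $e^{\gamma(\omega)L}$ grows with frequency, the correction must be restricted to a limited band so that it does not amplify measurement noise:

```python
import numpy as np

def apply_correction(u, fs, gamma, dist_km, v50_km_s, f_max=2.0e5):
    """Eqs. (16)-(18): multiply the received spectrum by
    A(w) = exp(gamma(w) * L - j * w * tau), with tau = L / v(50 Hz).
    `gamma` is gamma(w) per km sampled on np.fft.rfftfreq(len(u), 1/fs);
    the f_max cut-off is an assumed band limit."""
    n = len(u)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    w = 2.0 * np.pi * freqs
    tau = dist_km / v50_km_s                     # eq. (17): constant delay
    A = np.exp(gamma * dist_km - 1j * w * tau)   # eq. (18)
    A[freqs > f_max] = 0.0                       # crude band limit (assumed)
    spec = np.fft.rfft(u) * A                    # eq. (16): corrected spectrum
    return np.fft.irfft(spec, n)

# Applied to the dispersed 300 km wave of the previous sketch, the corrected
# wavefront is again close to an ideal step delayed by tau.
```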
### 4.2. Implementation of the Correction Method
When a fault happens, the three-phase voltages at the M and N measuring terminals are recorded. First, an approximate fault distance is obtained by wavelet transform, as described in Sections 2.2 and 2.4; this approximate distance is used to calculate the correction function. Second, several discrete frequencies are selected and the propagation function at each frequency is calculated from the R, L, and C parameters with the Carson formulation, after which the correction function A(ω) is obtained from (18). Third, the corrected traveling wave is obtained from (16) in the frequency domain and the fault location procedure is carried out by wavelet transform again. The specific steps are as follows (a code sketch of Steps 3-9 follows the list).

Step 1. The three-phase voltages are obtained when a fault happens.

Step 2. The transient three-phase voltages are decoupled into their independent modal components using (3); the α-mode traveling waves $u_M(t)$ and $u_N(t)$ are used for fault location.

Step 3. $u_M(\omega)$ and $u_N(\omega)$ of the α-mode traveling waves are obtained by the Fast Fourier Transform (FFT).

Step 4. At the scale $a$, the frequency response of the mother wavelet function, $H(\omega)$, is calculated using (9).

Step 5. At the scale $a$, the wavelet decomposition is obtained by IFFT according to (10):

$$WT_a[u_M(t)]=\mathrm{IFFT}\{u_M(\omega)\cdot H(\omega)\},\qquad WT_a[u_N(t)]=\mathrm{IFFT}\{u_N(\omega)\cdot H(\omega)\}.\tag{19}$$

Step 6. Based on the double-ended fault location theory described in Section 2.2, the approximate fault distances $l_M$ and $l_N$ are obtained.

Step 7. The propagation function is calculated from the Carson formulation, and the corresponding correction functions $A_M(\omega)$ and $A_N(\omega)$ are calculated for $u_M(\omega)$ and $u_N(\omega)$, respectively.

Step 8. At the scale $a$, the corrected wavelet decomposition is obtained by IFFT:

$$WT_a[u_M'(t)]=\mathrm{IFFT}\{u_M(\omega)\cdot A_M(\omega)\cdot H(\omega)\},\qquad WT_a[u_N'(t)]=\mathrm{IFFT}\{u_N(\omega)\cdot A_N(\omega)\cdot H(\omega)\}.\tag{20}$$

Step 9. Based on the double-ended fault location theory, the accurate fault distances $l_M'$ and $l_N'$ are obtained.

From the analysis above, the wavelet-transform-based fault location method is carried out in the frequency domain in Steps 3-6; it is implemented again in Steps 7-9, where the traveling wave is corrected in the frequency domain.
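The two-pass procedure of Steps 3-9 can be outlined as follows. Here H is the wavelet frequency response at the chosen scale (e.g., from the Section 2.4 sketch), gamma_of stands for an assumed Carson-formula routine, arrival times are taken at the wavelet modulus maxima, and all helper names are illustrative:

```python
import numpy as np

def locate_fault(u_m, u_n, fs, H, v50, line_km, gamma_of):
    """Sketch of Steps 3-9 under stated assumptions. `u_m`, `u_n` are the
    synchronized alpha-mode waves at terminals M and N; `gamma_of(freqs)`
    returns gamma(w) per km (assumed available)."""
    n = len(u_m)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    w = 2.0 * np.pi * freqs

    def arrival(u, A=None):
        spec = np.fft.rfft(u)                    # Step 3: FFT of alpha-mode wave
        if A is not None:
            spec = spec * A                      # Step 8: dispersion correction
        wt = np.fft.irfft(spec * H, n)           # Steps 5/8: eqs. (19)/(20)
        return np.argmax(np.abs(wt)) / fs        # modulus-maximum arrival time

    # Step 6: approximate distances from the uncorrected waves, eqs. (1)-(2).
    t_m, t_n = arrival(u_m), arrival(u_n)
    l_m = 0.5 * (v50 * (t_m - t_n) + line_km)
    l_n = line_km - l_m

    # Step 7: correction functions from the approximate distances, eq. (18).
    A_m = np.exp(gamma_of(freqs) * l_m - 1j * w * (l_m / v50))
    A_n = np.exp(gamma_of(freqs) * l_n - 1j * w * (l_n / v50))

    # Step 9: refined distances from the corrected waves.
    t_m2, t_n2 = arrival(u_m, A_m), arrival(u_n, A_n)
    l_m2 = 0.5 * (v50 * (t_m2 - t_n2) + line_km)
    return l_m2, line_km - l_m2
```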
## 5. Simulations and Results Analysis
### 5.1. The Model of Simulation
A model of a 500 kV transposed transmission line is constructed in ATP/EMTP, as shown in Figure 6(a); the geometry of the line is shown in Figure 6(b). The classical J. Marti model is adopted to represent the frequency characteristic of the line. A voltage measuring terminal at each transformer acquires the fault voltage data. The total length of the transmission line is 414 km, the fault distance is 100 km, and an A-phase grounding fault is applied.

Figure 6: Model of transmission line. (a) 500 kV transmission line model; (b) geometry structure of transmission line.

In the ATP/EMTP software, the sampling rate is set to 1 MHz. The traveling-wave data generated in ATP are imported into MATLAB, where the correction method is implemented. The α-mode component is used for fault location. Because the velocity is frequency-dependent, the α-mode velocity of the traveling wave is calculated at 50 Hz, which gives $2.9724 \times 10^5$ km/s.
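With the model constants of this section, the double-ended location formulas (1)-(2) of Section 2.2 reduce to a few lines; the arrival times below are hypothetical values chosen to place the fault 100 km from terminal M:

```python
V_50HZ = 2.9724e5      # alpha-mode velocity at 50 Hz, km/s (this section)
LINE_KM = 414.0        # total line length, km (this section)

def double_ended_distance(t_m, t_n, v=V_50HZ, total=LINE_KM):
    """Eqs. (1)-(2): distances from terminals M and N to the fault point,
    given synchronized arrival times t_m, t_n in seconds."""
    l_m = 0.5 * (v * (t_m - t_n) + total)
    return l_m, total - l_m

# Hypothetical synchronized arrival times (seconds):
l_m, l_n = double_ended_distance(t_m=3.3645e-4, t_n=1.0564e-3)
print(f"l_M = {l_m:.1f} km, l_N = {l_n:.1f} km")   # -> about 100 km and 314 km
```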
### 5.2. Simulation Waveform Analysis
Several discrete frequencies are selected and the propagation function at each frequency is calculated from the R, L, and C parameters with the Carson formulation; the correction function A(ω) is then obtained from (18). Once A(ω) is available, it is used to correct the received traveling wave according to (16). The corrected traveling wave is shown in Figure 7 for an A-phase grounding fault with a transition resistance of 10 Ω and a fault inception angle of 106°.

Figure 7: Corrected waveforms with the distance of 345 km.

As can be seen from Figure 7, the fall time of the corrected traveling wave is shorter than that of the uncorrected one, which enhances the singularity of the traveling wave; that is, the corrected traveling wave shows more singularity than the uncorrected one. The proposed correction method therefore makes the wavefront easier to detect with extraction algorithms such as the wavelet transform and the HHT. The comparison is also carried out in the wavelet domain: as shown in Figure 8, the modulus maximum of the uncorrected wavelet component is smaller than that of the corrected wavelet component.

Figure 8: Comparisons of the wavelet coefficients.
### 5.3. Results of Fault Location
The fault location accuracy of the traditional method and the proposed method is compared. The traditional method is based on the wavelet transform (Morlet wavelet function) without the correction method proposed in this paper. To test the applicability of the proposed method, various fault conditions are simulated. In the simulated experiments, the fault location error is used as the performance index:

$$\text{error}=\frac{\left|l'-l\right|}{S}\times 100\%,\tag{21}$$

where $l'$ is the calculated fault distance, $l$ is the actual fault distance, and $S$ is the total length of the transmission line.
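For reference, a minimal sketch of the performance index (21), evaluated with hypothetical values:

```python
def location_error(l_calc_km, l_true_km, s_total_km=414.0):
    """Eq. (21): fault-location error as a percentage of total line length."""
    return abs(l_calc_km - l_true_km) / s_total_km * 100.0

# Hypothetical example: a fault located at 100.5 km when the true distance is
# 100 km on the 414 km test line gives an error of about 0.12%.
print(f"{location_error(100.5, 100.0):.2f} %")
```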
#### 5.3.1. Performance of Different Transition Resistance Values
The transition resistance directly affects the voltage amplitude of the initial traveling wave. The errors of fault location are therefore analyzed as the transition resistance varies from 10 Ω to 500 Ω. The distance between the fault point and the measuring terminal is 100 km, the fault type is A-G, and the fault inception angle is 106°. The simulated results are shown in Tables 1 and 2. As the tables show, the location error of the proposed method is smaller than that of the traditional one regardless of the transition resistance.

Table 1: Error under different transition resistance values (transposed circuit).

| Transition resistance (Ω) | Traditional: location (km) | Traditional: error (%) | Proposed: location (km) | Proposed: error (%) |
| --- | --- | --- | --- | --- |
| 10 | 100.30 | 0.10 | 100.23 | 0.07 |
| 50 | 99.50 | 0.18 | 99.60 | 0.13 |
| 100 | 99.34 | 0.22 | 99.62 | 0.12 |
| 500 | 98.70 | 0.43 | 99.30 | 0.23 |

Table 2: Error under different transition resistance values (untransposed circuit).

| Transition resistance (Ω) | Traditional: location (km) | Traditional: error (%) | Proposed: location (km) | Proposed: error (%) |
| --- | --- | --- | --- | --- |
| 10 | 100.57 | 0.19 | 100.30 | 0.10 |
| 50 | 101.20 | 0.40 | 100.56 | 0.19 |
| 100 | 98.75 | 0.42 | 99.18 | 0.27 |
| 500 | 101.02 | 0.34 | 100.55 | 0.18 |
#### 5.3.2. Performance of Different Inception Angles
The fault inception angle is one of the parameters that most strongly influence the accuracy of traveling-wave-based fault location. To test its influence, simulated cases are constructed with the fault inception angle varied from 18° to 90°; the fault type is A-G, the transition resistance is 10 Ω, and the fault distance is 100 kilometers. The experiment results are shown in Tables 3 and 4. The proposed method shows great adaptability to the different inception angles.

Table 3: Error under different inception angles (transposed circuit).

| Fault inception angle | Traditional: location (km) | Traditional: error (%) | Proposed: location (km) | Proposed: error (%) |
| --- | --- | --- | --- | --- |
| 18° | 100.60 | 0.20 | 100.42 | 0.07 |
| 36° | 100.31 | 0.10 | 100.22 | 0.10 |
| 72° | 100.86 | 0.29 | 99.96 | 0.18 |
| 90° | 100.80 | 0.27 | 100.78 | 0.26 |

Table 4: Error under different inception angles (untransposed circuit).

| Fault inception angle | Traditional: location (km) | Traditional: error (%) | Proposed: location (km) | Proposed: error (%) |
| --- | --- | --- | --- | --- |
| 18° | 100.85 | 0.28 | 100.50 | 0.17 |
| 36° | 100.62 | 0.21 | 100.40 | 0.13 |
| 72° | 99.33 | 0.22 | 99.60 | 0.13 |
| 90° | 100.56 | 0.19 | 100.33 | 0.11 |
#### 5.3.3. Performance of Different Fault Types
To investigate the effect of different fault types on the proposed method, several experiments have been carried out; the results are shown in Tables 5 and 6. In these experiments, the transition resistance is 10 Ω, the fault inception angle is 106°, and the fault distance is 100 kilometers.

Table 5: Error under different fault types (transposed circuit).

| Fault type | Traditional: location (km) | Traditional: error (%) | Proposed: location (km) | Proposed: error (%) |
| --- | --- | --- | --- | --- |
| A-G | 100.30 | 0.10 | 100.23 | 0.08 |
| B-G | 99.50 | 0.17 | 99.63 | 0.12 |
| BC | 99.21 | 0.26 | 100.45 | 0.15 |
| BC-G | 100.87 | 0.29 | 100.14 | 0.05 |
| ABC-G | 99.25 | 0.25 | 99.45 | 0.18 |

Table 6: Error under different fault types (untransposed circuit).

| Fault type | Traditional: location (km) | Traditional: error (%) | Proposed: location (km) | Proposed: error (%) |
| --- | --- | --- | --- | --- |
| A-G | 100.57 | 0.19 | 100.30 | 0.10 |
| B-G | 99.54 | 0.15 | 99.64 | 0.12 |
| BC | 101.05 | 0.35 | 100.85 | 0.28 |
| BC-G | 100.78 | 0.26 | 100.23 | 0.08 |
| ABC-G | 100.50 | 0.16 | 100.26 | 0.09 |
#### 5.3.4. Performance of Antinoise
To investigate the antinoise performance under different noise levels, several experiments have been carried out. In these experiments, the fault type is BC-G, the transition resistance is 10 Ω, the fault inception angle is 106°, and the fault distance is 100 kilometers. Since noise carries no meaningful information about the fault location, it is rational to define a threshold that rejects the noise-associated WT coefficients [8]. The simulation results in Table 7 show that the added noise has little influence on traveling-wave-based fault location.

Table 7: Error under different noise levels (transposed circuit).

| Noise (dBW) | Traditional: location (km) | Traditional: error (%) | Proposed: location (km) | Proposed: error (%) |
| --- | --- | --- | --- | --- |
| 10 | 100.78 | 0.26 | 100.14 | 0.05 |
| 20 | 100.78 | 0.26 | 100.15 | 0.05 |
| 30 | 99.21 | 0.2633 | 100.30 | 0.10 |
| 40 | 100.76 | 0.2533 | 100.21 | 0.07 |
#### 5.3.5. Performance Analysis of Geometry Parameters
The manufacturing error of the towers supporting the transmission line is very small, within 1 cm. The geometrical parameters of corner towers and of towers inside substations sometimes differ from those of the other towers along the line, but such towers are few and can be neglected. From the point of view of theoretical calculation, the relative positions of the line conductors are therefore assumed unchanged. On the other hand, the height of the line conductors sometimes increases where the transmission line passes over a road, village, city, mountain, and so forth. Such changes are complicated, and it is very difficult to model the real transmission line exactly. To analyze the applicability of the proposed method, the length of the line section supported by towers of modified height is set to 20 km. In these experiments, the fault type is A-G, the transition resistance is 10 Ω, the fault inception angle is 106°, and the fault distance is 100 kilometers. The simulation results in Table 8 show that the proposed method also improves the accuracy of fault location when the tower height varies.

Table 8: Error under different tower heights (transposed circuit).

| Tower height change (m) | Traditional: location (km) | Traditional: error (%) | Proposed: location (km) | Proposed: error (%) |
| --- | --- | --- | --- | --- |
| 0 | 100.60 | 0.20 | 100.14 | 0.05 |
| +5 | 100.47 | 0.16 | 100.22 | 0.07 |
| +10 | 99.18 | 0.27 | 100.45 | 0.15 |
| +20 | 100.80 | 0.27 | 100.55 | 0.18 |
## 6. Conclusions
In this paper, the dispersion characteristic of the traveling wave is analyzed in the time and frequency domains. As the traveling wave travels along the transmission line, the fall or rise time of the wavefront lengthens with the propagation distance, so the singularity of the transient wavefront decreases. A novel double-ended fault location method has been proposed to overcome this dispersion effect: a correction algorithm compensates the dispersion of the traveling wave and thereby enhances the singularity of the transient traveling wave. The proposed method is tested under various experimental conditions, such as different fault distances, transition resistances, and fault inception angles. The simulation experiments demonstrate that the proposed method outperforms the traditional traveling-wave-based fault location method and that it is suitable for both transposed and untransposed transmission lines. These advantages show that the proposed method is practical. Several other factors also affect the slope of the traveling wave, such as the impedance characteristic of the traveling-wave measurement equipment and shunt reactors connected to the line; future work will consider these factors.
---
*Source: 1019591-2017-02-08.xml*
## Abstract
The fault generated transient traveling waves are wide band signals which cover the whole frequency range. When the frequency characteristic of line parameters is considered, different frequency components of traveling wave will have different attenuation values and wave velocities, which is defined as the dispersion effect of traveling wave. Because of the dispersion effect, the rise or fall time of the wavefront becomes longer, which decreases the singularity of traveling wave and makes it difficult to determine the arrival time and velocity of traveling wave. Furthermore, the dispersion effect seriously affects the accuracy and reliability of fault location. In this paper, a novel double-ended fault location method has been proposed with compensating the dispersion effect of traveling wave in wavelet domain. From the propagation theory of traveling wave, a correction function is established within a certain limit band to compensate the dispersion effect of traveling wave. Based on the determined arrival time and velocity of traveling wave, the fault distance can be calculated precisely by utilizing the proposed method. The simulation experiments have been carried out in ATP/EMTP software, and simulation results demonstrate that, compared with the traditional traveling-wave fault location methods, the proposed method can significantly improve the accuracy of fault location. Moreover, the proposed method is insensitive to different fault conditions, and it is adaptive to both transposed and untransposed transmission lines well.
---
## Body
## 1. Introduction
Power systems have grown rapidly over the last few decades, and the number and length of transmission lines increased. Transmission lines are exposed in the field, and, especially in the mountains and hilly terrains, they are prone to failure. In this scenario, a fast and accurate fault location technique is essential to reduce the restoration time of power systems, which is important with respect to technical and economic issues. Therefore, the study and development of fault location have been motivated since the 1950s [1]. The traveling-wave-based fault location method has been proposed by lots of researchers because these are insensitive to load flow, transition resistance, wiring ways, and series compensation.The traveling-wave-based fault location methods for transmission lines can generally be classified as single- and double-ended methods in terms of their different ways of obtaining the fault information. For many years, the single-ended traveling-wave-based methods were recognized by utilities as a good way to overcome the drawbacks of impedance-based approaches [2]. However, these methods commonly have problems of distinguishing between traveling waves reflected from the fault point and from power system terminals, which decreases the reliability of the fault location method [3]. Therefore, double-ended traveling-wave-based methods have been reported for overcoming these drawbacks. The double-ended method employs the data from the two ends of transmission lines; these data are synchronized by using GPS [4]. And the double-ended method has higher reliability and accuracy than the single-ended one.For the traveling-wave-based fault location method, the accuracy of fault location lies in the arrival time and velocity of traveling wave. Several methods have been proposed to determine the arrival time of traveling wave [5–10]. Wavelet transform has strong time-frequency analysis capability, which can effectively improve the accuracy of the singular value detection. Therefore, it has been firstly used for determining the arrival time of traveling wave, which is represented by the wavelet modulus maxima point [5]. Hence, several researchers have exploited the continuous wavelet transform to extract the arrival time of traveling wave [6–10]. As a matter of fact, it can be easier to extract the singular value point when mother wavelet is similar to traveling wave. Therefore, the mother wavelet has been extracted from the fault transient signal to improve the accuracy of fault location in distribution network [11, 12]. All these methods described above determine the arrival time of wavefront with singular value detection algorithm without considering different frequency components of traveling wave having different arrival times.On the other hand, the wave velocity also directly affects the accuracy of fault location. Some researchers have proposed some methods which are insensitive to the velocity of traveling wave [13–15]. The method proposed in [13] has exploited the three-terminal synchronized transient data, which increases the cost of fault location and decreases the reliability of fault location. The method proposed in [14] is excessively dependent on the first consecutive transient wavefronts of traveling wave, which makes it difficult to distinguish the wavefront of the reflected traveling wave. 
The method in [15] needs a dedicated communication system to locate the faults, but unfortunately the latency of the dedicated communication system is uncertain, which increases the error of fault location. Currently, the light speed is selected in most traveling-wave-based fault location methods. A few researchers have made significant attempts to determine the wave velocity. Reference [16] calculated the wave velocity with the arrival time of initial wave at nonfault phase when an out-area fault happens. Reference [17] calculated the wave velocity with high-frequency signal generated by autoreclosure. There is a certain difference between the fault signal and the analyzed high-frequency signal. Lots of experts have made discussions and attempts on the problem of determining the wave velocity, but there is still not a brilliant solution. Moreover, these methods have not also considered the different frequency components having different propagation velocities.When a fault happens, the generated traveling wave contains a lot of frequency components, which can be viewed as very important fault information. Because of the frequency-dependent parameters of transmission line, each component has different velocity and attenuation. This phenomenon is defined as traveling-wave dispersion effect [18–20], which distorts the wavefront of traveling wave and lengthens the fall or rise time of the wavefront. Therefore, it is difficult to accurately determine the arrival time and velocity of traveling wave. In [18–20], the authors just analyzed the influence of the dispersion effect on the traveling-wave-based fault location and did not provide the perfect solution. In this paper, a correction method for dispersion effect of traveling wave is proposed. The correction method can shorten the fall or rise time of the wavefront and enhance the singularity of traveling wave. Thus, the accuracy and reliability of fault location can be improved significantly.The paper is organized as follows. Section2 introduces the principle of traveling-wave-based fault location. Section 3 analyzes dispersion effect of traveling wave. Section 4 proposes a correction method for dispersion effect of traveling wave in detail. Section 5 shows the analysis of simulations and results. Finally, Section 6 is the conclusion.
## 2. Principle of Traveling-Wave-Based Fault Location
### 2.1. Traveling-Wave Theory
When a fault happens on a transmission line, the generated voltage and current surges will travel towards both ends of the line. This is equivalent to generating a virtual voltage source at the fault point, as shown in Figure1. The traveling wave is driven by the voltage source and travels towards both ends of the line. Its velocity is close to the velocity of light.Figure 1
Traveling-wave theory.Refraction and reflection will occur at the fault point, ends of lines, and other discontinuity points. Thus, the generated reflected and refracted wave will propagate along the transmission line, as shown in Figure2. The wavefront is detected when the initial traveling wave first reaches M-side and N-side, denoted as M1 and N1. Then, the signal moves to the fault point from M-side (N-side) of the line and returns to the fault point F. The refracted wavefront N1 will reach M-side, marked as M2. And the reflected wavefront M1 will go back to M-side again, marked as M3. M1 and N1 are used for double-ended fault location in this paper.Figure 2
Refection and refraction of traveling wave.
### 2.2. Double-Ended Traveling-Wave-Based Method
In the double-ended traveling-wave-based method, the distance between a fault point and measuring point atM and N terminals can be obtained according to(1)lM=12vtM-tN+L(2)lN=12vtN-tM+L,where lM and lN are the distance away from the measuring points at M and N terminals to the fault point, respectively. L is the total length of the transmission line. v is the propagation velocity of traveling wave.
### 2.3. Phase-to-Modal Transformation
There is an electromagnetic coupling effect between transmission lines. Therefore, by using a modal transformation matrix, we can decompose the traveling wave between phases into several independent modes. In the case of transposed transmission lines, the parameters of transmission line are balanced and the modal transformation can be defined as constant and real [21], such as Karrenbauer transformation and Clarke transformation. In this paper, Karrenbauer transformation is adopted for decoupling the transposed transmission lines. The Karrenbauer transformation is as follows: (3)U0UαUβ=13111-1010-11UAUBUC,where UA, UB, and UC are the phase voltages, respectively. U0 is the ground mode component, and Uα and Uβ are two independent mode components, respectively. Each independent mode component has different velocity and attenuation.For untransposed transmission line, the phase-to-modal transformation matrix is frequency-dependent and unsymmetrical [22]. Fortunately, it is shown that the frequency effect on the modal transformation matrix can be neglected when the frequency is high enough [23]. Therefore, transformation matrix can be approximated as a real and frequency-independent one. In the case of untransposed transmission line, the Karrenbauer transformation is also adopted in this paper.
### 2.4. Wavelet Transform
Wavelet transform (WT) is a common mathematical tool for digital signal processing. WT has been widely applied in lots of fields, such as time series analysis, speech processing, digital image processing, and power system transient analysis [24]. The continuous wavelet transform (CWT) of a signal f(t) is the integral of the product between f(t) and the daughter wavelet. The daughter wavelet is constructed by a mother wavelet, which is dilated with a scale parameter a and translated by b. Therefore, the mother wavelet of CWT can be defined as (5). Hence,(4)CWTft;a,b=∫-∞∞ftΨa,b∗tdt(5)Ψa,b∗t=1aΨt-ba,where Ψa,b(t) is the mother wavelet and ∗ is the complex conjugate operation. a is a scale factor and b is the transition factor. Wavelet transform has a time-frequency resolution. A short window is used for high frequencies while a long window is for low frequencies. Sharp signal transitions create large-amplitude wavelet coefficients. Large wavelet coefficients can detect and measure short high-frequency variations because they have narrow time localization at high frequency. Therefore, WT is very attractive for the analysis of transient signals [25].Wavelet transform is regarded as a filter. The filter of wavelet transform can be employed as follows:(6)gt=∫-∞∞fτht-τdτ(7)Gω=Fω·Hω,where h(t) is the pulse response of the filter and H(ω) is the frequency response of the filter. Comparing (4) and (6), h(t) can be described as(8)ht=a-1/2ψ-ta¯.H ( ω ) is also represented as(9)Hω=a·Ψaω¯.From the analysis above, wavelet transform can be calculated by using Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT). Therefore, wavelet transform of traveling wave can be employed by using the following equation:(10)CWTa=IFFTfωHω,where f(ω) is the calculation result of FFT for traveling wave and CWTa is the result of wavelet transform under the scale a and it is used for fault location. In this paper, the continuous wavelet transform is utilized to extract the wavefront of traveling wave and determine the arrival time of traveling wave.
## 3. Analysis of Traveling-Wave Dispersion Effect
### 3.1. Analysis of Dispersion Effect in Frequency Domain
The traveling wave contains many frequency components. As a matter of fact, the distributed parameters of a transmission line are frequency-dependent, which results in different frequency components having different velocities and attenuation values. This phenomenon is defined as the dispersion effect of the traveling wave [18]. There are two important parameters for the traveling-wave dispersion effect: the propagation coefficient and the phase velocity. The propagation coefficient of the $m$-modal component is calculated as

$$\gamma_m(\omega) = \sqrt{(R_m + j\omega L_m)(G_m + j\omega C_m)}, \tag{11}$$

where $R_m$, $L_m$, $G_m$, and $C_m$ are the $m$-modal resistance, inductance, conductance, and capacitance, respectively, which are frequency-dependent because of the skin effect [26]. These distributed parameters can be calculated with the Carson formulation from the geometrical parameters of the transmission line. It can be observed from Figure 3 that the modal resistance increases and the modal inductance decreases with increasing frequency.

Figure 3
Frequency-dependent parameters of transmission line. (a) Resistance varied with frequency; (b) reactance varied with frequency; (c) attenuation coefficient varied with frequency; (d) propagation velocity varied with frequency.

The propagation coefficient can also be written as

$$\gamma_m(\omega) = \alpha_m(\omega) + j\beta_m(\omega), \tag{12}$$

where $\alpha_m$ is the attenuation coefficient and $\beta_m$ is the phase coefficient; both are frequency-dependent. The phase velocity is obtained from the phase coefficient $\beta_m$ as

$$V_{pm} = \frac{\omega}{\beta_m}. \tag{13}$$

The modal attenuation coefficient and phase velocity as functions of frequency are shown in Figures 3(c) and 3(d); the frequency ranges from 1 Hz to 1 MHz. It can be seen from Figure 3 that the ground-modal parameters are more affected by frequency than the line-modal parameters. The line-modal component is widely used for traveling-wave-based fault location. Nevertheless, the frequency dependence of the parameters should not be ignored if the accuracy of fault location is to be improved. For the line-modal parameters, the attenuation coefficient and the propagation velocity both increase with frequency: the attenuation coefficient changes less at low frequencies than at high frequencies, whereas the propagation velocity changes more at low frequencies than at high frequencies. This frequency dependence spreads out the rise or fall time of the wavefront, which decreases the singularity of the transient traveling wave.
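As a hedged numerical sketch, eqs. (11)-(13) can be evaluated on a frequency grid; the per-kilometer parameter values below are illustrative placeholders, not the paper's line data (which are frequency-dependent via the Carson formulation):

```python
import numpy as np

# Illustrative per-unit-length line-mode parameters (NOT the paper's data);
# units: ohm/km, H/km, S/km, F/km.
R, L, G, C = 0.03, 0.9e-3, 1.0e-9, 12.5e-9

f = np.logspace(0, 6, 200)                             # 1 Hz .. 1 MHz, as in Figure 3
w = 2.0 * np.pi * f
gamma = np.sqrt((R + 1j * w * L) * (G + 1j * w * C))   # eq. (11)
alpha = gamma.real                                     # attenuation coefficient, eq. (12)
beta = gamma.imag                                      # phase coefficient
v_p = w / beta                                         # phase velocity, eq. (13) [km/s]
```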
### 3.2. Analysis of Dispersion Effect in Time Domain
In the frequency domain, the attenuation and velocity of the traveling wave increase with frequency. In the time domain, the arrival times of the different frequency components therefore differ: the high-frequency components arrive at the measuring point first, and the low-frequency components are delayed much more. Consequently, the wavefronts detected at the measuring points are not ideal step signals; their rise or fall times lengthen, as can be seen from Figure 4. In the figure, the signal at the fault point is an ideal step. At the measuring point 150 kilometers away from the fault point, the fall time of the transient traveling wave is larger than at the fault point, and at the measuring point 300 kilometers away it is larger still. The fall time keeps growing with the distance between the measuring point and the fault point, so the singularity of the transient traveling wave gradually decreases. In summary, the dispersion effect of the traveling wave distorts the wavefront and makes it more difficult to detect; it thereby decreases the accuracy and reliability of fault location.

Figure 4
α-mode wavefronts at different measuring points.
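The spreading of the wavefront can be reproduced numerically: observing a fault-point step a distance $d$ away amounts to multiplying its spectrum by $e^{-\gamma(\omega)d}$. The sketch below is a rough demonstration under the same illustrative (constant) parameters as the previous snippet; the real effect is stronger when $R(\omega)$ and $L(\omega)$ follow the Carson formulation, and circular FFT effects near the window edges are ignored:

```python
import numpy as np

fs, n = 1.0e6, 2 ** 14                       # 1 MHz sampling, as in Section 5.1
t = np.arange(n) / fs
step = np.where(t > 1e-3, 1.0, 0.0)          # ideal step at the fault point

w = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)
R, L, G, C = 0.03, 0.9e-3, 1.0e-9, 12.5e-9   # illustrative line data (km units)
gamma = np.sqrt((R + 1j * w * L) * (G + 1j * w * C))

def propagate(x: np.ndarray, d_km: float) -> np.ndarray:
    """Wavefront observed d_km away: spectrum multiplied by exp(-gamma * d)."""
    return np.fft.ifft(np.fft.fft(x) * np.exp(-gamma * d_km)).real

u_150 = propagate(step, 150.0)               # rise/fall time spreads out
u_300 = propagate(step, 300.0)               # ... and even more at 300 km
```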
## 4. Correction Method of Traveling-Wave Dispersion Effect
### 4.1. Correction Method of Traveling-Wave Dispersion Effect
The propagation theory of forward and reverse traveling waves is shown in Figure 5. The directions "forward" and "reverse" are relative: a component that leaves a terminal is defined as "forward," and a component that enters a terminal from the line is defined as "reverse." The length of the transmission line between the M and N terminals is l.

Figure 5
The propagation theory of forward and reverse traveling wave.

According to the propagation equation of a single-conductor line, the voltages $U_M$ and $U_N$ at the M and N terminals are

$$U_N = F_N + B_N = F_N + F_M e^{-\gamma(\omega)l}, \qquad U_M = F_M + B_M = F_M + F_N e^{-\gamma(\omega)l}, \tag{14}$$

where $e^{-\gamma(\omega)l}$ is the propagation function. From (14), each reverse traveling wave can be written as

$$B_M = F_N e^{-\gamma(\omega)l}, \qquad B_N = F_M e^{-\gamma(\omega)l}. \tag{15}$$

From signal and system theory, the reverse traveling wave $B_M$ is the frequency response of the forward traveling wave $F_N$; similarly, $B_N$ is the frequency response of $F_M$. For a lossy transmission line, $\gamma(\omega)$ varies with frequency; it distorts the wavefront of the traveling wave and decreases its singularity. In other words, $\gamma(\omega)$ causes the dispersion effect of the traveling wave. Likewise, the initial surge used by the double-ended fault location method is the frequency response of the signal at the fault point. Suppose that the distance between the fault point and the measuring point is $L$ and that the transient traveling wave at the fault point is $f(\omega)$ in the frequency domain, an ideal step signal in the time domain as shown in Figure 4. From the analysis above, the traveling wave at the measuring point can then be described as $f(\omega)e^{-\gamma(\omega)L}$, that is, it is distorted by $e^{-\gamma(\omega)L}$. Multiplying the traveling wave at the measuring point by a correction function $A(\omega)$ recovers the fault-point signal with a constant delay:

$$f(\omega)\,e^{-\gamma(\omega)L}\,A(\omega) = f(\omega)\,e^{-j\omega\tau}, \tag{16}$$

$$\tau = \frac{L}{\upsilon}, \tag{17}$$

where $\upsilon$ is set to the propagation velocity at 50 Hz. The signal $f(\omega)e^{-j\omega\tau}$ is ideal for traveling-wave-based fault location because it is a step signal with perfect singularity and a constant delay. Consequently, the correction function in the frequency domain is

$$A(\omega) = e^{\gamma(\omega)L - j\omega\tau}. \tag{18}$$

For fault location, the received traveling wave at the measuring point is multiplied by $A(\omega)$ in the frequency domain, which yields the ideal signal $f(\omega)e^{-j\omega\tau}$. In other words, with the correction function, every frequency component has the same velocity and attenuation as the 50 Hz component. When a fault happens on the transmission line, the forward or reverse traveling wave travels towards the measuring points; the distorted wavefront of the initial traveling wave is corrected by the correction function, so all frequency components arrive after the same constant delay and with the same attenuation, which makes it easy to determine the arrival time and wave velocity accurately. With the correction method proposed in this paper, the singularity of the received transient traveling wave is significantly enhanced.
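A minimal sketch of applying (16)-(18) to a recorded wave follows, assuming $\gamma(\omega)$ has already been evaluated on the matching FFT grid (e.g., with the snippet after eq. (13)); argument names are our own:

```python
import numpy as np

def correct_dispersion(u: np.ndarray, fs: float, L_km: float,
                       gamma: np.ndarray, v50: float) -> np.ndarray:
    """Apply the correction function A(w) of eq. (18) to a measured wave `u`.

    gamma: propagation coefficient per km, evaluated on the FFT grid of len(u).
    v50:   propagation velocity at 50 Hz [km/s]; tau = L / v50 per eq. (17).
    """
    n = len(u)
    w = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)
    tau = L_km / v50
    A = np.exp(gamma * L_km - 1j * w * tau)        # eq. (18)
    return np.fft.ifft(np.fft.fft(u) * A).real     # corrected wave, eq. (16)
```

Note that the gain $e^{\alpha(\omega)L}$ in $A(\omega)$ also amplifies high-frequency measurement noise, so in practice some band limiting or coefficient thresholding (cf. Section 5.3.4) may be needed alongside the correction.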
### 4.2. Implementation of the Correction Method
When a fault happens, the three-phase voltages at the M and N measuring terminals are recorded. Firstly, an approximate fault distance is obtained by using the wavelet transform, as described in Sections 2.2 and 2.4; this approximate distance is used for calculating the correction function. Secondly, several discrete frequencies are selected and the propagation function at each frequency is calculated from the R, L, and C parameters by using the Carson formulation, after which the correction function A(ω) is obtained from (18). Thirdly, the corrected traveling wave is obtained according to (16) in the frequency domain, and the fault location procedure is carried out with the wavelet transform again. The specific steps are as follows.

Step 1. The three-phase voltages are obtained when a fault happens.

Step 2. The transient three-phase voltages are decoupled into their independent modal components by using (3); the α-mode traveling waves $u_M(t)$ and $u_N(t)$ are used for fault location.

Step 3. $u_M(\omega)$ and $u_N(\omega)$ of the α-mode traveling waves are obtained by the Fast Fourier Transform (FFT).

Step 4. At scale $a$, the frequency response of the wavelet function, $H(\omega)$, is calculated by using (9).

Step 5. At scale $a$, the wavelet decomposition is obtained by IFFT according to (10):

$$WT_a\{u_M(t)\} = \mathrm{IFFT}\{u_M(\omega)\cdot H(\omega)\}, \qquad WT_a\{u_N(t)\} = \mathrm{IFFT}\{u_N(\omega)\cdot H(\omega)\}. \tag{19}$$

Step 6. Based on the double-ended fault location theory described in Section 2.2, the approximate fault distances $l_M$ and $l_N$ are obtained.

Step 7. The propagation function is calculated with the Carson formulation, and the corresponding correction functions $A_M(\omega)$ and $A_N(\omega)$ are calculated for $u_M(\omega)$ and $u_N(\omega)$, respectively.

Step 8. At scale $a$, the corrected wavelet decomposition is obtained by IFFT:

$$WT_a\{u_M'(t)\} = \mathrm{IFFT}\{u_M(\omega)\cdot A_M(\omega)\cdot H(\omega)\}, \qquad WT_a\{u_N'(t)\} = \mathrm{IFFT}\{u_N(\omega)\cdot A_N(\omega)\cdot H(\omega)\}. \tag{20}$$

Step 9. Based on the double-ended fault location theory, the accurate fault distances $l_M'$ and $l_N'$ are obtained.

From the analysis above, the wavelet-transform-based fault location method is carried out in the frequency domain from Step 3 to Step 6; it is applied again from Step 7 to Step 9, where the traveling wave is corrected in the frequency domain.
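A compact sketch of the two-pass procedure (Steps 1-9) is given below, reusing `cwt_at_scale` and `correct_dispersion` from the earlier snippets. Taking the arrival instant as the position of the wavelet-coefficient modulus maximum follows the comparison made in Section 5.2; the helper names and the use of the double-ended formula with the total velocity at 50 Hz are our own assumptions:

```python
import numpy as np

def arrival_time(u: np.ndarray, fs: float, a: float) -> float:
    """Arrival instant = position of the wavelet-coefficient modulus maximum."""
    coeffs = np.abs(cwt_at_scale(u, fs, a))        # Step 5 / eq. (10)
    return np.argmax(coeffs) / fs

def double_ended_distance(tM: float, tN: float, v: float, L: float) -> float:
    """Distance from terminal M to the fault, cf. the double-ended method."""
    return 0.5 * (v * (tM - tN) + L)

def locate_fault(uM, uN, fs, a, v50, L, gamma):
    # Pass 1 (Steps 1-6): approximate location from the raw alpha-mode waves.
    lM = double_ended_distance(arrival_time(uM, fs, a),
                               arrival_time(uN, fs, a), v50, L)
    # Pass 2 (Steps 7-9): correct the dispersion with A(w), then relocate.
    uM_c = correct_dispersion(uM, fs, lM, gamma, v50)
    uN_c = correct_dispersion(uN, fs, L - lM, gamma, v50)
    return double_ended_distance(arrival_time(uM_c, fs, a),
                                 arrival_time(uN_c, fs, a), v50, L)
```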
## 5. Simulations and Results Analysis
### 5.1. The Model of Simulation
A model of a 500 kV transposed transmission line is constructed in ATP/EMTP, as shown in Figure 6(a); the geometry of the line is shown in Figure 6(b). The classical J. Marti model is adopted to represent the frequency characteristics of the line. A voltage measuring terminal at each transformer acquires the fault voltage data. The total length of the transmission line is 414 km, the fault distance is 100 km, and an A-phase grounding fault is applied.

Figure 6
Model of transmission line. (a) 500 kV transmission line model; (b) geometry structure of transmission line.

In ATP/EMTP, the sampling rate is set to 1 MHz. The traveling-wave data generated in ATP are imported into MATLAB, where the correction method is implemented. In this paper, the α-mode component is used for fault location. Because the velocity depends on frequency, the α-mode velocity of the traveling wave is calculated at 50 Hz, which gives 2.9724 × 10⁵ km/s.
### 5.2. Simulation Waveform Analysis
Several discrete frequencies are selected and the propagation function at each frequency is calculated from the R, L, and C parameters by using the Carson formulation; the correction function A(ω) is then obtained from (18). Once A(ω) is available, it is used to correct the received traveling wave according to (16). The corrected traveling wave is shown in Figure 7. In this case, the transmission line has an A-phase grounding fault, the transition resistance is 10 Ω, and the fault inception angle is 106°.

Figure 7
Corrected waveforms with the distance of 345 km.

As can be seen from Figure 7, the fall time of the corrected traveling wave is shorter than that of the uncorrected one, which enhances the singularity of the traveling wave; that is, the corrected traveling wave shows more singularity than the uncorrected one. The correction method proposed in this paper thus makes the wavefront easier to detect with extraction algorithms such as the wavelet transform and the HHT. A comparison is also carried out in the wavelet domain: as shown in Figure 8, the modulus maximum of the uncorrected wavelet component is smaller than that of the corrected one.

Figure 8
Comparisons of the wavelet coefficients.
### 5.3. Results of Fault Location
In this paper, the fault location accuracy of the traditional method and the proposed method is compared. The traditional method is based on the wavelet transform (Morlet wavelet function) without the correction proposed in this paper. In order to test the applicability of the proposed method, various fault conditions are simulated. In the simulated experiments, the fault location error is used as the performance index:

$$\text{error} = \frac{|l' - l|}{S} \times 100\%, \tag{21}$$

where $l'$ is the calculated fault distance, $l$ is the actual fault distance, and $S$ is the total length of the transmission line.
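For reference, (21) is straightforward to compute; the example values below are illustrative, not taken from the tables:

```python
def location_error_percent(l_calc: float, l_true: float, S: float) -> float:
    """Fault-location error of eq. (21), in percent of total line length S."""
    return abs(l_calc - l_true) / S * 100.0

# e.g. a 0.3 km miss on the 414 km line of Section 5.1 is about 0.07 %
err = location_error_percent(100.3, 100.0, 414.0)
```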
#### 5.3.1. Performance of Different Transition Resistance Values
The transition resistance directly affects the voltage amplitude of the initial traveling wave. Here, the fault inception angle is fixed and the fault location errors are analyzed as the transition resistance varies from 10 Ω to 500 Ω. The distance between the fault point and the measuring terminal is 100 km, the fault type is A-G, and the fault inception angle is 106°. The simulation results are shown in Tables 1 and 2. As can be seen from the tables, the location error of the proposed method is smaller than that of the traditional one regardless of the transition resistance.

Table 1
Error under different transition resistance values (transposed circuit).

| Transition resistance (Ω) | Location (km), traditional | Error (%), traditional | Location (km), proposed | Error (%), proposed |
|---|---|---|---|---|
| 10 | 100.3 | 0.10 | 100.23 | 0.07 |
| 50 | 99.50 | 0.18 | 99.60 | 0.13 |
| 100 | 99.34 | 0.22 | 99.62 | 0.12 |
| 500 | 98.70 | 0.43 | 99.30 | 0.23 |

Table 2
Error under different transition resistance values (untransposed circuit).

| Transition resistance (Ω) | Location (km), traditional | Error (%), traditional | Location (km), proposed | Error (%), proposed |
|---|---|---|---|---|
| 10 | 100.57 | 0.19 | 100.30 | 0.10 |
| 50 | 101.20 | 0.40 | 100.56 | 0.19 |
| 100 | 98.75 | 0.42 | 99.18 | 0.27 |
| 500 | 101.02 | 0.34 | 100.55 | 0.18 |
#### 5.3.2. Performance of Different Inception Angles
The fault inception angle is one of the parameters that most strongly influence the accuracy of traveling-wave-based fault location. In order to test its influence, simulation cases are constructed with the fault inception angle varied from 18° to 90°. Here, the fault type is A-G, the transition resistance is 10 Ω, and the fault distance is 100 km. The results are shown in Tables 3 and 4; the proposed method shows great adaptability to the different inception angles.

Table 3
Error under different inception angles (transposed circuit).

| Fault inception angle | Location (km), traditional | Error (%), traditional | Location (km), proposed | Error (%), proposed |
|---|---|---|---|---|
| 18° | 100.60 | 0.20 | 100.42 | 0.07 |
| 36° | 100.31 | 0.10 | 100.22 | 0.10 |
| 72° | 100.86 | 0.29 | 99.96 | 0.18 |
| 90° | 100.80 | 0.27 | 100.78 | 0.26 |

Table 4
Error under different inception angles (untransposed circuit).

| Fault inception angle | Location (km), traditional | Error (%), traditional | Location (km), proposed | Error (%), proposed |
|---|---|---|---|---|
| 18° | 100.85 | 0.28 | 100.50 | 0.17 |
| 36° | 100.62 | 0.21 | 100.40 | 0.13 |
| 72° | 99.33 | 0.22 | 99.60 | 0.13 |
| 90° | 100.56 | 0.19 | 100.33 | 0.11 |
#### 5.3.3. Performance of Different Fault Types
To investigate the effect of different fault types on the proposed method, several experiments have been carried out; the results are shown in Tables 5 and 6. In these experiments, all transition resistances are 10 Ω, the fault inception angle is 106°, and the fault distance is 100 km.

Table 5
Error under different fault types (transposed circuit).

| Fault type | Location (km), traditional | Error (%), traditional | Location (km), proposed | Error (%), proposed |
|---|---|---|---|---|
| A-G | 100.30 | 0.10 | 100.23 | 0.08 |
| B-G | 99.50 | 0.17 | 99.63 | 0.12 |
| BC | 99.21 | 0.26 | 100.45 | 0.15 |
| BC-G | 100.87 | 0.29 | 100.14 | 0.05 |
| ABC-G | 99.25 | 0.25 | 99.45 | 0.18 |

Table 6
Error under different fault types (untransposed circuit).

| Fault type | Location (km), traditional | Error (%), traditional | Location (km), proposed | Error (%), proposed |
|---|---|---|---|---|
| A-G | 100.57 | 0.19 | 100.30 | 0.10 |
| B-G | 99.54 | 0.15 | 99.64 | 0.12 |
| BC | 101.05 | 0.35 | 100.85 | 0.28 |
| BC-G | 100.78 | 0.26 | 100.23 | 0.08 |
| ABC-G | 100.50 | 0.16 | 100.26 | 0.09 |
#### 5.3.4. Performance of Antinoise
To investigate the antinoise performance under different noise levels, several experiments have been carried out. In these experiments, the transition resistance is 10 Ω with a BC-G fault, the fault inception angle is 106°, and the fault distance is 100 km. Since noise carries no meaningful information about the fault location, it is rational to define a threshold to reject the noise-associated WT coefficients [8]. The simulation results in Table 7 reveal that the added noise has little influence on traveling-wave-based fault location.

Table 7
Error under different noise levels (transposed circuit).

| Noise (dBW) | Location (km), traditional | Error (%), traditional | Location (km), proposed | Error (%), proposed |
|---|---|---|---|---|
| 10 | 100.78 | 0.26 | 100.14 | 0.05 |
| 20 | 100.78 | 0.26 | 100.15 | 0.05 |
| 30 | 99.21 | 0.2633 | 100.30 | 0.10 |
| 40 | 100.76 | 0.2533 | 100.21 | 0.07 |
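A simple hard threshold on the wavelet-coefficient moduli, as suggested above, might look like the following sketch; the median-based noise-scale estimate is a common choice in wavelet denoising and is an illustrative assumption, not the paper's exact criterion:

```python
import numpy as np

def denoise_coeffs(coeffs: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Zero out noise-associated WT coefficients below k * (noise scale).

    The noise scale is estimated robustly from the median absolute
    coefficient; the rule here is illustrative only.
    """
    sigma = np.median(np.abs(coeffs)) / 0.6745   # robust noise estimate
    out = coeffs.copy()
    out[np.abs(out) < k * sigma] = 0.0
    return out
```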
#### 5.3.5. Performance Analysis of Geometry Parameters
The production error of the towers supporting the transmission line is very low (within 1 cm). The geometrical parameters of corner towers and substation towers sometimes differ from those of the other towers along the line, but their number is very small and they can be neglected. Therefore, for the theoretical calculation, we assume that the relative position between the line conductors is unchanged. On the other hand, the height of the line conductors sometimes increases where the transmission line passes over a road, village, city, mountain, and so forth. Such changes are very complicated, and it is very difficult to model the real transmission line exactly. To analyze the applicability of the proposed method, the length of the line supported by towers of modified height is set to 20 km. In these experiments, the transition resistance is 10 Ω with an A-G fault, the fault inception angle is 106°, and the fault distance is 100 km. The simulation results in Table 8 reveal that the proposed method also improves the accuracy of fault location when the tower height varies.

Table 8
Error under different tower heights (transposed circuit).

| Tower height change (m) | Location (km), traditional | Error (%), traditional | Location (km), proposed | Error (%), proposed |
|---|---|---|---|---|
| 0 | 100.60 | 0.20 | 100.14 | 0.05 |
| +5 | 100.47 | 0.16 | 100.22 | 0.07 |
| +10 | 99.18 | 0.27 | 100.45 | 0.15 |
| +20 | 100.80 | 0.27 | 100.55 | 0.18 |
## 6. Conclusions
In this paper, the dispersion characteristic of the traveling wave is analyzed in the time and frequency domains. When a traveling wave propagates along a transmission line, the fall or rise time of the wavefront lengthens as the propagation distance increases, so the singularity of the transient wavefront decreases. A novel double-ended fault location method has been proposed to overcome this dispersion effect: a correction algorithm compensates the dispersion and enhances the singularity of the transient traveling wave. The proposed method is tested under various conditions, such as different fault distances, transition resistances, and fault inception angles. The simulation experiments demonstrate that the proposed method outperforms the traditional traveling-wave-based fault location method, and that it is suitable for both transposed and untransposed transmission lines. These advantages show that the proposed method is practical. However, several other factors affect the slope of the traveling wave, such as the impedance characteristics of the traveling-wave measurement equipment and shunt reactors connected to the line; future work will consider these factors.
---
*Source: 1019591-2017-02-08.xml* | 2017 |
# Adaptive Backstepping Control for Longitudinal Dynamics of Hypersonic Vehicle Subject to Actuator Saturation and Disturbance
**Authors:** Zhiqiang Jia; Tianya Li; Kunfeng Lu
**Journal:** Mathematical Problems in Engineering
(2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1019621
---
## Abstract
In this paper, an adaptive backstepping control strategy is presented to solve the longitudinal control problem for a hypersonic vehicle (HSV) subject to actuator saturation and disturbances. Small perturbation linearization transforms the dynamics into a second-order system at each trimming point, with a total disturbance including unmodeled dynamics, parametric uncertainties, and external disturbances. This disturbance can be estimated and compensated for by an extended state observer (ESO), and thus the system is decoupled. To deal with the actuator saturation and the wide flight envelope, an adaptive backstepping control strategy is designed. A rigorous proof of finite-time convergence is provided by applying the Lyapunov method. The effectiveness of the proposed control scheme is verified in simulations.
---
## Body
## 1. Introduction
Attitude control is a typical nonlinear control problem and is very important in engineering practice for spacecraft, missiles, HSVs, and so on. Under the small perturbation assumption, the flight dynamics of HSVs can be linearized around trimming points, and classical control methods such as PID and feedback linearization can then be employed to design the controller [1–3]. A hypersonic vehicle is required to work within a large flight envelope to meet the challenge of highly maneuverable targets in all probable engagements [4]. Because of this wide envelope, a great number of operating points must be carefully chosen to cover it, and an effective controller must be designed for each point. The designed control algorithms should guarantee stability with superior control performance and robustness throughout the flight envelope [5, 6]. However, the equations of motion that govern the behavior of an HSV are nonlinear and time-varying, which makes flight control system design a complex and difficult problem. Considering the uncertainties in aerodynamic parameters, the nonlinearities, and the measurement noises, guaranteeing favorable control performance and robustness throughout the entire flight envelope is challenging.

Due to the practical importance of HSVs, attitude control has attracted extensive attention and a great number of control methods have been designed to improve control performance. In [7–10], a backstepping procedure is designed for an angle-of-attack autopilot. Furthermore, to deal with the explosion of complexity associated with the backstepping method, an improved dynamic surface control method has been studied in [11] for the longitudinal dynamics of HSVs. Although the above literature obtains meaningful results on HSV control, these algorithms have low robustness under uncertainties. Since the sliding mode control (SMC) technique provides robustness to internal parameter variations and extraneous disturbances satisfying the matching condition, it has been introduced in [1, 10, 12, 13] to address the velocity and altitude tracking control of hypersonic vehicles. The designed laws guarantee that both velocity and altitude track their reference trajectories quickly under both uncertainties and external disturbances, as confirmed by the simulation results. However, the above-mentioned SMC methods inherently suffer from the chattering problem, an undesired phenomenon in practical systems. In [14], H∞ control with gain scheduling and dynamic inversion (DI) methods are designed for aircraft; time-domain and frequency-domain analyses show that these methods possess strong robustness and high control performance. Based on DI methods, the feedback linearization technique is used to design attitude controllers for HSVs [15–17], where the nonlinear dynamics of the system is converted into a chain of integrators in the design of the inner-loop nonlinear control law. Although the DI method obtains meaningful results on HSV control, it is sensitive to modelling errors [18–22]. Therefore, a model-free framework for the DI strategy is urgently needed. Based on the feedback linearization approach, Han proposed a novel philosophy, active disturbance rejection control (ADRC), which does not rely on a refined dynamic model [23].
A linear ADRC controller was then presented for practical applications, since it simplifies the parameter tuning process [24]. The ESO is the core concept of ADRC and has demonstrated its effectiveness in many fields. In [25], an ESO-based control scheme is presented to handle the initial turning problem for a vertically launched air-defense missile. Based on the work of [25], Tian et al. [26] apply ADRC to a generic nonlinear uncertain system with mismatched disturbances and devise a robust output feedback autopilot for an HSV. In [27], a control method combining ADRC and optimal control is discussed for large-aircraft landing attitude control under external disturbances; the results show that the ESO technique can reject unknown disturbances and model uncertainties. In practice, actuator saturation affects virtually all control systems: it may degrade performance and even induce instability. The hypersonic vehicle dynamics is inherently nonlinear, and the number of available results that consider actuator saturation in the design and analysis of hypersonic vehicles is still limited [13, 28]. Actuator saturation effectively narrows the usable flight envelope and deteriorates the control performance, so an effective control method should be investigated for HSVs.

This paper focuses on the longitudinal control problem of HSVs subject to actuator saturation and disturbances within a large envelope. The main contributions of this paper are threefold:

(1) A linear ESO is used to estimate and compensate for the total disturbance of the small-perturbation-linearized longitudinal dynamics of the HSV, which consists of unmodeled dynamics, parametric uncertainties, and external disturbances.

(2) An adaptive backstepping control law is designed to deal with the large envelope and the actuator saturation problem, and a rigorous proof of stability is provided by employing Lyapunov theory.

(3) The control structure accounts for the complex aerodynamics and ensures tracking performance with very little model information, which makes it suitable for engineering practice.

The remainder of this paper is organized as follows: the longitudinal flight model of the HSV and the problem formulation are presented in the "Preliminaries" section. In the "Control Strategy" section, an ESO-based adaptive backstepping controller is designed and the closed-loop convergence is analyzed. The "Simulation Results" section gives simulation results and some discussions. Finally, conclusions are drawn in the "Conclusion" section.
## 2. Preliminaries
### 2.1. Longitudinal Dynamic Model
This study is concerned with the longitudinal motion of the vehicle. It is assumed that there is no sideslip, no lateral motion, and no roll for the hypersonic vehicle. As shown in Figure 1, the longitudinal dynamic model for a generic HSV is as follows [6]:

$$m\frac{dV}{dt} = -X - mg\sin\theta, \quad mV\frac{d\theta}{dt} = Y - mg\cos\theta, \quad J_z\frac{d\omega_z}{dt} = M_z + M_{gz}, \quad \frac{d\phi}{dt} = \omega_z, \quad \alpha = \phi - \theta, \tag{1}$$

where the drag force, lift force, and pitch moment of the HSV are

$$X = \frac{1}{2}\rho V^2 S\, C_X(\alpha, V, \delta_z), \quad Y = \frac{1}{2}\rho V^2 S\, C_Y(\alpha, V, \delta_z), \quad M_z = \frac{1}{2}\rho V^2 S \bar{c}\, C_{Mz}(\alpha, V, \delta_z). \tag{2}$$

The parameters $\rho$, $S$, and $\bar{c}$ represent the air density, the reference area, and the mean aerodynamic chord, and $C_X$, $C_Y$, and $C_{Mz}$ represent the drag, lift, and moment coefficients, respectively.

Figure 1
Body diagram of an HSV.

Since $X$, $Y$, and $M_z$ are all related to $\delta_z$, the system is strongly coupled. The control objective of this paper is to find a feedback control $\delta_z$ such that the pitch angle tracks the desired pitch-angle trajectory closely. Therefore, we consider the pitch dynamics, and the small perturbation method is applied. Introducing the small perturbation assumption, ignoring second-order and higher-order terms and secondary factors of the aerodynamic forces and moment, and linearizing the equations yields the perturbation equations [6]:

$$\begin{aligned}
\frac{d\Delta V}{dt} &= -\frac{X_V}{m}\Delta V - \frac{X_\alpha}{m}\Delta\alpha - g\cos\theta\,\Delta\theta,\\
\frac{d\Delta\theta}{dt} &= \frac{Y_V}{mV}\Delta V + \frac{Y_\alpha}{mV}\Delta\alpha + \frac{g\sin\theta}{V}\Delta\theta + \frac{Y_{\delta_z}}{mV}\Delta\delta_z,\\
\frac{d\Delta\omega_z}{dt} &= \frac{M_{zV}}{J_z}\Delta V + \frac{M_{z\alpha}}{J_z}\Delta\alpha + \frac{M_{z\omega_z}}{J_z}\Delta\omega_z + \frac{M_{z\delta_z}}{J_z}\Delta\delta_z + \frac{M_{gz}}{J_z},\\
\frac{d\Delta\phi}{dt} &= \Delta\omega_z, \qquad \Delta\alpha = \Delta\phi - \Delta\theta,
\end{aligned} \tag{3}$$

where the aerodynamic coefficient $A_b$ stands for $\partial A/\partial b$ for $A \in \{X, Y, M_z\}$ and $b \in \{V, \alpha, \omega_z, \delta_z\}$ and can be obtained from prior knowledge. Assuming the velocity to be constant over a short time, (3) can be simplified to

$$\Delta\ddot{\phi} - a_1\Delta\dot{\phi} - a_2\Delta\alpha - a_3\Delta\dot{\alpha} - a_4\Delta\delta_z = \frac{M_{gz}}{J_z}, \tag{4}$$

$$\Delta\dot{\theta} - a_5\Delta\theta - a_6\Delta\alpha = a_7\Delta\delta_z, \tag{5}$$

$$\Delta\alpha = \Delta\phi - \Delta\theta, \tag{6}$$

where $a_1 = M_{z\omega_z}/J_z$, $a_2 = M_{z\alpha}/J_z$, $a_3 = M_{z\dot{\alpha}}/J_z$, $a_4 = M_{z\delta_z}/J_z$, $a_5 = g\sin\theta/V$, $a_6 = Y_\alpha/mV$, and $a_7 = Y_{\delta_z}/mV$. Then the second-order pitch dynamics (4) can be rewritten as

$$\ddot{\phi}(t) = f(\cdot) + a_4\,\sigma(\delta_z)\,\delta_z(t), \tag{7}$$

with the total disturbance $f(\cdot)$ defined as

$$f(\cdot) = \ddot{\phi}_r + a_1\Delta\dot{\phi} + a_2\Delta\alpha + a_3\Delta\dot{\alpha} - a_4\delta_{z0} + \frac{M_{gz}}{J_z}, \tag{8}$$

where $\phi_r$ is the reference pitch angle and $\delta_{z0}$ is the deflection angle calculated for the nominal system with reference $\phi_r$, which is not required to be known for the subsequent controller design. The function $\sigma(\cdot)$ is the saturation scaling function of the deflection angle:

$$\sigma(\delta_z) = \begin{cases} 1, & |\delta_z(t)| \leq \bar{\delta}_z, \\ \dfrac{\bar{\delta}_z}{|\delta_z(t)|}, & |\delta_z(t)| > \bar{\delta}_z, \end{cases} \tag{9}$$

where $\bar{\delta}_z$ is the maximum allowable value of the deflection. Obviously, $\sigma \in (0, 1]$. An assumption for the disturbance is now given.

Assumption 1.
The additive disturbance moment $M_{gz}$ is differentiable, and its derivative is bounded.

Remark 2.

As shown in (8), the total disturbance $f(\cdot)$ contains coupling terms and external disturbances. According to Assumption 1, $f(\cdot)$ is differentiable, which is necessary for the subsequent discussion. In applying small perturbation linearization, high-order dynamics and secondary factors are omitted, which introduces unmodeled dynamics. Note that the unmodeled dynamics caused by the linearization and the parametric uncertainties can also be included in the total disturbance; this is not written out in the equation for simplicity.
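To make the plant model concrete, the following is a minimal Python sketch of the saturated pitch dynamics (7) and the saturation factor (9). The coefficient values and the disturbance signal are illustrative placeholders, not values taken from the paper.

```python
import numpy as np

def sigma(delta_z, delta_bar):
    """Saturation factor of eq. (9): sigma(delta_z) * delta_z is the
    deflection actually applied; sigma lies in (0, 1]."""
    if abs(delta_z) <= delta_bar:
        return 1.0
    return delta_bar / abs(delta_z)

def pitch_dynamics(state, delta_z, f_total, a4, delta_bar):
    """Right-hand side of the second-order pitch dynamics (7):
    x1 = phi, x2 = phi_dot, x2_dot = f(.) + a4 * sigma(delta_z) * delta_z."""
    x1, x2 = state
    x2_dot = f_total + a4 * sigma(delta_z, delta_bar) * delta_z
    return np.array([x2, x2_dot])
```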
### 2.2. Extended State Observer
The ESO is a special state observer that estimates both the system states and an extended state, which consists of the unknown dynamics and the external disturbance of the system.

Consider a nonlinear system with uncertainties and external disturbances:

$$x^{(n)}(t) = f_1\big(x(t), \dot{x}(t), \ldots, x^{(n-1)}(t)\big) + d_1(t) + bu(t) \tag{10}$$

where $f_1(x(t), \dot{x}(t), \ldots, x^{(n-1)}(t))$ is an unknown function, $d_1(t)$ is the unknown external disturbance, $u(t)$ is the control input, and $b$ is a known constant.

For clarity, system (10) can be rewritten as

$$
\begin{aligned}
\dot{x}_1 &= x_2 \\
\dot{x}_2 &= x_3 \\
&\ \ \vdots \\
\dot{x}_n &= x_{n+1} + bu \\
\dot{x}_{n+1} &= h(t) \\
y &= x_1
\end{aligned}
\tag{11}
$$

where

$$x_{n+1} = f_1\big(x(t), \dot{x}(t), \ldots, x^{(n-1)}(t)\big) + d_1(t) \tag{12}$$

is the extended state.

Then, an ESO can be constructed as follows:

$$
\begin{aligned}
\dot{\hat{x}}_1 &= \hat{x}_2 - l_1(\hat{x}_1 - y) \\
&\ \ \vdots \\
\dot{\hat{x}}_n &= \hat{x}_{n+1} - l_n(\hat{x}_1 - y) + bu \\
\dot{\hat{x}}_{n+1} &= -l_{n+1}(\hat{x}_1 - y)
\end{aligned}
\tag{13}
$$

where $\hat{x}_i$ are the observer outputs and $l_i$ are positive observer gains, $i = 1, 2, \ldots, n+1$.

Note that the extended state is the total disturbance, which contains the unknown dynamics and the external disturbances. An appropriately designed observer provides comparatively accurate estimates that can be compensated for in the control input, which improves the robustness. A more detailed description of the principle of the ESO can be found in [23].
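As an illustration of the observer structure (13), here is a minimal Python sketch of one forward-Euler integration step of a general linear ESO; the state dimension, gain values, and step size are placeholders chosen for the example.

```python
import numpy as np

def eso_step(x_hat, y, u, b, gains, dt):
    """One forward-Euler step of the linear ESO in eq. (13).

    x_hat : current estimates [x1_hat, ..., xn_hat, x_{n+1}_hat]
    y     : measured output x1
    u     : control input
    b     : known input gain
    gains : observer gains [l1, ..., l_{n+1}]
    """
    n_plus_1 = len(x_hat)
    err = x_hat[0] - y                 # innovation (x1_hat - y)
    dx = np.zeros(n_plus_1)
    for i in range(n_plus_1 - 1):
        dx[i] = x_hat[i + 1] - gains[i] * err
    dx[n_plus_1 - 2] += b * u          # bu enters the n-th state equation
    dx[-1] = -gains[-1] * err          # extended-state update
    return x_hat + dt * dx
```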
## 3. Control Strategy
In this section, an ESO-based pitch angle controller for the HSV is devised. The adaptive backstepping technique is combined with the ESO to reject disturbances and unmodeled dynamics. In the first stage, the estimation of the total disturbance, including external disturbances and unmodeled dynamics, is discussed. In the second stage, an adaptive backstepping controller is designed, in which the disturbance estimate is used as a time-varying parameter to improve robustness.
### 3.1. Lumped Disturbance Estimation
In order to improve the robustness, an extended state observer is used to estimate and compensate for the disturbance. Let $x_1 = \phi$, $x_2 = \dot\phi$, and regard the total disturbance $f(\cdot)$ as an extended state $x_3$; then the longitudinal system (7) can be expressed as follows:

$$
\begin{aligned}
\dot{x}_1 &= x_2 \\
\dot{x}_2 &= x_3 + a_4\sigma(\delta_z)\delta_z \\
\dot{x}_3 &= h(\cdot)
\end{aligned}
\tag{14}
$$

where $h(\cdot) = \dot{f}(\cdot)$ is bounded according to the discussion in Remark 2.

Consider the third-order linear ESO as follows:

$$
\begin{aligned}
\dot{\hat{x}}_1 &= \hat{x}_2 - 3\omega_o(\hat{x}_1 - x_1) \\
\dot{\hat{x}}_2 &= \hat{x}_3 - 3\omega_o^2(\hat{x}_1 - x_1) + a_4\sigma(\delta_z)\delta_z \\
\dot{\hat{x}}_3 &= -\omega_o^3(\hat{x}_1 - x_1)
\end{aligned}
\tag{15}
$$

where $\hat{x}_1$, $\hat{x}_2$, and $\hat{x}_3$ are the observer outputs and $\omega_o > 0$ is the bandwidth of the ESO. Note that little model information is required for the observer design except $a_4$.

It is obvious that the characteristic polynomial is Hurwitz, and the observer is bounded-input bounded-output (BIBO) stable. Defining the estimation errors as $\tilde{x}_i = x_i - \hat{x}_i$, $i = 1, 2, 3$, the observer error dynamics are as follows:

$$
\begin{aligned}
\dot{\tilde{x}}_1 &= \tilde{x}_2 - 3\omega_o\tilde{x}_1 \\
\dot{\tilde{x}}_2 &= \tilde{x}_3 - 3\omega_o^2\tilde{x}_1 \\
\dot{\tilde{x}}_3 &= h(\cdot) - \omega_o^3\tilde{x}_1
\end{aligned}
\tag{16}
$$

Let $\varepsilon_i = \tilde{x}_i/\omega_o^{i-1}$, $i = 1, 2, 3$; then (16) can be written compactly as

$$\dot{\varepsilon} = \omega_o A\varepsilon + B\frac{h(\cdot)}{\omega_o^2} \tag{17}$$

where $\varepsilon = \begin{pmatrix}\varepsilon_1 & \varepsilon_2 & \varepsilon_3\end{pmatrix}^{\mathrm T}$, $B = \begin{pmatrix}0 & 0 & 1\end{pmatrix}^{\mathrm T}$, and

$$A = \begin{pmatrix} -3 & 1 & 0 \\ -3 & 0 & 1 \\ -1 & 0 & 0 \end{pmatrix} \tag{18}$$

is Hurwitz.

Lemma 3 (see [29]).
Assuming that $h(\cdot)$ is bounded, there exist constants $\rho_i > 0$ and a finite $T_1 > 0$ such that

$$|\tilde{x}_i(t)| \leqslant \rho_i, \quad \rho_i = O\!\left(\frac{1}{\omega_o^{k_1}}\right), \quad i = 1, 2, 3, \quad \forall t \geqslant T_1 > 0 \tag{19}$$

for some positive integer $k_1$, where $O(\cdot)$ denotes an infinitesimal of the indicated order.

Remark 4.
Since $A$ is Hurwitz, the error system (17) is BIBO stable. Therefore, the only requirement of the ESO is the boundedness of $h(\cdot)$. Lemma 3 also indicates a good steady-state estimation performance of the ESO: the estimation error can be reduced to a sufficiently small range within a finite time $T_1$ by increasing the bandwidth $\omega_o$. Since the disturbance $f(\cdot)$ is largely compensated for and the actual disturbance acting on the system becomes $\tilde{x}_i(t)$, the steady-state control performance is likely to be improved.

With a well-tuned ESO, the total disturbance $f(\cdot)$ can be actively estimated by $\hat{x}_3$. For simplicity, we denote $\hat{f} = \hat{x}_3$. Since the observer (15) is BIBO stable, the estimation error of $f(\cdot)$ is bounded. Defining $\eta$ as the upper bound of the estimation error, we have

$$|\hat{f} - f| \leqslant \eta \tag{20}$$
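For illustration, the bandwidth-parameterized observer (15) can be sketched in Python as follows; the value of the plant coefficient $a_4$ and the step size are placeholders for the example.

```python
import numpy as np

def eso3_step(x_hat, phi_meas, defl_applied, a4, omega_o, dt):
    """One forward-Euler step of the third-order linear ESO in eq. (15).

    x_hat        : [phi_hat, phi_dot_hat, f_hat] current estimates
    phi_meas     : measured pitch angle x1
    defl_applied : sigma(delta_z) * delta_z, the saturated deflection
    a4           : known control-effectiveness coefficient
    omega_o      : observer bandwidth (all observer poles at -omega_o)
    """
    err = x_hat[0] - phi_meas
    dx1 = x_hat[1] - 3.0 * omega_o * err
    dx2 = x_hat[2] - 3.0 * omega_o**2 * err + a4 * defl_applied
    dx3 = -omega_o**3 * err
    return x_hat + dt * np.array([dx1, dx2, dx3])
```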
### 3.2. Adaptive Backstepping Control
The first step of the backstepping design is to define the tracking error $e_1 = \phi_r - x_1$ and a virtual input

$$\chi_1 = c_1 e_1 + \dot\phi_r \tag{21}$$

where $c_1$ is a positive constant.

Then define the angular velocity tracking error as

$$e_2 = \chi_1 - x_2 \tag{22}$$

and thus

$$\dot{e}_1 = \dot\phi_r - x_2 = -c_1 e_1 + e_2 \tag{23}$$

The derivative of the virtual input is

$$\dot{\chi}_1 = c_1\dot{e}_1 + \ddot\phi_r = -c_1^2 e_1 + c_1 e_2 + \ddot\phi_r \tag{24}$$

Design an adaptive controller as

$$\delta_z = \frac{c}{a_4}\hat{\gamma}\big(|\dot{\chi}_1 - \hat{f}| + \hat{\eta}\big)e_2 + \frac{c_2}{a_4}e_2 \tag{25}$$

with adaptation update laws

$$
\begin{aligned}
\dot{\hat{\eta}} &= \lambda_1 c e_2^2 \\
\dot{\hat{\gamma}} &= \lambda_2 c e_2^2 \hat{\gamma}^3\big(|\dot{\chi}_1 - \hat{f}| + \hat{\eta}\big)
\end{aligned}
\tag{26}
$$

where $c$, $c_2$, $\lambda_1$, and $\lambda_2$ are design parameters.

The proposed ESO-based adaptive backstepping control structure for the HSV system is shown in Figure 2.
Figure 2. Block diagram of the proposed ESO-based adaptive backstepping control structure for the HSV system.
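The control law (25) and adaptation laws (26) translate almost line for line into code. The following Python sketch performs one controller step under assumed gain values; the variable names are chosen for readability and are not from the paper.

```python
def adaptive_backstepping_step(phi_r, dphi_r, ddphi_r, x1, x2, f_hat,
                               eta_hat, gamma_hat, gains, dt):
    """One step of the adaptive backstepping law (25)-(26).

    gains = (c1, c2, c, lam1, lam2, a4);
    returns (delta_z, eta_hat, gamma_hat).
    """
    c1, c2, c, lam1, lam2, a4 = gains
    e1 = phi_r - x1                          # pitch tracking error
    chi1 = c1 * e1 + dphi_r                  # virtual input, eq. (21)
    e2 = chi1 - x2                           # angular-rate error, eq. (22)
    dchi1 = -c1**2 * e1 + c1 * e2 + ddphi_r  # eq. (24)
    theta = abs(dchi1 - f_hat) + eta_hat     # |chi1_dot - f_hat| + eta_hat
    delta_z = (c / a4) * gamma_hat * theta * e2 + (c2 / a4) * e2  # eq. (25)
    # adaptation laws, eq. (26), integrated by forward Euler
    eta_hat += dt * lam1 * c * e2**2
    gamma_hat += dt * lam2 * c * e2**2 * gamma_hat**3 * theta
    return delta_z, eta_hat, gamma_hat
```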
### 3.3. Stability Analysis
The convergence of the tracking errors is established by Theorems 5 and 6.

Theorem 5.

For system (1) controlled by (25) and (26), where the initial estimates satisfy $\frac{1}{4cc_1} < \hat\eta(0) < \eta$, the error $e_2$ converges into a small neighborhood of the origin, $|e_2| < \epsilon = \left(c - \frac{1}{4c_1\hat\eta(0)}\right)^{-1}$, within a finite time $t_\epsilon > 0$ and is guaranteed to be uniformly ultimately bounded (UUB) for $t \geqslant t_\epsilon$.

Proof.
Consider the Lyapunov function

$$V_1 = \frac{1}{2}e_1^2 + \frac{1}{2}e_2^2 + \frac{1}{2\lambda_1}\tilde\eta^2 + \frac{1}{2\lambda_2}\tilde\gamma^2 \tag{27}$$

where $\tilde\eta = \hat\eta - \eta$, $\tilde\gamma = \hat\gamma^{-1} - \gamma$, and $\gamma$ is a positive constant satisfying $0 < \gamma \leqslant \sigma(\delta_z) \leqslant 1$.

Considering the dynamics (7) and the proposed control law (25), the derivative of $V_1$ can be derived as

$$
\begin{aligned}
\dot{V}_1 &= -c_1e_1^2 + e_1e_2 + e_2\dot{e}_2 + \frac{1}{\lambda_1}(\hat\eta - \eta)\dot{\hat\eta} - \frac{1}{\lambda_2}(\hat\gamma^{-1} - \gamma)\hat\gamma^{-2}\dot{\hat\gamma} \\
&= -c_1e_1^2 + e_1e_2 + e_2\left[\dot\chi_1 - f - a_4\sigma(\delta_z)\left(\frac{c}{a_4}\hat\gamma\big(|\dot\chi_1 - \hat f| + \hat\eta\big)e_2 + \frac{c_2}{a_4}e_2\right)\right] \\
&\quad + \frac{1}{\lambda_1}(\hat\eta - \eta)\dot{\hat\eta} - \frac{1}{\lambda_2}(\hat\gamma^{-1} - \gamma)\hat\gamma^{-2}\dot{\hat\gamma} \\
&= -c_1e_1^2 + e_1e_2 + e_2\big(\dot\chi_1 - \hat f + \hat f - f\big) - ce_2^2\hat\gamma\sigma(\delta_z)\big(|\dot\chi_1 - \hat f| + \hat\eta\big) - c_2\sigma(\delta_z)e_2^2 \\
&\quad + \frac{1}{\lambda_1}(\hat\eta - \eta)\dot{\hat\eta} - \frac{1}{\lambda_2}(\hat\gamma^{-1} - \gamma)\hat\gamma^{-2}\dot{\hat\gamma}
\end{aligned}
\tag{28}
$$

Let $\vartheta = |\dot\chi_1 - \hat f|$; then

$$\dot{V}_1 \leqslant -c_1e_1^2 + e_1e_2 + |e_2|(\vartheta + \eta) - ce_2^2\gamma\hat\gamma(\vartheta + \hat\eta) - c_2\gamma e_2^2 + \frac{1}{\lambda_1}(\hat\eta - \eta)\dot{\hat\eta} - \frac{1}{\lambda_2}(\hat\gamma^{-1} - \gamma)\hat\gamma^{-2}\dot{\hat\gamma} \tag{29}$$

Substituting (26) into (29) yields

$$
\begin{aligned}
\dot{V}_1 &\leqslant -c_1e_1^2 + e_1e_2 + |e_2|(\vartheta + \eta) - ce_2^2\gamma\hat\gamma(\vartheta + \hat\eta) - c_2\gamma e_2^2 + ce_2^2(\hat\eta - \eta) - ce_2^2(\vartheta + \hat\eta)(1 - \gamma\hat\gamma) \\
&= -c_1e_1^2 + e_1e_2 + |e_2|(\vartheta + \eta) - c_2\gamma e_2^2 - ce_2^2(\vartheta + \eta) \\
&= -c_1\left(e_1 - \frac{1}{2c_1}e_2\right)^2 + \frac{e_2^2}{4c_1} - c_2\gamma e_2^2 - ce_2^2(\vartheta + \eta) + |e_2|(\vartheta + \eta) \\
&= -c_1\left(e_1 - \frac{1}{2c_1}e_2\right)^2 - c_2\gamma e_2^2 - (\vartheta + \eta)\left[\left(c - \frac{1}{4c_1(\vartheta + \eta)}\right)e_2^2 - |e_2|\right]
\end{aligned}
\tag{30}
$$

Provided that $0 < \hat\eta(0) < \eta$, we obtain $0 < \hat\eta(0) < \vartheta + \eta$, that is,

$$\frac{1}{\vartheta + \eta} < \frac{1}{\hat\eta(0)} \tag{31}$$

Therefore,

$$\dot{V}_1 \leqslant -c_1\left(e_1 - \frac{1}{2c_1}e_2\right)^2 - c_2\gamma e_2^2 - (\vartheta + \eta)\left[\left(c - \frac{1}{4c_1\hat\eta(0)}\right)e_2^2 - |e_2|\right] \tag{32}$$

Since $\frac{1}{4cc_1} < \hat\eta(0)$, we have $\epsilon = \left(c - \frac{1}{4c_1\hat\eta(0)}\right)^{-1} > 0$. If $|e_2| > \epsilon$,

$$\dot{V}_1 \leqslant -c_1\left(e_1 - \frac{1}{2c_1}e_2\right)^2 - c_2\gamma e_2^2 \tag{33}$$

which means that $V_1$ is decreasing and bounded, and the error $e_2$ converges into a small neighborhood of the origin, i.e., $|e_2| < \epsilon$, within a finite time $t_\epsilon > 0$. Even though the tracking error enters the region $|e_2| < \epsilon$ within a finite time, it may move in and out of it, since the negativity of $\dot{V}_1$ cannot be guaranteed inside the region. However, whenever it moves out, $\dot{V}_1$ becomes negative again and the error is driven back into the region. Therefore, $e_2$ is guaranteed to be UUB for $t \geqslant t_\epsilon$.

Theorem 6.
For system (1) controlled by (25) and (26), output tracking is accomplished with the virtual control input (21).

Proof.

To establish the reference state tracking, the Lyapunov function is chosen as

$$V_2 = \frac{1}{2}e_1^2 \tag{34}$$

The derivative of $V_2$, using (23), equals

$$\dot{V}_2 = -c_1e_1^2 + e_1e_2 \tag{35}$$

According to Theorem 5, $e_2$ is bounded. Thus, by selecting a positive $c_1$ large enough, we obtain $\dot{V}_2 < 0$ whenever $V_2$ is outside a certain bounded region. Therefore, $e_1$ is also UUB, by which the tracking of $\phi_r$ by $x_1$ is guaranteed. Note that, from (35), it is clear that $V_2$ will not converge to zero due to the existence of $e_2$. This also implies that the tracking error $e_1$ can only converge into a neighborhood of the origin and remain within it.

Remark 7.
Small perturbation linearization is a typical engineering method for HSV attitude control, and nominal parameter values are usually used in the design process, which introduces structural and parametric uncertainties. Applications indicate that the ESO can estimate the total disturbance well even if $a_4$ is not known precisely. Thus the proposed method needs only a little model information, and the adaptive law guarantees a smooth tracking performance within a wide flight envelope, which simplifies the design process.
## 4. Simulation Results
In this section, simulation results for an HSV are provided to verify the feasibility and efficiency of the proposed control scheme. The reference trajectory used in the simulations is a typical reentry-segment trajectory. The longitudinal dynamics (1) are simulated as the real system, while the controller design procedure is based on the linearized model (3). The simulations are run for 150 seconds at 100 samples per second. The controller gains are $c_1 = 10$, $c_2 = 20$, $c = 3$, $\lambda_1 = 0.1$, and $\lambda_2 = 0.1$, and the ESO gains are tuned by the bandwidth method with bandwidth $\omega_o = 5$. The initial values of the adaptive gains are $\hat\gamma(0) = 0.1$ and $\hat\eta(0) = 0.1$, and those of the estimates are $\hat{x}_1(0) = \hat{x}_2(0) = \hat{x}_3(0) = 0$. The deflection angle $\delta_z$ is bounded by $[-30^\circ, 30^\circ]$.

The control performance using an intelligent ADRC controller is also given to show the superiority of the proposed method. In the ADRC controller design, the control law is as follows:

$$\delta_z = k_p(\phi_r - \phi) + k_d(\dot\phi_r - \dot\phi) - \hat{f} \tag{36}$$

where $\hat{f}$ is the estimate of the total disturbance by an ESO with the same bandwidth $\omega_o = 5$; the control gains $k_p$ and $k_d$ at several feature points are optimized by a genetic algorithm (GA) according to the nominal linearized model and then interpolated, in order to achieve smooth and quick tracking. The control gains at the feature points are shown in Table 1.
Table 1. Control gains at feature points.
| Time (s) | $k_p$ | $k_d$ |
|---|---|---|
| 0 | 71.35 | 103.16 |
| 35 | 96.30 | 19.25 |
| 45 | 60.58 | 9.03 |
| 55 | 73.46 | 10.21 |
| 75 | 62.73 | 11.69 |
| 100 | 67.80 | 15.91 |
| 150 | 15.04 | 19.12 |

To begin with, a set of comparative simulations of the nominal longitudinal dynamics is studied, with no external disturbances or parametric uncertainties. Figure 3 shows the pitch angle tracking performance, and Figure 4 shows the tracking errors. The deflection angles are shown in Figure 5. From the figures, it is indicated that both control methods can track the reference. Thanks to the excellent ability of the ESO to estimate the internal “disturbance", the system can be approximately transformed into a second-order integrator, which is easier to control.
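The gain scheduling of the ADRC baseline (36), with the feature-point values of Table 1, amounts to simple piecewise-linear interpolation over time; a minimal Python sketch follows.

```python
import numpy as np

# Feature points from Table 1: time (s), kp, kd
t_pts  = np.array([0.0, 35.0, 45.0, 55.0, 75.0, 100.0, 150.0])
kp_pts = np.array([71.35, 96.30, 60.58, 73.46, 62.73, 67.80, 15.04])
kd_pts = np.array([103.16, 19.25, 9.03, 10.21, 11.69, 15.91, 19.12])

def adrc_control(t, phi_r, dphi_r, phi, dphi, f_hat):
    """ADRC law (36) with piecewise-linearly interpolated gains."""
    kp = np.interp(t, t_pts, kp_pts)
    kd = np.interp(t, t_pts, kd_pts)
    return kp * (phi_r - phi) + kd * (dphi_r - dphi) - f_hat
```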
Figure 3. Pitch angle tracking performance for the nominal longitudinal control system.

Figure 4. Pitch angle tracking error for the nominal longitudinal control system.

Figure 5. Deflection angle for the nominal longitudinal control system.

It is easily seen from Figure 4 that the proposed controller tracks the reference more precisely. The ADRC controller can be regarded as a PD controller with disturbance compensation. When the HSV works in a large flight envelope, especially when the reference pitch angle changes rapidly, the set of offline-tuned gains in ADRC may not perform well. Although the feature points can be selected more densely in practical applications, this may still not reach the performance of a continuous adaptive method, and the computational load will increase due to the interpolation operations. Moreover, the parameter tuning procedure for traditional controllers like PID and ADRC is quite complex, while the proposed controller remains effective even in a large envelope.

Then, considering the existence of external disturbances and parametric uncertainties, another set of simulations is performed, under sustained disturbance and abrupt reference changes. In the simulation, a sinusoidal disturbance is given as $M_{gz}(t) = 2\times10^3\sin(0.5t)$ N·m, and an uncertainty of ±20% in the mass $m$ and the moment of inertia $J_z$ is added to show the parameter robustness.

The tracking performance and tracking error of the system using the proposed control method are shown in Figures 6 and 7, while those using the intelligent ADRC controller are shown in Figures 8 and 9. It can be inferred that both methods track the reference quickly and remain stable in general due to the introduction of the ESO. The time-varying disturbances, as well as the parametric uncertainties and unmodeled dynamics, can be lumped together as the total disturbance, which is estimated by the ESO and actively compensated for. However, the proposed method shows a certain superiority in tracking error, since the controller gains are tuned adaptively. Also, the accuracy of the disturbance estimation is ensured by setting the ESO gains large enough, but the existence of measurement noise limits the observer gains in practical applications. Hence there will be a phase lag in the estimation, and the estimation error cannot be neglected. In this respect, the adaptive method estimates the upper bound $\eta$ of the estimation error and can handle it as well.

Figure 6. Pitch angle tracking performance using the proposed control method subject to disturbance and parametric uncertainties.

Figure 7. Pitch angle tracking error using the proposed control method subject to disturbance and parametric uncertainties.

Figure 8. Pitch angle tracking performance using the intelligent ADRC controller subject to disturbance and parametric uncertainties.

Figure 9. Pitch angle tracking error using the intelligent ADRC controller subject to disturbance and parametric uncertainties.
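To tie the pieces together, the following is a minimal closed-loop simulation skeleton that reuses the `sigma`, `pitch_dynamics`, `eso3_step`, and `adaptive_backstepping_step` functions sketched earlier, with the gain values stated above. The value of $a_4$, the sinusoidal reference, and the zero-disturbance plant stand-in are placeholders, not the paper's reentry trajectory or aerodynamic data.

```python
import numpy as np

# Gains as stated in the simulation setup; dt matches 100 samples/s.
c1, c2, c, lam1, lam2 = 10.0, 20.0, 3.0, 0.1, 0.1
omega_o, dt = 5.0, 0.01
delta_bar = np.deg2rad(30.0)
a4 = 1.0                       # placeholder control-effectiveness value

state = np.zeros(2)            # [phi, phi_dot]; stand-in plant with f(.) = 0
x_hat = np.zeros(3)            # ESO estimates, zero initial values
eta_hat, gamma_hat = 0.1, 0.1  # adaptive gains, initial values

for k in range(15000):         # 150 s of simulated flight
    t = k * dt
    # Placeholder sinusoidal reference (amplitude 5 deg, 0.05 rad/s)
    w, amp = 0.05, np.deg2rad(5.0)
    phi_r, dphi_r, ddphi_r = (amp * np.sin(w * t),
                              amp * w * np.cos(w * t),
                              -amp * w**2 * np.sin(w * t))

    phi, dphi = state
    delta_z, eta_hat, gamma_hat = adaptive_backstepping_step(
        phi_r, dphi_r, ddphi_r, phi, dphi, x_hat[2],
        eta_hat, gamma_hat, (c1, c2, c, lam1, lam2, a4), dt)

    applied = sigma(delta_z, delta_bar) * delta_z        # saturated deflection
    x_hat = eso3_step(x_hat, phi, applied, a4, omega_o, dt)
    state = state + dt * pitch_dynamics(state, delta_z, 0.0, a4, delta_bar)
```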
## 5. Conclusion
In this paper, the longitudinal control problem for HSVs subject to actuator saturation and disturbances is studied. Applying the small perturbation assumption, the longitudinal dynamics can be treated as a second-order system at every trimming point, with a total disturbance including unmodeled dynamics and parametric uncertainties. An ESO is then constructed to estimate and actively compensate for the total disturbance, in order to decouple the system and improve the robustness. To deal with the large envelope and actuator saturation, an adaptive backstepping control scheme is designed to control the pitch angle. The presented method requires very little model information, and the closed-loop convergence is proved. Finally, the simulation results indicate a quick and smooth tracking performance and verify that the proposed method is effective. Further work may focus on the trajectory tracking control of HSVs in 6 DoF.
---
*Source: 1019621-2019-03-03.xml* | 1019621-2019-03-03_1019621-2019-03-03.md | 35,186 | Adaptive Backstepping Control for Longitudinal Dynamics of Hypersonic Vehicle Subject to Actuator Saturation and Disturbance | Zhiqiang Jia; Tianya Li; Kunfeng Lu | Mathematical Problems in Engineering
(2019) | Engineering & Technology | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2019/1019621 | 1019621-2019-03-03.xml | ---
## Abstract
In this paper, an adaptive backstepping control strategy is presented to solve the longitudinal control problem for a hypersonic vehicle (HSV) subject to actuator saturation and disturbances. Small perturbation linearization transforms the dynamics to a seconded-order system at each trimming point, with total disturbance including unmodeled dynamics, parametric uncertainties, and external disturbances. The disturbance can be estimated and compensated for by an extended state observer (ESO), and thus the system is decoupled. To deal with the actuator saturation and wide flight envelope, an adaptive backstepping control strategy is designed. A rigorous proof of finite-time convergence is provided applying Lyapunov method. The effectiveness of the proposed control scheme is verified in simulations.
---
## Body
## 1. Introduction
Attitude control is a typical nonlinear control problem, which is very important for spacecraft, missile, HSV, and so on in engineering practice. According to small perturbation assumption, the flight dynamics of HSVs can be linearized around the trimming points. Then, the classical control methods such as PID and feedback linearization are employed to design the controller [1–3]. Hypersonic vehicle is required to work within a large flight envelope to meet the challenge of highly maneuverable targets in all probable engagements [4]. Due to the wide envelope, a great number of operating points should be carefully chosen to cover the full envelop, and an effective controller should be designed for each point. Note that the designed control algorithms should guarantee stability with superior control performance and robustness throughout the flight envelope [5, 6]. However, the equations of motion that govern the behavior of an HSV are nonlinear and time varying, which make flight control system design for aircraft a complex and difficult problem. Considering the uncertainties in aerodynamic parameters, nonlinearities and measurement noises, the task of guaranteeing favorable control performance and robustness throughout the entire flight envelope is a challenging one.Due to the practical importance of HSV, the attitude control has attracted extensive attention and a great number of control methods have been designed to improve control performance. In [7–10], the backstepping procedure is designed for an angle of attack autopilot. Furthermore, to deal with the explosion of the complexity associated with the backstepping method, improving the dynamic surface control method has been studied in [11] on the longitudinal dynamics of HSV. Although the above literature can obtain some meaningful results on the HSV control, this algorithm has low robustness under the uncertainties. Since sliding mode control (SMC) technique provides robustness to internal parameter variations and extraneous disturbance satisfying the matching condition, addressing the velocity and altitude tracking control of hypersonic vehicle has been introduced in [1, 10, 12, 13]. These designed laws guaranteed that both velocity and altitude track fast their reference trajectories respectively under both uncertainties and external disturbances, and it has been also confirmed by the simulation results. But the above-mentioned SMC methods inherently suffer from the chattering problem, which is an undesired phenomenon in practice systems. In [14], the H∞ control with gain scheduling and dynamic inversion (DI) methods is designed for the aircraft. The time-domain and frequency-domain analysis procedures show that these methods possess strong robustness and high control performance. Based on DI methods, the feedback linearization technique is used to design the controller for the attitude control of HSV [15–17], where the nonlinear dynamics of system is converted into a chain of integrators in the design of inner-loop nonlinear control law. Although DI method can obtain some meaningful results on the HSV control, this algorithm is sensitive to the modelling errors [18–22]. Therefore, a model-free framework for the DI strategy is urgently needed.Based on the feedback linearization approach, Han proposes a novel philosophy, active disturbance rejection control (ADRC), which does not rely on a refined dynamic model [23]. 
A linear ADRC controller is then presented for practical applications, since the parameter tuning process is simplified [24]. ESO is the core concept of ADRC and has demonstrated its effectiveness in many fields. In [25], an ESO-based control scheme is presented to handle the initial turning problem for a vertical launching air-defense missile. Based on the work of [25], Tian et al. [26] employ ADRC to a generic nonlinear uncertain system with mismatched disturbances, and then a robust output feedback autopilot for an HSV is devised. In [27], a control method combining ADRC and optimal control is discussed for large aircraft landing attitude control under external disturbances. The results show that the ESO technique can guarantee the unknown disturbance and model uncertainties rejection. Actually, actuator saturation affects virtually all practical control systems. It may result in performance degradation and even induce instability. The hypersonic vehicle dynamics is inherently nonlinear, and the number of available results by considering actuator saturation in the design and analysis of hypersonic vehicle dynamics is still limited [13, 28]. In this case, the flight envelope of hypersonic vehicle is narrow and the existing actuator saturation deteriorates the control performance, thus the effective control method should be investigated for HSV.The paper mainly focuses on the longitudinal control problem of HSVs suffering actuator saturation and disturbances within large envelop. The main contributions of this paper are threefold:(1)
A linear ESO is used to estimate and compensate for the total disturbances of the small-perturbation-linearized longitudinal dynamics of HSV, which consists of unmodeled dynamics, parametric uncertainties, and external disturbances.(2)
An adaptive backstepping control law is designed to deal with the large envelop and actuator saturation problem, and a rigorous proof of stability is provided by employing Lyapunov theory.(3)
The control structure considers the complex aerodynamics and ensures the tracking performance with very little requirement of model information, which is suitable for engineering practices.The remainder of this paper is organized as follows: the longitudinal flight model of HSV and problem formulation are presented in the “Preliminaries" section. In the “Control Strategy" section, an ESO-based adaptive backstepping controller is designed, and the closed-loop convergence is analyzed. The “Simulation Results" section gives simulation results and some discussions. Finally, conclusions are drawn in the “Conclusion" section.
## 2. Preliminaries
### 2.1. Longitudinal Dynamic Model
This study is concerned with the longitudinal motion of the vehicle. It is assumed that there is no side slip, no lateral motion, and no roll for the hypersonic vehicle. As shown in Figure1, the longitudinal dynamic model for a generic HSV is as follows [6]:(1)mdVdt=-X-mgsinθmVdθdt=Y-mgcosθJzdωzdt=Mz+Mgzdϕdt=ωzα=ϕ-θwhere the drag force, lift force, and pitch moment of the HSV are depicted by (2)X=12ρV2SCXα,V,δzY=12ρV2SCYα,V,δzMz=12ρV2Sc¯CMzα,V,δzThe parameters ρ, S, and c¯ represent the air density, the reference area, and the mean aerodynamic chord, and CX, CY, and CMz represent the drag, lift, and moment coefficients, respectively.Figure 1
Body diagram of an HSV.SinceX, Y, and Mz are all related to δz, the system is strongly coupled. The control objective for this paper is to find a feedback control δz such that the pitch angle can track the desired trajectory of pitch angle very well. Therefore, we should consider the pitch dynamics and small perturbation method is applied.Introduce small perturbation assumption, ignore second order or higher order traces and secondary factors of aerodynamic forces and moment, linearize the equations and develop the perturbation equation in three-dimensional space as follows [6]:(3)dΔVdt=-XVmΔV-XαmΔα-gcosθΔθdΔθdt=YVmVΔV+YαmVΔα+gsinθVΔθ+YδzmVΔδzdΔωzdt=MzVJzΔV+MzαJzΔα+MzωzJzΔωz+MzδzJzΔδz+MgzJzdΔϕdt=ΔωzΔα=Δϕ-Δθwhere the aerodynamic coefficient Ab stands for ∂A/∂b for A∈X,Y,Z and b∈V,α,ωz,δz and can be obtained from prior knowledge.Assuming the velocity to be a constant in a short time, then (3) can be simplified as follows:(4)Δϕ¨-a1Δϕ˙-a2Δα-a3Δα˙-a4Δδz=MgzJz(5)Δθ˙-a5Δθ-a6Δα=a7Δδz(6)Δα=Δϕ-Δθwhere a1=Mzωz/Jz, a2=Mzα/Jz, a3=Mzα˙/Jz, a4=Mzδz/Jz, a5=gsinθ/V, a6=Yα/mV, and a7=Yδz/mV.Then the second-order dynamics of pitch angle (4) can be rewritten as follows:(7)ϕ¨t=f·+a4σδzδztwith the total disturbance f(·) defined as(8)f·=ϕ¨r+a1Δϕ˙+a2Δα+a3Δα˙-a4δz0+MgzJzwith ϕr the reference pitch angle, and δz0 the deflection angle calculated by the nominal system with reference ϕr, which is not required to be known for subsequent controller design.The functionσ(·) is the saturation function of deflection angle defined as follows:(9)σδz=1δzt⩽δz¯δz¯δzt·Signδztδzt>δz¯where δz¯ is the maximum allowable value of the deflection. Obviously, σ∈(0,1].Then an assumption for the disturbance is given as follows.Assumption 1.
The additive disturbances momentMgz is differentiable, and the derivative is bounded.Remark 2.
As shown in (8), the total disturbance f(·) contains coupling terms and external disturbances. According to Assumption 1, f(·) is physically differentiable, which is necessary for further discussion. Applying small perturbation linearization, high-order dynamics and secondary factors are omitted, which brings unmodeled dynamics. Note that the unmodeled dynamics caused by linearization, and the parametric uncertainties can also be concluded in the total disturbance, which is not presented in the equation for simplicity.
### 2.2. Extended State Observer
The ESO is a special state observer estimating both system states and an extended state, which consist of the unknown dynamics and external disturbance of the system. Appropriately designed observers can provide comparatively accurate estimations that can be compensated in the control inputs.Consider a nonlinear system with uncertainties and external disturbances(10)xnt=f1xt,x˙t,…,xn-1t+d1t+butwhere f1(x(t),x˙(t),…,x(n-1)(t)) is an unknown function, d1(t) is the unknown external disturbance, u(t) is the control input, and b is a known constant.For clarity, System (10) can be rewritten as(11)x˙1=x2x˙2=x3⋮x˙n=xn+1+bux˙n+1=hty=x1where(12)xn+1=f1xt,x˙t,…,xn-1t+d1tis the extended state.Then, an ESO can be constructed as follows:(13)x^˙1=x^2-l1x^1-y⋮x^˙n=x^n+1-lnx^1-y+bux^˙n+1=-ln+1x^1-ywhere x^i are the observer outputs and li are positive observer gains, i=1,2,…,n+1.Note that the extended state is the total disturbance, which contains the unknown dynamics and external disturbances. Appropriately designed observers can provide comparatively accurate estimations that can be compensated in the control inputs, which improves the robustness. More detailed description of the principle of ESO can be found in [23].
## 2.1. Longitudinal Dynamic Model
This study is concerned with the longitudinal motion of the vehicle. It is assumed that there is no side slip, no lateral motion, and no roll for the hypersonic vehicle. As shown in Figure1, the longitudinal dynamic model for a generic HSV is as follows [6]:(1)mdVdt=-X-mgsinθmVdθdt=Y-mgcosθJzdωzdt=Mz+Mgzdϕdt=ωzα=ϕ-θwhere the drag force, lift force, and pitch moment of the HSV are depicted by (2)X=12ρV2SCXα,V,δzY=12ρV2SCYα,V,δzMz=12ρV2Sc¯CMzα,V,δzThe parameters ρ, S, and c¯ represent the air density, the reference area, and the mean aerodynamic chord, and CX, CY, and CMz represent the drag, lift, and moment coefficients, respectively.Figure 1
Body diagram of an HSV.SinceX, Y, and Mz are all related to δz, the system is strongly coupled. The control objective for this paper is to find a feedback control δz such that the pitch angle can track the desired trajectory of pitch angle very well. Therefore, we should consider the pitch dynamics and small perturbation method is applied.Introduce small perturbation assumption, ignore second order or higher order traces and secondary factors of aerodynamic forces and moment, linearize the equations and develop the perturbation equation in three-dimensional space as follows [6]:(3)dΔVdt=-XVmΔV-XαmΔα-gcosθΔθdΔθdt=YVmVΔV+YαmVΔα+gsinθVΔθ+YδzmVΔδzdΔωzdt=MzVJzΔV+MzαJzΔα+MzωzJzΔωz+MzδzJzΔδz+MgzJzdΔϕdt=ΔωzΔα=Δϕ-Δθwhere the aerodynamic coefficient Ab stands for ∂A/∂b for A∈X,Y,Z and b∈V,α,ωz,δz and can be obtained from prior knowledge.Assuming the velocity to be a constant in a short time, then (3) can be simplified as follows:(4)Δϕ¨-a1Δϕ˙-a2Δα-a3Δα˙-a4Δδz=MgzJz(5)Δθ˙-a5Δθ-a6Δα=a7Δδz(6)Δα=Δϕ-Δθwhere a1=Mzωz/Jz, a2=Mzα/Jz, a3=Mzα˙/Jz, a4=Mzδz/Jz, a5=gsinθ/V, a6=Yα/mV, and a7=Yδz/mV.Then the second-order dynamics of pitch angle (4) can be rewritten as follows:(7)ϕ¨t=f·+a4σδzδztwith the total disturbance f(·) defined as(8)f·=ϕ¨r+a1Δϕ˙+a2Δα+a3Δα˙-a4δz0+MgzJzwith ϕr the reference pitch angle, and δz0 the deflection angle calculated by the nominal system with reference ϕr, which is not required to be known for subsequent controller design.The functionσ(·) is the saturation function of deflection angle defined as follows:(9)σδz=1δzt⩽δz¯δz¯δzt·Signδztδzt>δz¯where δz¯ is the maximum allowable value of the deflection. Obviously, σ∈(0,1].Then an assumption for the disturbance is given as follows.Assumption 1.
The additive disturbances momentMgz is differentiable, and the derivative is bounded.Remark 2.
As shown in (8), the total disturbance f(·) contains coupling terms and external disturbances. According to Assumption 1, f(·) is physically differentiable, which is necessary for further discussion. Applying small perturbation linearization, high-order dynamics and secondary factors are omitted, which brings unmodeled dynamics. Note that the unmodeled dynamics caused by linearization, and the parametric uncertainties can also be concluded in the total disturbance, which is not presented in the equation for simplicity.
## 2.2. Extended State Observer
The ESO is a special state observer estimating both system states and an extended state, which consist of the unknown dynamics and external disturbance of the system. Appropriately designed observers can provide comparatively accurate estimations that can be compensated in the control inputs.Consider a nonlinear system with uncertainties and external disturbances(10)xnt=f1xt,x˙t,…,xn-1t+d1t+butwhere f1(x(t),x˙(t),…,x(n-1)(t)) is an unknown function, d1(t) is the unknown external disturbance, u(t) is the control input, and b is a known constant.For clarity, System (10) can be rewritten as(11)x˙1=x2x˙2=x3⋮x˙n=xn+1+bux˙n+1=hty=x1where(12)xn+1=f1xt,x˙t,…,xn-1t+d1tis the extended state.Then, an ESO can be constructed as follows:(13)x^˙1=x^2-l1x^1-y⋮x^˙n=x^n+1-lnx^1-y+bux^˙n+1=-ln+1x^1-ywhere x^i are the observer outputs and li are positive observer gains, i=1,2,…,n+1.Note that the extended state is the total disturbance, which contains the unknown dynamics and external disturbances. Appropriately designed observers can provide comparatively accurate estimations that can be compensated in the control inputs, which improves the robustness. More detailed description of the principle of ESO can be found in [23].
## 3. Control Strategy
In this section, an ESO-based pitch controller for HSV pitch angle controller is devised. Adaptive backstepping technique is applied with ESO to reject disturbances and unmodeled dynamics. In the first stage, the estimation of total disturbances, including external disturbances and unmodeled dynamics, is discussed. Then, in the second stage, an adaptive backstepping controller is designed, while the estimation of the disturbances is used as a time-varying parameter to improve robustness.
### 3.1. Lumped Disturbance Estimation
In order to improve the robustness, an extended state observer is used to estimate and compensate for the disturbance. Letx1=ϕ, x2=ϕ˙, and regard total disturbance f(·) as an extended state x3, then the longitudinal system in (7) can be expressed as follows:(14)x˙1=x2x˙2=x3+a4σδzδzx˙3=h·where h(·)=f˙(·) is bounded according to the discussion in Remark 2.Consider the third-order linear ESO as follows:(15)x^˙1=x^2-3ωox^1-x1x^˙2=x^3-3ωo2x^1-x1+a4σδzδzx^˙3=-ωo3x^1-x1where x^1, x^2, and x^3 are the observer outputs and ωo>0 is the bandwidth of the ESO. Note that few model information is required for observer design except a4.It is obvious that the characteristic polynomial is Hurwitz, and the observer is bounded-input-bounded-output (BIBO) stable. Define the estimation error asx~i=xi-x^i,i=1,2,3, the observer estimation error is as follows:(16)x~1=x~2-3ωox~1x~2=x~3-3ωo2x~1x~3=h·-ωo3x~1Letεi=x~1/ωoi-1,i=1,2,3, and (16) can be simplified as follows:(17)ε˙=ωoAε+Bh·ωo2where ε=ε1ε2ε3T, B=001T and (18)A=-310-301-100is Hurwitz.Lemma 3 (see [29]).
Assumingh(·) is bounded, there exist a constant ϱi>0 and a finite T1>0, so that(19)x~it⩽ρi,ρi=O1ωok1,i=1,2,3,∀t⩾T1>0for some positive integer k1, where O(·) represents the infinitely small.Remark 4.
SinceA is Hurwitz, error system (17) is BIBO stable. Therefore, the requirement of ESO is the boundedness of h(·). Lemma 3 also indicates a good steady estimation performance of ESO, and the estimation error can be reduced to a sufficient small range within a finite time T1 by increasing the bandwidth ωo. Since the disturbance f(·) is partly compensated for and the actual disturbance on system becomes x~i(t), the steady control performance is likely to be improved.With a well-tuned ESO, the total disturbancef(·) can be actively estimated by x^3. For simplicity, here we denote f^=x^3. Since the observer (15) is BIBO stable, the estimation error of f(·) is bounded. Defining η as the upper bound of estimation error, there is(20)f^-f⩽η
### 3.2. Adaptive Backstepping Control
The first step of backstepping design is to define the tracking error ase1=ϕr-x1 and a virtual input as(21)χ1=c1e1+ϕ˙rwhere c1 is a positive constant.Then define the angular velocity tracking error as(22)e2=χ1-x2and thus(23)e˙1=ϕ˙r-x2=-c1e1+e2The derivative of the virtual input is(24)χ˙1=c1e˙1+ϕ¨r=-c12e1+c1e2+ϕ¨rDesign an adaptive controller as(25)δz=ca4γ^χ˙1-f^+η^e2+c2a4e2with adaption update laws(26)η^˙=λ1ce22γ^˙=λ2ce22γ^3χ˙1-f^+η^where c, c2, λ1, and λ2 are designed parameters.The proposed ESO-based adaptive backstepping control structure for HSV system is shown in Figure2.Figure 2
Block diagram of proposed ESO-based adaptive backstepping control structure for HSV system.
### 3.3. Stability Analysis
The convergence of the tracking errors is established by Theorems5 and 6.Theorem 5.
For system (1) controlled by (25) and (26), where initial estimated values satisfy 1/4cc1<η^(0)<η, the error e2 converges into a small neighborhood of origin |e2|<ϵ=1/c-1/4c1η^(0) within finite time tϵ>0 and is guaranteed to be uniformly ultimately bounded (UUB) for t⩾tε.Proof.
Consider the Lyapunov function(27)V1=12e12+12e22+12λ1η~2+12λ2γ~2where η~=η^-η, γ~=γ^-1-γ, and γ is a positive constant satisfying 0<γ⩽σ(δz)⩽1.
Considering the dynamics (7) and proposed control law (25), the derivative of V1 can be derived as(28)V˙1=-c1e12+e1e2+e2e˙2+1λ1η^-ηη^˙-1λ2γ^-1-γγ^-1γ^˙=-c1e12+e1e2+1λ1η^-ηη^˙-1λ2γ^-1-γγ^-1γ^˙+e2χ˙1-f-a4σδzca4γ^χ˙1-f^+η^e2+c2a4e2=-c1e12+e1e2+e2χ˙1-f-ce22γ^σδzχ˙1-f^+η^-c2σδze22+1λ1η^-ηη^˙-1λ2γ^-1-γγ^-1γ^˙=-c1e12+e1e2+e2χ˙1-f^+f^-f-ce22γ^σδzχ˙1-f^+η^-c2σδze22+1λ1η^-ηη^˙-1λ2γ^-1-γγ^-1γ^˙
Letϑ=|χ˙1-f^|, then(29)V˙1⩽-c1e12+e1e2+e2ϑ+η-ce22γγ^ϑ+η^-c2γe22+1λ1η^-ηη^˙-1λ2γ^-1-γγ^-1γ^˙
Substituting (26) into (29) yields(30)V˙1⩽-c1e12+e1e2+e2ϑ+η-ce22γγ^ϑ+η^-c2γe22+ce22η^-η-ce2ϑ+η^1-γγ^=-c1e12+e1e2+e2ϑ+η-c2γe22-ce22ϑ+η=-c1e1-12c1e22+e224c1-c2γe22-ce22ϑ+η+e2ϑ+η=-c1e1-12c1e22-c2γe22-ϑ+ηc-14c1ϑ+ηe22-e2
Provided that0<η^(0)<η, we obtain 0<η^(0)<ϑ+η, that is,(31)1ϑ+η<1η^0Therefore,(32)V˙1⩽-c1e1-12c1e22-c2γe22-ϑ+ηc-14c1η^0e22-e2
Since1/4cc1<η^(0), ϵ=1/c-1/4c1η^(0)>0. If |e2|>ϵ,(33)V˙1⩽-c1e1-12c1e22-c2γe22which means that V1 is decreasing and bounded, and the error e2 converges into a small neighborhood of origin; i.e., e2<ε within a finite time tε>0. Even though the tracking error enters the region |e2|<ε within a finite time, it may move in and out since the nonnegativity cannot be guaranteed in the range. However, when it moves out, the Lyapunov function V1 becomes negative again and the error is driven back to the region. Therefore, e2 is guaranteed to be UUB for t⩾tεz.Theorem 6.
Considering system (1) controlled by (25) and (26), the output tracking can be accomplished with virtual control input (21).Proof.
To illustrate the reference state tracking, Lyapunov function is chosen as follows:(34)V2=12e12
The derivative ofV2 with (23) equals(35)V˙2=-c1e12+e1e2
According to Theorem5, it has been proved that e2 is bounded. Thus, by selecting positive c1 large enough, we obtain V˙2<0 when V2 is out of a certain bounded region. Therefore, e1 is also UUB by which x1 tracking ϕr is guaranteed. Note that, from (35), it is clear that V2 will not converge to zero due to the existence of e2. It also implies that the state x1 can only converge into a neighborhood of the origin and remain within it.Remark 7.
Small perturbation linearization is a typical engineering method for HSV attitude control and nominal values of parameters are usually used in the design process, which brings the problems of structural and parametric uncertainties. Applications indicate that ESO can estimate the total disturbance well even ifa4 is not calculated precisely. Thus the proposed method needs only a little model information and the adaptive law guarantees a smooth tracking performance within a wide flight envelope, which simplifies the designing process.
## 3.1. Lumped Disturbance Estimation
In order to improve the robustness, an extended state observer is used to estimate and compensate for the disturbance. Letx1=ϕ, x2=ϕ˙, and regard total disturbance f(·) as an extended state x3, then the longitudinal system in (7) can be expressed as follows:(14)x˙1=x2x˙2=x3+a4σδzδzx˙3=h·where h(·)=f˙(·) is bounded according to the discussion in Remark 2.Consider the third-order linear ESO as follows:(15)x^˙1=x^2-3ωox^1-x1x^˙2=x^3-3ωo2x^1-x1+a4σδzδzx^˙3=-ωo3x^1-x1where x^1, x^2, and x^3 are the observer outputs and ωo>0 is the bandwidth of the ESO. Note that few model information is required for observer design except a4.It is obvious that the characteristic polynomial is Hurwitz, and the observer is bounded-input-bounded-output (BIBO) stable. Define the estimation error asx~i=xi-x^i,i=1,2,3, the observer estimation error is as follows:(16)x~1=x~2-3ωox~1x~2=x~3-3ωo2x~1x~3=h·-ωo3x~1Letεi=x~1/ωoi-1,i=1,2,3, and (16) can be simplified as follows:(17)ε˙=ωoAε+Bh·ωo2where ε=ε1ε2ε3T, B=001T and (18)A=-310-301-100is Hurwitz.Lemma 3 (see [29]).
Assumingh(·) is bounded, there exist a constant ϱi>0 and a finite T1>0, so that(19)x~it⩽ρi,ρi=O1ωok1,i=1,2,3,∀t⩾T1>0for some positive integer k1, where O(·) represents the infinitely small.Remark 4.
SinceA is Hurwitz, error system (17) is BIBO stable. Therefore, the requirement of ESO is the boundedness of h(·). Lemma 3 also indicates a good steady estimation performance of ESO, and the estimation error can be reduced to a sufficient small range within a finite time T1 by increasing the bandwidth ωo. Since the disturbance f(·) is partly compensated for and the actual disturbance on system becomes x~i(t), the steady control performance is likely to be improved.With a well-tuned ESO, the total disturbancef(·) can be actively estimated by x^3. For simplicity, here we denote f^=x^3. Since the observer (15) is BIBO stable, the estimation error of f(·) is bounded. Defining η as the upper bound of estimation error, there is(20)f^-f⩽η
## 3.2. Adaptive Backstepping Control
The first step of backstepping design is to define the tracking error ase1=ϕr-x1 and a virtual input as(21)χ1=c1e1+ϕ˙rwhere c1 is a positive constant.Then define the angular velocity tracking error as(22)e2=χ1-x2and thus(23)e˙1=ϕ˙r-x2=-c1e1+e2The derivative of the virtual input is(24)χ˙1=c1e˙1+ϕ¨r=-c12e1+c1e2+ϕ¨rDesign an adaptive controller as(25)δz=ca4γ^χ˙1-f^+η^e2+c2a4e2with adaption update laws(26)η^˙=λ1ce22γ^˙=λ2ce22γ^3χ˙1-f^+η^where c, c2, λ1, and λ2 are designed parameters.The proposed ESO-based adaptive backstepping control structure for HSV system is shown in Figure2.Figure 2
Block diagram of proposed ESO-based adaptive backstepping control structure for HSV system.
## 3.3. Stability Analysis
The convergence of the tracking errors is established by Theorems5 and 6.Theorem 5.
For system (1) controlled by (25) and (26), where initial estimated values satisfy 1/4cc1<η^(0)<η, the error e2 converges into a small neighborhood of origin |e2|<ϵ=1/c-1/4c1η^(0) within finite time tϵ>0 and is guaranteed to be uniformly ultimately bounded (UUB) for t⩾tε.Proof.
Consider the Lyapunov function(27)V1=12e12+12e22+12λ1η~2+12λ2γ~2where η~=η^-η, γ~=γ^-1-γ, and γ is a positive constant satisfying 0<γ⩽σ(δz)⩽1.
Considering the dynamics (7) and proposed control law (25), the derivative of V1 can be derived as(28)V˙1=-c1e12+e1e2+e2e˙2+1λ1η^-ηη^˙-1λ2γ^-1-γγ^-1γ^˙=-c1e12+e1e2+1λ1η^-ηη^˙-1λ2γ^-1-γγ^-1γ^˙+e2χ˙1-f-a4σδzca4γ^χ˙1-f^+η^e2+c2a4e2=-c1e12+e1e2+e2χ˙1-f-ce22γ^σδzχ˙1-f^+η^-c2σδze22+1λ1η^-ηη^˙-1λ2γ^-1-γγ^-1γ^˙=-c1e12+e1e2+e2χ˙1-f^+f^-f-ce22γ^σδzχ˙1-f^+η^-c2σδze22+1λ1η^-ηη^˙-1λ2γ^-1-γγ^-1γ^˙
Letϑ=|χ˙1-f^|, then(29)V˙1⩽-c1e12+e1e2+e2ϑ+η-ce22γγ^ϑ+η^-c2γe22+1λ1η^-ηη^˙-1λ2γ^-1-γγ^-1γ^˙
Substituting (26) into (29) yields(30)V˙1⩽-c1e12+e1e2+e2ϑ+η-ce22γγ^ϑ+η^-c2γe22+ce22η^-η-ce2ϑ+η^1-γγ^=-c1e12+e1e2+e2ϑ+η-c2γe22-ce22ϑ+η=-c1e1-12c1e22+e224c1-c2γe22-ce22ϑ+η+e2ϑ+η=-c1e1-12c1e22-c2γe22-ϑ+ηc-14c1ϑ+ηe22-e2
Provided that0<η^(0)<η, we obtain 0<η^(0)<ϑ+η, that is,(31)1ϑ+η<1η^0Therefore,(32)V˙1⩽-c1e1-12c1e22-c2γe22-ϑ+ηc-14c1η^0e22-e2
Since1/4cc1<η^(0), ϵ=1/c-1/4c1η^(0)>0. If |e2|>ϵ,(33)V˙1⩽-c1e1-12c1e22-c2γe22which means that V1 is decreasing and bounded, and the error e2 converges into a small neighborhood of origin; i.e., e2<ε within a finite time tε>0. Even though the tracking error enters the region |e2|<ε within a finite time, it may move in and out since the nonnegativity cannot be guaranteed in the range. However, when it moves out, the Lyapunov function V1 becomes negative again and the error is driven back to the region. Therefore, e2 is guaranteed to be UUB for t⩾tεz.Theorem 6.
Considering system (1) controlled by (25) and (26), the output tracking can be accomplished with virtual control input (21).Proof.
To illustrate the reference state tracking, Lyapunov function is chosen as follows:(34)V2=12e12
The derivative ofV2 with (23) equals(35)V˙2=-c1e12+e1e2
According to Theorem5, it has been proved that e2 is bounded. Thus, by selecting positive c1 large enough, we obtain V˙2<0 when V2 is out of a certain bounded region. Therefore, e1 is also UUB by which x1 tracking ϕr is guaranteed. Note that, from (35), it is clear that V2 will not converge to zero due to the existence of e2. It also implies that the state x1 can only converge into a neighborhood of the origin and remain within it.Remark 7.
Small perturbation linearization is a typical engineering method for HSV attitude control and nominal values of parameters are usually used in the design process, which brings the problems of structural and parametric uncertainties. Applications indicate that ESO can estimate the total disturbance well even ifa4 is not calculated precisely. Thus the proposed method needs only a little model information and the adaptive law guarantees a smooth tracking performance within a wide flight envelope, which simplifies the designing process.
## 4. Simulation Results
In this section, simulation results for an HSV are provided to verify the feasibility and efficiency of the proposed control scheme. The reference trajectory used in the simulations is a typical trajectory of reentry segment. The longitudinal dynamics (1) are simulated as the real system, while the controller design procedure is based on the linearized model (3). The simulations are run for 150 seconds and at 100 samples per second. The controller gains are c1=10, c2=20, c=3, λ1=0.1, λ2=0.1, and the ESO gains are tuned by bandwidth method with bandwidth ωo=5. The initial value of adaptive gains are γ^(0)=0.1, η^(0)=0.1, and ones of estimations are x^1(0)=x^2(0)=x^3(0)=0. The deflection angle δz is bounded by [-30∘,30∘].The control performance using an intelligent ADRC controller is also given to show the superiority of the proposed method. In the ADRC controller design, the control law is as follows:(36)δz=kpϕr-ϕ+kdϕ˙r-ϕ˙-f^where f^ is the estimation of total disturbances by an ESO with the same bandwidth ωo=5; control gains kp, kd at several feature points are optimized by genetic algorithm (GA) according to the nominal linearized model and then interpolated, in order to achieve a smooth and quick tracking. The control gains at feature points are shown in Table 1.Table 1
Control gains at feature points.
Time(s) k p k d 0 71.35 103.16 35 96.30 19.25 45 60.58 9.03 55 73.46 10.21 75 62.73 11.69 100 67.80 15.91 150 15.04 19.12To begin with, a set of comparative simulations for nominal longitudinal dynamics is studied, with no external disturbances and parametric uncertainties. Figure3 shows the pitch angle tracking performance, and Figure 4 shows the tracking errors. The deflection angles are shown in Figure 5. Form the figures, it is indicated that both control methods can track the reference. Thanks to the excellent estimation ability of ESO to the internal “disturbance", the system can be approximately transformed into a second-order integrator which is easier to be controlled.Figure 3
Pitch angle tracking performance for nominal longitudinal control system.Figure 4
Pitch angle tracking error for nominal longitudinal control system.Figure 5
Deflection angle for nominal longitudinal control system.It is easily seen from Figure4 that the proposed controller tracks the reference more precisely. The ADRC controller can be regarded as a PD controller with compensation of disturbances. When the HSV works in a large flight envelop, especially when the reference pitch angle changes rapidly, the set of offline-tuned gains in ADRC may not perform well. Although the feature points will be selected more densely in practical applications, it may not reach the performance of a continuous adaptive method and the computational load will increase due to interpolation operations. Moreover, the parameter tuning procedure for traditional controllers like PID and ADRC is quite complex, while the proposed controller can be effective even in large envelop.Then considering the existence of external disturbances and parametric uncertainties, another set of simulation is done. The simulations are done under sustained disturbance and abrupt reference changes. In the simulation, a sinusoidal wave disturbance is given asMgz(t)=2×103sin0.5t N·m, and uncertainty of ±20% in mass m and moment of inertia Jz is added to show the parameter robustness.The tracking performance and tracking error of system using proposed control method are shown in Figures6 and 7, while the ones using intelligent ADRC controller are shown in Figures 8 and 9. It can be inferred that both methods track the reference soon and remain stable in general due to the introduction of ESO. The time-varying disturbances, as well as parametric uncertainties and unmodeled dynamics, can be lumped together as the disturbances, which can be estimated by ESO and actively compensated for. However, the proposed method shows certain superiority in tracking error, since the controller gains are tuned adaptively. Also, the accuracy of disturbance estimation is ensured by setting the ESO gains large enough, but the existence of measurement noises adds a limit to observer gains in practical applications. Hence there will be a phase lag for estimation and the estimation error cannot be neglected. From this aspect, the adaptive method estimates the upper bound η of estimation error and can also handle it.Figure 6
Figure 6
Pitch angle tracking performance using the proposed control method subject to disturbance and parametric uncertainties.

Figure 7
Pitch angle tracking error using the proposed control method subject to disturbance and parametric uncertainties.

Figure 8
Pitch angle tracking performance using the intelligent ADRC controller subject to disturbance and parametric uncertainties.

Figure 9
Pitch angle tracking error using the intelligent ADRC controller subject to disturbance and parametric uncertainties.
## 5. Conclusion
In this paper, the longitudinal control problem for HSVs subject to actuator saturation and disturbance is studied. Applying the small-perturbation assumption, the longitudinal dynamics can be treated as a second-order system at every trimming point, with a total disturbance that includes unmodeled dynamics and parametric uncertainties. An ESO is then constructed to estimate and actively compensate for the total disturbance, in order to decouple the system and improve robustness. To deal with the large envelope and actuator saturation, an adaptive backstepping control scheme is designed to control the pitch angle. The presented method requires very little model information, and the closed-loop convergence is proved. Finally, simulation results show quick and smooth tracking performance and verify that the proposed method is effective. Further work may focus on the trajectory tracking control of HSVs in 6 DoF.
---
*Source: 1019621-2019-03-03.xml* | 2019 |
# Exploring the Role of C-C Motif Chemokine Ligand-2 Single Nucleotide Polymorphism in Pulmonary Tuberculosis: A Genetic Association Study from North India
**Authors:** Sanjay K. Biswas; Mayank Mittal; Ekata Sinha; Vandana Singh; Nidhi Arela; Bharat Bajaj; Pramod K. Tiwari; Vishwa M. Katoch; Keshar K. Mohanty
**Journal:** Journal of Immunology Research
(2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1019639
---
## Abstract
The C-C motif chemokine ligand-2 (CCL2) has been reported to be associated with tuberculosis susceptibility in some ethnic groups. In the present study, an effort was made to find out the association of CCL2-2518 A>G and -362 G>C variants with susceptibility to TB in a population from North India. Genotyping was carried out in 373 participants with pulmonary TB (PTB) and 248 healthy controls (HCs) for the CCL2-2518 A>G and -362 G>C polymorphisms by PCR-RFLP and by melting curve analysis using fluorescence-labeled hybridization (fluorescence resonance energy transfer, FRET) probes, respectively, followed by DNA sequencing in a few representative samples. Genotype and allele frequencies were compared by the chi-squared test and by crude and Mantel-Haenszel (M-H) odds ratios (OR); ORs were calculated using STATA/MP16.1 software. Further, CCL2, IL-12p70, IFN-γ, TNF-α, and TGF-β levels were measured in serum samples of these participants using commercially available kits. Our analysis indicated that the homozygous mutant genotypes at both loci, -2518 GG (OR=2.07, p=0.02) and -362 CC (OR=1.92, p=0.03), were associated with susceptibility to pulmonary TB. Further, the heterozygous genotypes -2518 AG (OR=0.60, p=0.003) and -362 GC (OR=0.64, p=0.013) provided resistance against PTB disease. Haplotype analysis revealed the AC haplotype (p=0.006) to be a risk factor associated with PTB susceptibility. The serum CCL2 level was significantly elevated among participants with the -2518 AA genotype compared to the -2518 GG genotype. The CCL2 level was positively correlated with IL-12p70, IFN-γ, and TNF-α, suggesting an immunological regulatory role of CCL2 against pulmonary tuberculosis. The CCL2-2518 GG and -362 CC genotypes were found to be associated with susceptibility to pulmonary tuberculosis, and the CCL2-2518 AG and CCL2-362 GC genotypes with resistance against PTB. The AC haplotype was found to be a risk factor for PTB in the present study. It may be hypothesized from these findings that the -2518G allele is responsible for lower production of CCL2, which leads to a defective Th1 response and makes a host susceptible to pulmonary tuberculosis.
---
## Body
## 1. Introduction
Tuberculosis (TB) is a major health concern all over the world. Globally, approximately 10 million people fell ill with TB in 2018, with national incidence ranging from 5 to 500 cases per 100,000 population; 57% were men, 32% were women, and 11% were children under 15 years of age, and 1.2 million died of TB [1]. Geographically, eight countries accounted for two-thirds of the global TB burden, with India the highest at 27% and South Africa the lowest at 3% (the 2019 edition of the global TB report was released on 17 October 2019) (http://www.who.int/tb/data).

Susceptibility to infectious diseases after exposure to a pathogen is a complex mechanism that involves interactions among the host, the pathogen, and environmental factors [2]. Many studies have supported the crucial role of host genetic factors in susceptibility to PTB [3]. On exposure to M. tuberculosis, our first line of defense comes into play and activates adaptive immunity, which is mainly driven by CD4+ T cells and macrophages, supported by a network of inflammatory cytokines (IFN-γ and TNF-α) and chemokines. Chemokines are small-molecular-weight proteins involved in immunoregulatory and inflammatory functions [4]; based on their N-terminal cysteine residues, they are categorized into the C, C-C, C-X-C, and C-X3-C subfamilies [5]. CCL2, a strong chemotactic and proinflammatory chemokine belonging to the C-C family, has been reported to provide protection against M. tuberculosis [6] and to be stimulated by TNF-α along with the activation of macrophages [7, 8]. The chemokine gene maps to human chromosome 17q11-17q12; the two known polymorphisms, -2518 A>G (rs1024611) and -362 G>C (rs2857656), lie in the promoter region, and mutations at these positions affect gene expression and have also been linked to tuberculosis susceptibility [9].

Various studies worldwide have been conducted to understand the effect of these variants on susceptibility or resistance to pulmonary tuberculosis (PTB). The first study in this respect was conducted by Flores-Villanueva in a Mexican population, where the odds of developing pulmonary tuberculosis were reported to be 2.3- and 5.4-fold higher in carriers of the AG and GG genotypes, respectively, than in homozygous AA carriers; GG carriers also had the highest level of plasma CCL2 and the lowest level of plasma IL-12p40 [10]. A study of populations from Ghana and Russia reported -2518G and -362C to be more prevalent in control groups than in PTB cases, indicating a protective effect of these alleles against PTB in the Ghanaian population, whereas no association was found in the Russian population [11]. Another study from Mexico and Peru reported that the joint effect of the CCL2-2518 GG genotype with MMP1-1607 GG increased the risk of developing PTB by 3.59-fold in the Mexican and 3.9-fold in the Peruvian population, respectively [12]. Arji et al. [13] reported a higher prevalence of the CCL2-2518G allele in a healthy Moroccan population, suggesting a potential protective effect of the allele against PTB. A meta-analysis by Gong et al. [14] revealed that the G allele of the CCL2-2518 polymorphism is a risk factor for PTB in Asians and Americans but not Africans, and that the C allele of the -362 G>C polymorphism is a protective factor for tuberculosis in these populations.
A study conducted on the Sahariya tribe from India analyzed the -2518 A>G and -362 G>C polymorphisms in PTB cases and healthy controls but did not find any association with PTB disease [15]. Another study, from a South Indian population, reported a significantly decreased frequency of the CCL2-2518 GG genotype in male patients with PTB and a significantly increased frequency of the same genotype among female patients with PTB; these results suggested that the -2518 GG genotype may be associated with protection in males and with susceptibility to PTB in females [16]. In a recent meta-analysis, an association between the CCL2-2518 A>G polymorphism and human TB susceptibility was reported [17]. Earlier studies conducted on populations from Hong Kong [18] and South Africa [19] could not find any significant association with the disease. These two polymorphisms (-2518 A>G and -362 G>C) of CCL2, located in the promoter region of the gene, are known to play an important role in immune gene regulation. The divergence between the earlier worldwide reports and those from Indian populations prompted us to analyze these polymorphisms in the north Indian population from Agra, India.

So, the present study was conducted with two main objectives: to address the association of the CCL2-2518 A>G and -362 G>C polymorphisms and haplotypes with TB in a population of the northern part of India, and to analyze the correlation between the levels of serum CCL2 and cytokines in TB cases and controls with respect to their genotypes.
## 2. Materials and Methods
### 2.1. Study Subjects
The present study was part of a larger ongoing project at the institute, approved by the institute's human ethics committee, which was constituted following the guidelines laid down by the Indian Council of Medical Research, New Delhi [20]. Before the start of the study, an interview schedule covering the demographic details of the cases and controls was formulated along with the written informed consent form, and these were also approved by the institute's ethics committee. Written informed consent was obtained from all participants of the study; for minors or children below 18 years of age, written informed consent was obtained from a parent or guardian. 373 pulmonary tuberculosis (PTB) cases (mean age 32.47±12.94; male : female 253 : 120) and 248 healthy controls (mean age 33.71±12.82; male : female 122 : 126) were included in the study. We analyzed the CCL2-2518 A>G polymorphism in 373 PTB cases and 248 healthy controls and the CCL2-362 G>C polymorphism in 330 PTB cases and 235 healthy controls. Information on each subject's age, sex, smoking habits, drinking habits, and BCG vaccination was collected with the help of the interview schedule for both cases and controls.
### 2.2. Pulmonary Tuberculosis Patients (PTB)
Cases with pulmonary TB within the age group of 16 to 63 years were included in the study. PTB cases were recruited from the outpatient department (OPD) of the State Tuberculosis Demonstration Centre (STDC), Agra, during the period from 2007 to 2012; they were registered in the OPD on Mondays, Wednesdays, and Fridays, met the inclusion and exclusion criteria for PTB cases, and agreed to participate in the study. The cases were mainly residents of Agra or nearby areas within the state of Uttar Pradesh. They were recruited on the basis of defined clinical criteria, including the standard respiratory symptoms (fever, cough, expectoration, and malaise). Sputum smear and/or culture positivity was diagnosed on the basis of acid-fast bacillus (AFB) smear positivity by Ziehl-Neelsen staining and clinical symptoms, following the guidelines of the Revised National TB Control Programme (RNTCP) [21]. As a routine, two sputum samples were collected over 2 days (on-spot/morning sputum) and, by definition, a new smear-positive pulmonary TB case was diagnosed only when either sputum sample showed a smear-positive result. AFB culture was performed on Lowenstein-Jensen (LJ) slants, and M. tuberculosis was confirmed by biochemical tests following the protocol described by Vestal [22]. We excluded from the study all cases showing symptoms of other forms of tuberculosis, seropositivity for HIV infection, or any other immunosuppressive disorder such as diabetes mellitus, as well as those who had taken anti-TB drugs before.
### 2.3. Healthy Controls
Healthy controls included subjects who escorted PTB cases to the hospital but were not blood relatives of the cases; randomly selected healthy subjects residing in the same area as the patients, identified by a house-to-house survey; and postgraduate students who were short-term trainees in the institute. All were between 16 and 63 years of age, agreed to participate in the study, and met the inclusion criteria for controls. Subjects with a recent history of fever, viral infection, other illness, or any other immunological disease, those who had undergone treatment for tuberculosis or leprosy in the past, those with any family history of tuberculosis, and persons found positive on AFB smear tests were excluded from the study. The controls were inoculated intradermally with 0.1 ml (5 tuberculin units) of PPD antigen, and induration was noted 48 to 72 hours after application in 104 healthy controls; among them, 34 (32.69%) were positive and 70 (67.30%) were negative for PPD. A detailed description of the PTB cases and healthy controls is given in Table 1.
Table 1
Characteristics of pulmonary TB cases and healthy controls included in the study.

| Characteristic | Pulmonary TB cases (N=373) | Healthy controls (N=248) | p value |
| --- | --- | --- | --- |
| Age (mean ± SD) | 32.47 ± 12.94 | 33.71 ± 12.82 | 0.24* |
| Gender (male : female) | 253 : 120 | 122 : 126 | 0.00003** |
| AFB smear positivity | | ND | |
| 1+ | 107 | | |
| 2+ | 85 | | |
| 3+ | 134 | | |
| Scanty | 19 | | |
| X-ray positive | 25 | | |
| Not known | 3 | | |
| AFB culture positivity | | ND | |
| 1+ | 99 | | |
| 2+ | 99 | | |
| 3+ | 17 | | |
| Scanty | 153 | | |
| Not known | 5 | | |
| PPD status | ND | | |
| Positive | | 34 (32.69%) | |
| Negative | | 70 (67.30%) | |

*p value from t-test; **p value from χ² test. The PPD test was carried out in 104 healthy control subjects; the percentage of positivity was calculated among these subjects.
### 2.4. Collection of Blood Samples and DNA Extraction
A total of 4 ml of blood was collected from each subject: 2 ml was collected in tubes containing acid citrate dextrose (ACD), from which DNA was isolated following the manufacturer's instructions using a DNA isolation kit (Midi prep, Qiagen, Germany). The other 2 ml was collected in tubes without anticoagulant for separating the serum, and the separated serum was stored at -20°C with protease inhibitor for the CCL2 and other serum cytokine assays.
### 2.5. Selection of Single Nucleotide Polymorphism (SNP) and Sample Size
The CCL2-2518 A>G and -362 G>C polymorphisms have been reported to be associated with susceptibility or resistance to TB in various populations of the world [6–19]. However, the variation in reported results, together with the fact that the two polymorphisms are located in the promoter region of a gene that plays an important role in immune regulatory mechanisms, induced us to analyze them in the north Indian population from Agra, India. So, in the present study, we intended to address the association of the CCL2-2518 A>G and -362 G>C polymorphisms with TB in a population of the northern part of India, along with the levels of serum CCL2 and cytokines. Initially, a pilot study was carried out with a small sample size; after positive results were found, the full sample size was calculated by statistical methods.
### 2.6. Genotyping of -2518 A>G and -362 G>C Single Nucleotide Polymorphisms
Genotyping of the CCL2-2518 A>G polymorphism was carried out using a polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) method, as described previously by Flores-Villanueva [10]. The region containing the -2518 A>G polymorphism in the CCL2 promoter was amplified from 100 ng of genomic DNA with the forward primer 5′-GCTCCGGGCCCAGTATCT-3′ and reverse primer 5′-ACAGGGAAGGTGAAGGGTATGA-3′. The restriction enzyme PvuII was used for the detection of the CCL2 alleles: the G allele was identified by the generation of two fragments of 182 bp and 54 bp after digestion, and the A allele by the presence of an undigested 236 bp fragment. The fragments were resolved by agarose gel electrophoresis.

Genotyping of the -362 G>C polymorphism was performed by melting curve analysis using fluorescence-labeled hybridization probes (TIB Molbiol, Berlin, Germany) on the LightCycler 480 system (Roche Diagnostics, Berlin, Germany), following the modified protocol of Thye et al. [11]. The sense primer 5′-GAGCCTGACATGCTTTCATCTA-3′ and the antisense primer 5′-TTTCCATTCACTGCTGAGAC-3′ were used along with the FRET probes 5′-TTCGCTTCACAGAAAGCAGAATCCTTA-3′ (3′ labeled with fluorescein) and 5′-AAATAACCCTCTTAGTTCACATCTGTGGTCAGTCT-3′ (5′ labeled with LCRed640). PCR was performed using 1.5 μl of DNA, sense and antisense primers at 1.25 pmol, 2.5 mM MgCl2, and 250 nM each of the sensor and anchor probes. The sensor probe was labeled with fluorescein at the 3′ end, and the anchor probe with LightCycler Red 640 at the 5′ end. Differences in the temperatures of the melting peaks distinguished the homozygous and heterozygous genotypes.
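Purely as an illustration of the PCR-RFLP readout described above, the sketch below maps PvuII fragment patterns to -2518 A>G genotype calls using the band sizes given in the text (an undigested 236 bp fragment for the A allele; 182 bp + 54 bp fragments for the G allele). The function name and the set-based gel reading are hypothetical conveniences, not part of the published protocol.

```python
def call_ccl2_2518_genotype(bands_bp):
    """Map observed PvuII fragment sizes (bp) to a -2518 A>G genotype.
    Band sizes follow the text: 236 bp undigested -> A allele;
    182 bp + 54 bp digested -> G allele. Illustrative only."""
    bands = set(bands_bp)
    has_a = 236 in bands            # undigested fragment -> A allele
    has_g = {182, 54} <= bands      # digested fragments  -> G allele
    if has_a and has_g:
        return "AG"
    if has_a:
        return "AA"
    if has_g:
        return "GG"
    raise ValueError(f"Unrecognized band pattern: {sorted(bands)}")


print(call_ccl2_2518_genotype([236]))           # AA
print(call_ccl2_2518_genotype([182, 54]))       # GG
print(call_ccl2_2518_genotype([236, 182, 54]))  # AG
```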
### 2.7. DNA Sequencing
The region covering both polymorphisms, -2518 A>G and -362 G>C, of the CCL2 gene was amplified in 10 samples of each genotype using the sequence-specific primers reported previously [10, 11] and sequenced with the ABI BigDye Terminator v2 kit (Applied Biosystems, Foster City, CA, USA), following the ABI-recommended protocol, on an ABI 3700 capillary sequencer.
### 2.8. Estimation of CCL2, IL-12p70, IFN-γ, TNF-α, and TGF-β
Serum CCL2, IL-12p70, IFN-γ, TNF-α, and TGF-β were assayed using the respective human Duoset enzyme-linked immunosorbent assay (ELISA) Development Systems (R&D Systems, Minneapolis, MN, USA) in 120 tuberculosis cases (40 representative cases each with the wild-type, heterozygous, and mutant genotypes) and 54 healthy controls (20 representative controls each with the wild-type and heterozygous genotypes and 14 with the mutant genotype, with respect to the CCL2-2518 A>G polymorphism).
### 2.9. Statistical Analysis
Allele and genotype frequencies of each polymorphism were determined by direct counting. Hardy-Weinberg equilibrium (HWE) was examined in controls and patients by the χ² test. Genotype and allele frequencies were compared between patients and controls by the chi-squared test; the magnitude of association was expressed as the odds ratio (OR) with 95% CI, and p<0.05 was considered significant for all analyses. Genotypic associations under the dominant, recessive, and overdominant models were tested using STATA/MP16.1 software (StataCorp LP, Lakeway Drive, College Station, TX, USA). The Mantel-Haenszel (M-H) estimate was also calculated after adjusting for sex, and both crude and M-H estimates are reported.

Analysis of linkage disequilibrium (LD) and haplotypes between the SNPs was carried out using the online software SNPStats. Levels of serum CCL2 and cytokines were compared using either the Mann-Whitney or the Kruskal-Wallis test. The Spearman rank correlation test was performed using STATA/SE 11.0 software.
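For readers who want to reproduce the crude association statistics, the sketch below computes an odds ratio with a Woolf 95% confidence interval and a Pearson chi-squared p value from a 2×2 table. It is a minimal illustration with names of our choosing, not the STATA procedure actually used (which also produced the sex-adjusted Mantel-Haenszel estimates).

```python
import math

from scipy.stats import chi2


def odds_ratio_2x2(a, b, c, d):
    """Crude odds ratio with Woolf 95% CI and a 1-df Pearson
    chi-squared p value for a 2x2 table:
        a = exposed cases,   b = exposed controls,
        c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    # Pearson chi-squared without continuity correction.
    n = a + b + c + d
    chi_sq = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return or_, (lo, hi), chi_sq, chi2.sf(chi_sq, df=1)


# Recessive model for -2518 A>G (Table 2): GG vs AA+AG.
or_, ci, chi_sq, p = odds_ratio_2x2(a=42, b=15, c=331, d=233)
print(f"OR={or_:.2f}, 95% CI=({ci[0]:.2f}-{ci[1]:.2f}), "
      f"chi2={chi_sq:.2f}, p={p:.3f}")
# -> OR=1.97, 95% CI=(1.07-3.64), chi2=4.85, p=0.028,
#    in line with the values reported in Table 2.
```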
## 3. Results
### 3.1. Demographic Parameter Analysis
A total of 373 PTB cases and 248 healthy controls (HCs) in the age group of 16 to 63 years were included in the present study. The mean ages of the PTB cases (32.47) and healthy controls (33.71) did not differ significantly; however, the male-to-female ratio was significantly higher in PTB cases than in HCs (p=0.007) (Table 1).

Out of the 373 PTB cases, AFB smear positivity could be analyzed for 370 cases and AFB culture positivity for 368 cases; the detailed distributions are given in Table 1. The PTB cases were further analyzed on the basis of bacterial load (scanty, 1+, 2+, and 3+) and for any significant difference with respect to age, gender, and polymorphism, but we did not find any significant difference or association.

Healthy controls were also tested for PTB disease, and none of them showed any sign of AFB smear or culture positivity. Out of the 248 HCs, 104 could be inoculated with PPD (purified protein derivative), of whom 32.69% were PPD positive and 67.30% PPD negative. PPD-positive HCs were followed for the duration of the study, and none of them developed the active form of PTB.
### 3.2. Genotypic Analysis of CCL2-2518 A>G (rs1024611) and -362 G>C (rs2857656) Single Nucleotide Polymorphisms
The CCL2-2518 A>G polymorphism was analyzed in 373 PTB cases and 248 HCs. The genotype and allele frequencies were in Hardy-Weinberg equilibrium in both cases and controls (p>0.05). Although the male : female ratio was significantly higher in PTB cases than in healthy controls, the genotype frequencies for both CCL2-2518 A>G and -362 G>C did not differ significantly between males and females (p=0.81 and 0.93). On comparing the genotypic frequencies of PTB cases and healthy controls, a significant difference was observed for the heterozygous AG genotype, which was significantly more frequent in controls (0.43, p<0.003), and for the homozygous GG genotype, which was significantly more frequent in PTB patients (0.11, p=0.004). The A allele was the major allele, with frequencies of 0.73 and 0.72 in cases and controls, respectively; these did not differ significantly (p=0.79). Given the significant difference in genotypic frequencies, we analyzed the association of the genotypes with disease under various models and observed that, under the overdominant model, the heterozygous AG genotype conferred resistance against PTB disease [OR=0.60 (95% CI=0.43-0.84), p=0.003]. On the other hand, under the recessive model, the homozygous recessive genotype GG carried a nearly twofold risk of developing the disease [OR=1.97 (95% CI=1.06-3.64), p=0.02; M-H estimate after adjusting for sex, OR=2.07 (CI=1.10-3.91)] (Table 2).

The CCL2-362 G>C polymorphism was analyzed in 330 PTB cases and 235 HCs. This polymorphism was also in Hardy-Weinberg equilibrium in both PTB cases and healthy controls (p>0.05). Significant differences in the frequencies of the homozygous CC and heterozygous GC genotypes were observed for the CCL2-362 G>C polymorphism (rs2857656) between PTB cases and healthy controls: the frequency of the homozygous CC genotype was significantly higher in PTB cases (0.13) than in healthy controls (0.07), whereas the frequency of the heterozygous GC genotype was higher in healthy controls (0.45) than in PTB cases (0.35). In the association analysis under the overdominant model, the heterozygous GC genotype provided protection against PTB disease [OR=0.65 (95% CI=0.46-0.92), p=0.01], whereas under the recessive model the homozygous CC genotype was associated with disease susceptibility [OR=1.87 (CI=1.03-3.38), p=0.03] (Table 2). The PTB cases were further categorized on the basis of bacterial load (scanty, 1+, 2+, and 3+) and gender, but we did not find any significant difference or association for either polymorphism or their genotypes.
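The Hardy-Weinberg check reported above can be reproduced with a short goodness-of-fit calculation. The sketch below is a minimal illustration using the control-group counts for -2518 A>G from Table 2, not the pipeline actually used in the study.

```python
def hwe_chi_square(n_aa, n_ab, n_bb):
    """1-df chi-squared goodness-of-fit statistic for Hardy-Weinberg
    equilibrium, computed from biallelic genotype counts."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)          # frequency of allele A
    q = 1 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    observed = (n_aa, n_ab, n_bb)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))


# -2518 A>G genotype counts in healthy controls (Table 2):
# AA=126, AG=107, GG=15.
chi_sq = hwe_chi_square(126, 107, 15)
print(f"chi2 = {chi_sq:.2f}")
# ~1.55, below the 1-df 5% critical value of 3.84, consistent
# with the reported p > 0.05.
```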
### 3.3. Haplotype Frequency and Linkage Disequilibrium Analysis of the -2518 A>G (rs1024611) and -362 G>C (rs2857656) Loci of CCL2 among PTB Cases and Healthy Controls
A total of 1090 chromosomes were studied in the haplotype analysis of these two loci on chromosome 17q11-17q12. All four haplotypes were present in the studied population. In PTB cases, the frequency of the AG haplotype (0.70) was higher than those of the GC (0.25), AC (0.04), and GG (0.01) haplotypes; among healthy controls, the AG haplotype (0.68) was likewise more frequent than the GC (0.28), AC (0.004), and GG (0.002) haplotypes. The frequency of the AC haplotype was significantly higher in PTB cases (0.046) than in HCs (0.004) [OR=7.23 (CI=1.74-30.09), p=0.006] (Table 3). A strong LD was observed between the two sites (D′=0.961, p=0.00), indicating that the studied loci are in linkage disequilibrium with each other in the present population.
Table 2
Genotype analysis of CCL2-2518 A>G and -362 G>C polymorphisms in PTB cases and healthy controls.

**-2518 A>G (PTB cases N=373; healthy controls N=248)**

| Genotype/allele | PTB cases, No. (frequency) | Healthy controls, No. (frequency) | χ² (df) | OR (95% CI) | p value |
| --- | --- | --- | --- | --- | --- |
| AA | 214 (0.57) | 126 (0.51) | | | |
| AG | 117 (0.31) | 107 (0.43) | | | |
| GG | 42 (0.11) | 15 (0.06) | 11.31 (2) | | 0.004 |
| A allele | 545 (0.73) | 359 (0.72) | | | |
| G allele | 201 (0.27) | 137 (0.28) | 0.07 (1) | 0.97 (0.74-1.24) | 0.79 |
| MH-OR (adj. for sex) | | | 0.02 (1) | 0.98 (0.75-1.27) | 0.88 |
| Dominant model: AA | 214 (0.57) | 126 (0.51) | | | |
| AG+GG | 159 (0.43) | 122 (0.49) | 2.59 (1) | 0.76 (0.55-1.06) | 0.10 |
| MH-OR (adj. for sex) | | | 2.32 (1) | 0.77 (0.55-1.07) | 0.12 |
| Overdominant model: AA+GG | 256 (0.69) | 141 (0.57) | | | |
| AG | 117 (0.31) | 107 (0.43) | 8.95 (1) | 0.60 (0.43-0.84) | 0.003 |
| MH-OR (adj. for sex) | | | 8.81 (1) | 0.60 (0.42-0.84) | 0.003 |
| Recessive model: AA+AG | 331 (0.89) | 233 (0.94) | | | |
| GG | 42 (0.11) | 15 (0.06) | 4.85 (1) | 1.97 (1.06-3.64) | 0.028 |
| MH-OR (adj. for sex) | | | 5.35 (1) | 2.07 (1.10-3.91) | 0.02 |

**-362 G>C (PTB cases N=330; healthy controls N=235)**

| Genotype/allele | PTB cases, No. (frequency) | Healthy controls, No. (frequency) | χ² (df) | OR (95% CI) | p value |
| --- | --- | --- | --- | --- | --- |
| GG | 174 (0.53) | 113 (0.48) | | | |
| GC | 114 (0.35) | 105 (0.45) | | | |
| CC | 42 (0.13) | 17 (0.07) | 8.18 (2) | | 0.017 |
| G allele | 462 (0.70) | 331 (0.70) | | | |
| C allele | 198 (0.30) | 139 (0.30) | 0.02 (1) | 1.02 (0.78-1.3) | 0.87 |
| MH-OR (adj. for sex) | | | 0.05 (1) | 1.03 (0.79-1.33) | 0.82 |
| Dominant model: GG | 174 (0.53) | 113 (0.48) | | | |
| GC+CC | 156 (0.47) | 122 (0.52) | 1.18 (1) | 0.83 (0.59-1.16) | 0.27 |
| MH-OR (adj. for sex) | | | 1.13 (1) | 0.83 (0.59-1.16) | 0.28 |
| Overdominant model: GG+CC | 216 (0.65) | 130 (0.55) | | | |
| GC | 114 (0.34) | 105 (0.45) | 5.93 (1) | 0.65 (0.46-0.92) | 0.015 |
| MH-OR (adj. for sex) | | | 6.15 (1) | 0.64 (0.45-0.91) | 0.013 |
| Recessive model: GG+GC | 288 (0.87) | 218 (0.93) | | | |
| CC | 42 (0.13) | 17 (0.07) | 4.42 (1) | 1.87 (1.03-3.38) | 0.035 |
| MH-OR (adj. for sex) | | | 4.89 (1) | 1.92 (1.06-3.47) | 0.027 |
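For the linkage disequilibrium result quoted in Section 3.3, the normalized statistic D′ can be computed directly from haplotype and allele frequencies. The sketch below is illustrative: it uses the case haplotype frequencies from Table 3, whereas the reported D′=0.961 was presumably estimated by SNPStats on the pooled sample.

```python
def d_prime(f_ab, p_a, p_b):
    """Normalized linkage disequilibrium D' between two biallelic loci,
    given the frequency f_ab of the A-B haplotype and the marginal
    allele frequencies p_a and p_b. D = f_ab - p_a*p_b is scaled by
    its maximum attainable magnitude given the allele frequencies."""
    d = f_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return d / d_max


# Haplotype frequencies in PTB cases (Table 3): AG=0.706, AC=0.046,
# GG=0.011, giving p(A at -2518)=0.752 and p(G at -362)=0.717.
print(f"D' = {d_prime(0.706, 0.752, 0.717):.2f}")
# ~0.94 in cases alone; close to, but not identical with, the
# pooled-sample D' = 0.961 reported in the text.
```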
### 3.4. Serum Analysis of CCL2, IL-12p70, IFN-γ, TNF-α, and TGF-β
CCL2/MCP-1, IL-12p70, IFN-γ, TNF-α, and TGF-β were measured in serum samples of 120 pulmonary TB patients and 54 healthy controls. The CCL2 level was significantly elevated in PTB patients compared to healthy controls (p<0.005) (Figure 1) and varied among CCL2-2518 genotypes in PTB patients (Spearman corr = −0.225, p=0.01). A significantly higher mean level of serum CCL2 was observed in cases with the -2518 homozygous AA genotype (352.8 pg/ml) compared to the levels in cases with the -2518 heterozygous AG genotype (183.3 pg/ml) and the -2518 homozygous GG genotype (119.3 pg/ml) (p=0.03).

Figure 1
Comparative analysis of the serum CCL2 level in PTB patients and healthy controls. Serum CCL2 was measured by sandwich ELISA using Duoset kits (R&D Systems, USA) in pulmonary TB patients and controls. The level is expressed as pg/ml on the Y-axis and represented as a box-and-whisker plot. Each dot above a vertical box plot represents the outside value of one subject. Each box has whiskers on both sides, marking the upper and lower adjacent values, and shows the 75th percentile, median, and 25th percentile from the upper hinge to the lower hinge. Subjects are represented on the X-axis. HC: healthy controls; PTB: pulmonary TB patients; AA: subjects with the CCL2-2518 AA genotype; AG: subjects with the CCL2-2518 AG genotype; GG: subjects with the CCL2-2518 GG genotype. (a) Comparison of the CCL2 level between HCs and PTB patients. (b) Comparison of the CCL2 level among PTB patients with the various CCL2 genotypes. (c) Comparison of the CCL2 level among HCs with the various genotypes. Pairwise comparisons were made by the Wilcoxon rank-sum test, and group comparisons by the Kruskal-Wallis equality-of-populations rank test. p values are shown above the box plots.
The serum IL-12p70 level was significantly higher in PTB patients compared to healthy controls (p=0.0000) (Figure 2) and differed significantly among subjects carrying the various CCL2-2518 genotypes (p<0.05); it was significantly higher in PTB patients with the homozygous AA and heterozygous AG genotypes compared to the homozygous GG genotype.

Figure 2
Comparative analysis of the serum IL-12p70 level in PTB patients and healthy controls. Serum IL-12p70 was measured by sandwich ELISA using Duoset kits (R&D Systems, USA) in pulmonary TB patients and controls. The level is expressed as pg/ml on the Y-axis and represented as a box-and-whisker plot. Each dot above a vertical box plot represents the outside value of one subject. Each box has whiskers on both sides, marking the upper and lower adjacent values, and shows the 75th percentile, median, and 25th percentile from the upper hinge to the lower hinge. Subjects are represented on the X-axis. HC: healthy controls; PTB: pulmonary TB patients; AA: subjects with the CCL2-2518 AA genotype; AG: subjects with the CCL2-2518 AG genotype; GG: subjects with the CCL2-2518 GG genotype. (a) Comparison of the IL-12p70 level between HCs and PTB patients. (b) Comparison of the IL-12p70 level among PTB patients with the various CCL2 genotypes. (c) Comparison of the IL-12p70 level among HCs with the various genotypes. Pairwise comparisons were made by the Wilcoxon rank-sum test, and group comparisons by the Kruskal-Wallis equality-of-populations rank test. p values are shown above the box plots.
The serum CCL2 level was significantly positively correlated with the serum IL-12p70 level in healthy controls as well as in PTB cases. On analysis with reference to the specific CCL2-2518 A>G genotypes, the serum CCL2 level was significantly positively correlated with serum IL-12p70 in healthy controls with the homozygous AA genotype (Spearman r=0.79, p=0.000) and the heterozygous AG genotype (r=0.68, p=0.0009), whereas in PTB cases it was significantly positively correlated with serum IL-12p70 in the homozygous GG genotype only (Spearman rho=0.51, p=0.0008).

The Th1 cytokines IFN-γ and TNF-α were also analyzed in relation to the CCL2-2518 A>G variant and correlated with serum CCL2; a significant positive correlation was found.

Regression analysis of CCL2 with all these cytokines revealed that the level of serum CCL2 was significantly correlated with the levels of serum IL-12p70 (regression coefficient 0.37, p=0.048) and IFN-γ (regression coefficient 1.69, p=0.00) in healthy controls.
## 4. Discussion
The present study explored the genetic frequencies of the CCL2-2518 A>G and -362 G>C polymorphisms in a north Indian population from Agra, India. Pulmonary TB cases and healthy controls were recruited keeping in mind their similar environmental exposure and socioeconomic background. Although a number of reports have been published suggesting a role of CCL2 gene polymorphisms, the findings are often contradictory depending on ethnicity [23], population [24], and type of tuberculosis [25, 26]. Other studies from India are based on a south Indian population [27] and on a tribal population [15]; our study is based on a population from the northern part of India, for which no report on the CCL2 gene polymorphism in relation to tuberculosis is available. The participants were enrolled between 2007 and 2012, and the prevalence of tuberculosis in this region was a persuasive reason for carrying out the study. In comparison to other studies, we have attempted to partially address the functional relevance of these polymorphisms with reference to their possible regulatory role in cytokine levels.

In our observations, the -2518A allele and the -2518 AA genotype were the predominant allele and genotype, respectively, in the present population. Flores-Villanueva et al., in a Mexican population, reported the -2518G allele and GG genotype to be the major allele and genotype in their study [10]. They reported a 5.4- and 6.9-fold increased risk of developing TB in carriers of the GG genotype in Mexican and Korean populations, respectively, and we likewise found an association of the CCL2-2518 GG genotype with susceptibility to TB (p=0.02) under the recessive model (Table 2). Our result for the AG genotype, however, was the opposite of theirs: they found 2.3- and 2.8-fold increased risks of developing the disease in the Mexican and Korean populations, respectively, whereas in the present study -2518 AG conferred resistance to the disease (p=0.003) (Table 2). Other population studies have also reported associations with the G allele of the -2518 polymorphism: Gong et al. [14] reported in their meta-analysis that the G allele is a risk factor for TB in Asian and American populations but not in African populations, while Thye et al. [11] and Arji et al. [13] reported a protective role of the G allele in Ghanaian and Moroccan populations, respectively. A study conducted on the Sahariya tribe (a tribe reported to have a high TB prevalence) from India did not find any association with the CCL2-2518 A>G polymorphism [15]. Another study from mainland India, on a South Indian population, reported an association of the CCL2-2518 GG genotype with protection against PTB in males but, in contrast, with susceptibility to the disease in females [16]. In the present study, we found the -2518 GG genotype to carry an approximately twofold increased risk of developing PTB.
Table 3
Haplotype analysis of the -2518 A>G and -362 G>C polymorphisms in PTB cases and controls.

| No. | -2518 A>G SNP | -362 G>C SNP | PTB cases (N=329) | Controls (N=216) | OR (95% CI) | p value |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | A | G | 0.706 | 0.688 | 1 | — |
| 2 | G | C | 0.254 | 0.287 | 0.94 (0.72-1.23) | 0.66 |
| 3 | A | C | 0.046 | 0.004 | 7.23 (1.74-30.09) | 0.006 |
| 4 | G | G | 0.011 | 0.002 | 4.12 (0.51-33.25) | 0.18 |

SNP: single nucleotide polymorphism; N: total number of subjects; p: p value; OR: odds ratio; CI: confidence interval.

The other CCL2 variant reported worldwide is -362 G>C. In the present study, we found the CCL2-362 GC genotype to be significantly more frequent among healthy controls (p=0.01) than among PTB cases, suggesting a protective role of this genotype against PTB; on the other hand, the homozygous -362 CC genotype was found to be a risk genotype (Table 2). This is in contrast to the report by Thye et al. [11], who found both CC and CG genotypes overrepresented in healthy controls in a Ghanaian population and the -362C allele associated with protection against TB. Mishra et al. [15] could not find a significant difference in allele or genotype frequencies of the -362 G>C polymorphism between the PTB cases and HCs of the primitive tribal group "Saharia," although the GC genotype was more frequent in controls. Velez Edwards et al. [28] also could not find any evidence of association of the -362 G>C polymorphism with PTB in Guinea-Bissau, Gambian, and African-American populations. Thye et al. reported significant heterogeneity in the association of the -362 G>C polymorphism with PTB between studies in a meta-analysis of five case-control studies from five ethnicities [11]. Ethnic variation could be the reason for the differences in observations.

Our study subjects were drawn from a population consisting of multireligion communities residing near Agra in Uttar Pradesh and nearby states. Here, we found that the GG genotype of CCL2-2518 A>G is overrepresented in PTB patients, while the AG genotype is more frequent in healthy controls (both tuberculin-positive and tuberculin-negative) than in PTB patients. There were no differences in genotype frequencies between tuberculin-positive and tuberculin-negative individuals, and the frequency of PPD(+) and PPD(−) individuals was the same as the national frequency. It is noteworthy that the heterozygous genotype provides protection from the disease: both polymorphisms, -2518 A>G and -362 G>C, protected against the disease in the heterozygous condition. Heterozygous protection is represented by the overdominance model, which states that polymorphism is maintained because heterozygous individuals are able to recognize a wider variety of parasites [29]. India being a TB-endemic region, we can speculate that this heterozygous effect played some role in local adaptation against TB, as previously described by Sinha et al. [30]. To substantiate our hypothesis, we further analyzed the functional aspect of CCL2 in serum. We detected a significantly higher level of serum CCL2 in PTB patients compared to healthy controls, which is in accordance with earlier studies [4, 5]. In contrast to their observations, our findings indicate a strong association of the serum CCL2 level with the various -2518 A>G genotypes in PTB cases. PTB cases with the -2518 AA genotype showed a significantly higher level of serum CCL2 compared to cases with the -2518 AG and -2518 GG genotypes. Flores-Villanueva et al. [10] and Rovin et al. [31] had noted a higher level of CCL2 in -2518 GG patients and lower serum IL-12.
We observed significantly higher serum levels of IL-12p70 and IFN-γ in PTB cases than in healthy controls. The level of IL-12p70 was positively correlated with the CCL2 level in both groups of study subjects. The level of CCL2 was significantly correlated with IL-12, IFN-γ, and TNF-α in PTB cases with the -2518 GG genotype. Thus, a lower level of CCL2 could be responsible for the low levels of IL-12 and IFN-γ, which are evidently protective cytokines conferring resistance to TB infection. This could be the reason for the susceptibility of the -2518 GG genotype to PTB in the present population: cases with the -2518 GG genotype produced less CCL2 and also less IL-12p70, resulting in increased susceptibility to the disease. Hence, subjects with the -2518 GG genotype are more likely to be prone to M. tuberculosis infection in the present population, and the lower level of IL-12p70 could possibly contribute to the lower immunity of these people. Additionally, healthy people with the AG genotype, showing an intermediate level of IL-12p70, probably regulate the CCL2 level in some way, giving the protective effect against M. tuberculosis infection in the present population. Furthermore, for the first time, the levels of other important serum cytokines were analyzed with reference to the various CCL2-2518 A>G genotypes in the present study. PTB subjects with the -2518GG genotype had a lower level of IFN-γ, presumably due to the lower levels of CCL2 and IL-12p70. The TGF-β level was significantly higher in healthy controls than in PTB cases; the higher level of TGF-β in healthy subjects with the -2518AG genotype is suggestive of a regulatory role in providing protection against TB infection.

We explored the correlation of the serum CCL2 level with IL-12p70, IFN-γ, TNF-α, and TGF-β in subjects with the -2518 A>G variants. Regression analysis suggested that the serum CCL2 concentration regulates the concentration of IL-12p70 and, more strongly, the IFN-γ concentration. Stratification on the basis of genotypes indicated that healthy subjects with the -2518AA or -2518AG genotype and a high level of CCL2 probably positively regulate the production of the key cytokines IL-12p70 and IFN-γ, which could be the reason for their protective immunity. On the other hand, PTB cases with the CCL2-2518 GG genotype had a lower concentration of CCL2, thus lowering the production of IFN-γ and becoming susceptible to infection due to a poor Th1 response. An earlier report by Velez Edwards et al. [28] observed an interaction between one of the CCL2 and IL12B polymorphisms in Africans, with the opposite effect.

The general assumption has been that higher CCL2 promotes a Th2 response and suppresses the Th1 response. The present study showed for the first time that CCL2 is positively correlated with IL-12 and Th1 cytokines such as IFN-γ and TNF-α and is hence essential for a proper Th1 response against tuberculosis. The -2518G allele is responsible for lower production of CCL2, which leads to lower Th1 cytokines and hence to a defective Th1 response that makes a host susceptible to tuberculosis. Divergent observations about genetic associations with diseases across ethnically different populations are well reported; the same nucleotide polymorphism can act differently in different environmental setups to affect susceptibility. These factors include the duration of exposure to infectious agents, the nutritional status of the individuals, and other epigenetic factors.
The varied observations could also be due to genetic differences among populations and the relatively small size of the database.

The C-C motif chemokine ligand 2 (CCL2) is a member of the small inducible gene (SIG) family. CC chemokines are characterized by two adjacent cysteine residues close to the amino terminus of the molecule. They are involved in the recruitment of lymphocytes and monocytes and control the migration of these cells to sites of cell injury and cellular immune reactions [32]. CCL2 is produced by different cell types in response to microbial stimuli [33]. The polymorphisms studied in the present population could influence each other, so haplotype analysis of these polymorphisms becomes important. In the present study, the AG haplotype was found to be the most predominant haplotype in both PTB cases and healthy controls. Interestingly, on haplotype analysis, the AC haplotype was found to be a susceptibility haplotype for tuberculosis (Table 3). The two polymorphisms (-2518 A>G and -362 G>C) were in linkage disequilibrium, with a strong D′ between them. Thye et al. [11] have shown through interaction analysis that the CCL2-362 G>C variant exclusively explains the observed association with resistance to TB, whereas the CCL2-2518 A>G variant was not independent of -362 G>C. These observations suggest that the haplotype block consisting of these two polymorphisms, which are in strong LD, has been jointly inherited in the present population to exert a certain effect on the development of tuberculosis under the prevailing environmental factors. Intemann et al. [34] reported that the haplotype comprising -2581G/-362C/int1del 554-567 confers stronger protection than the -362 G>C variant alone; these haplotype variants result in decreased CCL2 expression and a decreased risk of TB. Ganachari et al. [12] reported that the haplotype consisting of CCL2-2518 GG along with MMP1-1607 GG increases the risk of developing TB 3.5-fold in Mexican and 3.9-fold in Peruvian populations. As the AC haplotype of CCL2-2518 A>G and -362 G>C was observed to be a susceptibility haplotype for M. tuberculosis infection in this region, polymorphisms in these regulatory regions may function in a complex manner, influencing immune function and disease susceptibility. To understand this complex interaction of the polymorphisms in the present population, further in-depth immunological analysis is needed.

The study has its own limitations, such as a comparatively small data set. The functional relevance of these polymorphisms with respect to immune responses to tuberculosis could be addressed in a classified manner.
## 5. Conclusion
This study reports a significant association of the CCL2-2518 GG and -362 CC genotypes with tuberculosis, whereas the heterozygous CCL2-2518AG and -362GC genotypes were associated with resistance against PTB. The biallelic AC haplotype (CCL2-2518 A>G and -362 G>C) was noted to be a susceptibility haplotype for pulmonary tuberculosis. The serum cytokine analysis suggested a complex regulatory mechanism among the CCL2/MCP-1, IL-12p70, and IFN-γ concentrations. CCL2 showed a positive correlation with IL-12, IFN-γ, and TNF-α, and a normal CCL2 level is essential for a normal Th1 response. The -2518G allele produces less CCL2 than the A allele, which leads to an impaired Th1 response and makes a host susceptible to tuberculosis.

In the present study, we have tried to unravel the function of the promoter region of the CCL2 gene, but an in-depth study is warranted to further understand the complex interaction of the polymorphisms in the regulatory region of CCL2.
---

*Source: 1019639-2020-12-17.xml*

# Exploring the Role of C-C Motif Chemokine Ligand-2 Single Nucleotide Polymorphism in Pulmonary Tuberculosis: A Genetic Association Study from North India

**Authors:** Sanjay K. Biswas; Mayank Mittal; Ekata Sinha; Vandana Singh; Nidhi Arela; Bharat Bajaj; Pramod K. Tiwari; Vishwa M. Katoch; Keshar K. Mohanty

**Journal:** Journal of Immunology Research
(2020)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2020/1019639

---
## Abstract
The C-C motif chemokine ligand-2 (CCL2) has been shown to be associated with tuberculosis susceptibility in some ethnic groups. In the present study, an effort was made to find out the association of CCL2-2518 A>G and -362 G>C variants with susceptibility to TB in a population from North India. Genotyping was carried out in 373 participants with pulmonary TB (PTB) and 248 healthy controls (HCs) for the CCL2-2518 A>G and -362 G>C polymorphisms by PCR-RFLP and by melting curve analysis using fluorescence-labeled hybridization (fluorescence resonance energy transfer, FRET) probes, respectively, followed by DNA sequencing in a few representative samples. Genotype and allele frequencies were compared by the chi-squared test and by crude and Mantel-Haenszel (M-H) odds ratios (OR); ORs were calculated using STATA/MP16.1 software. Further, CCL2, IL-12p70, IFN-γ, TNF-α, and TGF-β levels were measured in serum samples of these participants using commercially available kits. Our analysis indicated that the homozygous mutant genotypes at both loci, -2518 GG (OR=2.07, p=0.02) and -362 CC (OR=1.92, p=0.03), were associated with susceptibility to pulmonary TB, whereas the heterozygous genotypes -2518AG (OR=0.60, p=0.003) and -362GC (OR=0.64, p=0.013) provided resistance against PTB. Haplotype analysis revealed the AC haplotype (p=0.006) to be a risk factor associated with PTB susceptibility. The serum CCL2 level was significantly elevated among participants with the -2518 AA genotype compared to the -2518 GG genotype. The CCL2 level was observed to be positively correlated with IL-12p70, IFN-γ, and TNF-α, suggesting an immunological regulatory role of CCL2 against pulmonary tuberculosis. In summary, the CCL2-2518 GG and -362 CC genotypes were found to be associated with susceptibility to pulmonary tuberculosis, and CCL2-2518AG and CCL2-362GC with resistance against PTB; the AC haplotype was found to be a risk factor for PTB in the present study. It may be hypothesized from these findings that the -2518G allele is responsible for lower production of CCL2, which leads to a defective Th1 response and makes a host susceptible to pulmonary tuberculosis.
---
## Body
## 1. Introduction
Tuberculosis (TB) is a major health concern all over the world. Globally, approximately 10 million people fell ill with TB in 2018, with national incidence rates ranging from 5 to 500 cases per 100,000 population; 57% were men, 32% were women, and 11% were children under 15 years of age, and 1.2 million died of TB [1]. Geographically, eight countries accounted for two-thirds of the global TB burden, with India the highest at 27% and South Africa the lowest at 3% (the 2019 edition of the global TB report, released on 17 October 2019; http://www.who.int/tb/data).

Susceptibility to infectious diseases after exposure to a pathogen is a complex mechanism involving interactions among host, pathogen, and environmental factors [2]. Many studies have supported the crucial role of host genetic factors in susceptibility to PTB [3]. On exposure to M. tuberculosis, our first line of defense comes into play and activates adaptive immunity, which is mainly driven by CD4+ T cells and macrophages, supported by a network of inflammatory cytokines (IFN-γ and TNF-α) and chemokines. Chemokines are small molecular weight proteins involved in immunoregulatory and inflammatory functions [4]; based on their N-terminal cysteine residues, they are categorized into the C, C-C, C-X-C, and C-X3-C subfamilies [5]. CCL2, a strong chemotactic and proinflammatory chemokine belonging to the C-C family, has been reported to provide protection against M. tuberculosis [6] and to be stimulated by TNF-α along with the activation of macrophages [7, 8]. The chemokine gene maps to human chromosome 17q11-17q12; the two known polymorphisms, -2518 A>G (rs1024611) and -362 G>C (rs2857656), are located in the promoter region, and mutations in these regions affect gene expression and have also been linked to tuberculosis susceptibility [9].

Various studies worldwide have been conducted to understand the effect of these variants with respect to susceptibility or resistance to pulmonary tuberculosis (PTB). The first study in this respect was conducted by Flores-Villanueva in a Mexican population, where the odds of developing pulmonary tuberculosis were reported to be 2.3- and 5.4-fold higher in carriers of the AG and GG genotypes, respectively, than in homozygous AA; GG carriers also had the highest level of plasma CCL2 and the lowest level of plasma IL-12p40 [10]. A study on populations from Ghana and Russia reported -2518G and -362C to be more prevalent in control groups than in PTB cases, indicating a protective effect of these alleles against PTB in the Ghanaian population; on the other hand, no association was found in the Russian population [11]. Another study from Mexico and Peru reported that the joint effect of the CCL2-2518GG genotype along with MMP1-1607GG increased the risk of developing PTB by 3.59-fold in the Mexican and 3.9-fold in the Peruvian population, respectively [12]. Arji et al. [13] reported a higher prevalence of the CCL2-2518G allele in a healthy Moroccan population, suggesting a potential protective effect of the allele against PTB. A meta-analysis conducted by Gong et al. [14] revealed that the G allele of the CCL2-2518 polymorphism is a risk factor for PTB in Asians and Americans but not in Africans, and that the C allele of the -362 G>C polymorphism is a protective factor for tuberculosis in these populations.
A study conducted on the Sahariya tribe from India analyzed the -2518 A>G and -362 G>C polymorphisms in PTB cases and healthy controls but did not find any association with PTB [15]. Another study, from a South Indian population, reported a significantly decreased frequency of the CCL2-2518GG genotype in male patients with PTB and a significantly increased frequency of the same genotype among female patients with PTB; their results suggested that the -2518GG genotype may be associated with protection in males and susceptibility to PTB in females [16]. In a recent meta-analysis, an association between the CCL2-2518 A>G polymorphism and human TB susceptibility was reported [17]. Earlier studies conducted on populations from Hong Kong [18] and South Africa [19] could not find any significant association with the disease. These two polymorphisms (-2518 A>G and -362 G>C) of CCL2, located in the promoter region of the gene, are known to play an important role in immune gene regulation. The divergence among the earlier worldwide reports and within the Indian population prompted us to analyze these polymorphisms in a north Indian population from Agra, India.

The present study was therefore conducted with two main objectives: to address the association of CCL2-2518 A>G and -362 G>C polymorphisms and haplotypes with TB in a population of the northern part of India, and to analyze the correlation between the levels of serum CCL2 and cytokines in TB cases and controls with respect to their genotypes.
## 2. Materials and Methods
### 2.1. Study Subjects
The present study was part of a major ongoing project at the institute, approved by the institute's human ethics committee, constituted following the guidelines laid down by the Indian Council of Medical Research, New Delhi [20]. Before the start of the study, an interview schedule was formulated covering the demographic details of the cases and controls, along with the written informed consent form, and these were also approved by the institute's ethics committee. Written informed consent was obtained from all participants of the study; for minors (children below 18 years of age), written informed consent was obtained from a parent or guardian. A total of 373 pulmonary tuberculosis (PTB) cases (mean age 32.47±12.94 years; male : female 253 : 120) and 248 healthy controls (mean age 33.71±12.82 years; male : female 122 : 126) were included in the study. We analyzed the CCL2-2518 A>G polymorphism in 373 PTB cases and 248 healthy controls and the CCL2-362 G>C polymorphism in 330 PTB cases and 235 healthy controls. We collected information regarding each subject's age, sex, smoking habits, drinking habits, and BCG vaccination with the help of the interview schedule for both cases and controls.
### 2.2. Pulmonary Tuberculosis Patients (PTB)
Cases with pulmonary TB within the age group of 16 to 63 years were included in the study. PTB cases were recruited from the outpatient department (OPD) of the State Tuberculosis Demonstration Centre (STDC), Agra, during the period from 2007 to 2012; cases were those registered in the OPD on Mondays, Wednesdays, and Fridays who met the inclusion and exclusion criteria and agreed to participate in the study. The cases were mainly residents of Agra or nearby areas within the state of Uttar Pradesh. The cases were recruited on the basis of defined clinical criteria, including the standard respiratory symptoms (fever, cough, expectoration, and malaise). Sputum smear and/or culture positivity was diagnosed on the basis of acid-fast bacillus (AFB) smear positivity by Ziehl-Neelsen staining and clinical symptoms, following the guidelines of the Revised National TB Control Programme (RNTCP) [21]. As a routine, two sputum samples were collected over 2 days (on-spot/morning sputum), and, by definition, a new smear-positive pulmonary TB case was diagnosed only when either of the sputum samples showed a smear-positive result. AFB culture was performed on Lowenstein-Jensen (LJ) slants, and M. tuberculosis was confirmed by biochemical tests following the protocol described by Vestal [22]. We excluded from the study all cases showing symptoms of other forms of tuberculosis, seropositivity for HIV infection, or other immunosuppressive disorders such as diabetes mellitus, as well as those who had taken anti-TB drugs before.
### 2.3. Healthy Controls
Healthy controls included subjects who escorted PTB cases to the hospital but were not blood-related to the cases; randomly selected healthy subjects residing in the same area as the patients, recruited by a house-to-house survey method; subjects between the ages of 16 and 63 years; and postgraduate students who were short-term trainees in the institute, who agreed to participate in the study and met the inclusion criteria for controls. Those with a recent history of fever, viral infection, other illness, or any other immunological disease, those who had undergone treatment for tuberculosis or leprosy in the past, those with any family history of tuberculosis, and persons found to be positive on AFB smear tests were excluded from the study. The controls were inoculated intradermally with 0.1 ml (5 tuberculin units) of PPD antigen, and induration was noted 48 to 72 hours after application in 104 healthy controls; among them, 34 (32.69%) were positive and 70 (67.30%) were negative for PPD. A detailed description of the PTB cases and healthy controls is given in Table 1.
**Table 1.** Characteristics of pulmonary TB cases and healthy controls included in the study.

| Characteristic | Pulmonary TB cases (N = 373) | Healthy controls (N = 248) | p value |
|---|---|---|---|
| Age, mean ± SD | 32.47 ± 12.94 | 33.71 ± 12.82 | 0.24∗ |
| Gender (male : female) | 253 : 120 | 122 : 126 | 0.00003∗∗ |
| AFB smear positivity | | ND | |
| 1+ | 107 | | |
| 2+ | 85 | | |
| 3+ | 134 | | |
| Scanty | 19 | | |
| X-ray positive | 25 | | |
| Not known | 3 | | |
| AFB culture positivity | | ND | |
| 1+ | 99 | | |
| 2+ | 99 | | |
| 3+ | 17 | | |
| Scanty | 153 | | |
| Not known | 5 | | |
| PPD status | ND | | |
| Positive | | 34 (32.69%) | |
| Negative | | 70 (67.30%) | |

∗p value for t-test; ∗∗p value for χ2 test. The PPD test was carried out in 104 healthy control subjects; the percentage of positivity was calculated among these subjects.
### 2.4. Collection of Blood Samples and DNA Extraction
A total of 4 ml of blood was collected from each subject: 2 ml was collected in tubes containing acid citrate dextrose (ACD), from which DNA was isolated following the manufacturer's instructions using a DNA isolation kit (Midi prep, Qiagen, Germany). Another 2 ml of blood was collected in tubes without anticoagulant for separating the serum, and the separated serum was stored at -20°C with protease inhibitor for CCL2 and other serum cytokine assays.
### 2.5. Selection of Single Nucleotide Polymorphism (SNP) and Sample Size
CCL2-2518 A>G and -362 G>C polymorphisms have been reported to be associated with susceptibility or resistance to TB in various populations of the world [6–19]. However, the reported variation in results, together with the fact that these two polymorphisms are located in the promoter region of the gene and play an important role in immune regulatory mechanisms, induced us to analyze them in a north Indian population from Agra, India. So, in the present study, we intended to address the association of the CCL2-2518 A>G and -362 G>C polymorphisms with TB in a population of the northern part of India, along with the levels of serum CCL2 and cytokines. Initially, a small pilot study was carried out, and after positive results were found, the sample size was calculated by statistical methods.
### 2.6. Genotyping of -2518 A>G and -362 G>C Single Nucleotide Polymorphisms
Genotyping of the CCL2-2518 A>G polymorphism was carried out using a polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) method, as described previously by Flores-Villanueva [10]. The region containing the -2518 A>G polymorphism in the CCL2 promoter was amplified from 100 ng of genomic DNA using the forward primer 5′-GCTCCGGGCCCAGTATCT-3′ and reverse primer 5′-ACAGGGAAGGTGAAGGGTATGA-3′. The restriction enzyme PvuII was used for the detection of the CCL2 alleles: the G allele was identified by the generation of two fragments of 182 bp and 54 bp after digestion, and the A allele by the presence of an undigested 236 bp fragment. These fragments were resolved by agarose gel electrophoresis.

Genotyping for the -362 G>C polymorphism was performed by melting curve analysis using fluorescence-labeled hybridization probes (TIB MolBiol, Berlin, Germany) on the LightCycler 480 system (Roche Diagnostics, Berlin, Germany), following the modified protocol of Thye et al. [11]. The sense primer 5′-GAGCCTGACATGCTTTCATCTA-3′ and antisense primer 5′-TTTCCATTCACTGCTGAGAC-3′ were used, along with the FRET probes 5′-TTCGCTTCACAGAAAGCAGAATCCTTA-3′ (3′-labeled with fluorescein) and 5′-AAATAACCCTCTTAGTTCACATCTGTGGTCAGTCT-3′ (5′-labeled with LCRed640). PCR was performed using 1.5 μl of DNA, sense and antisense primers at 1.25 pmol, 2.5 mM MgCl2, and 250 nM each of the sensor probe and anchor probe. The sensor probe was labeled with fluorescein at the 3′ end, and the anchor probe with LightCycler Red 640 at the 5′ end. Differences in melting peak temperatures distinguished the homozygous and heterozygous genotypes.
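To make the RFLP genotype-calling logic concrete, the following minimal Python sketch (illustrative only; the function name and input representation are our assumptions, not the authors' code) maps the PvuII digestion fragment patterns described above to -2518 A>G genotype calls.

```python
# Illustrative sketch: calling CCL2 -2518 A>G genotypes from PvuII
# digestion patterns (Section 2.6). The G allele yields 182 bp + 54 bp
# fragments; the A allele remains as an undigested 236 bp fragment.

def call_2518_genotype(fragments_bp):
    """Infer the -2518 genotype from the set of observed fragment sizes (bp)."""
    has_a = 236 in fragments_bp                # undigested fragment -> A allele
    has_g = {182, 54} <= set(fragments_bp)     # both digestion products -> G allele
    if has_a and has_g:
        return "AG"
    if has_a:
        return "AA"
    if has_g:
        return "GG"
    raise ValueError(f"unrecognized fragment pattern: {fragments_bp}")

print(call_2518_genotype({236}))           # -> AA
print(call_2518_genotype({236, 182, 54}))  # -> AG (heterozygote)
print(call_2518_genotype({182, 54}))       # -> GG
```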
### 2.7. DNA Sequencing
The region covering both polymorphisms, -2518 A>G and -362 G>C, of the CCL2 gene was amplified in 10 samples of each genotype using the sequence-specific primers reported previously [10, 11] and sequenced with the ABI BigDye Terminator v2 kit (Applied Biosystems, Foster City, CA, USA), following the ABI-recommended protocol, on the ABI 3700 capillary sequencer.
### 2.8. Estimation of CCL2, IL-12p70, IFN-γ, TNF-α, and TGF-β
Serum CCL2 and the cytokines IL-12p70, IFN-γ, TNF-α, and TGF-β were assayed using the respective human DuoSet enzyme-linked immunosorbent assay (ELISA) Development Systems (R&D Systems, Minneapolis, MN, USA) in 120 tuberculosis cases (40 representative cases each with the wild-type, heterozygous, and mutant genotypes) and 54 healthy controls (20 representative healthy controls each with the heterozygous and wild-type genotypes and 14 with the mutant genotype, with respect to the CCL2-2518 A>G polymorphism).
### 2.9. Statistical Analysis
Allele and genotype frequencies of each polymorphism were determined by direct counting. Hardy-Weinberg equilibrium (HWE) was examined in controls and patients by the χ2 test. Genotype and allele frequencies were compared between patients and controls by the chi-squared test; the magnitude of association was expressed as the odds ratio (OR) with 95% CI. p<0.05 was considered significant for all analyses. Genotypic associations for the dominant, recessive, and overdominant models were tested using STATA/MP16.1 software (StataCorp LP, College Station, TX, USA). The Mantel-Haenszel (M-H) estimate was also calculated after adjusting for sex, and both crude and M-H estimates are reported.

Analysis of linkage disequilibrium (LD) and haplotypes between the SNPs was carried out using the online software SNPStats. Levels of serum CCL2 and cytokines were compared using the Mann-Whitney or Kruskal-Wallis tests. The Spearman rank correlation test was performed using STATA/SE 11.0 software.
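As a concrete illustration of the two core computations in this section, the short Python sketch below (our own illustration, not the authors' STATA code) runs a Hardy-Weinberg equilibrium χ2 test from genotype counts and computes an odds ratio with a Woolf 95% confidence interval for a 2×2 table; the example reproduces the crude recessive-model estimate for -2518 GG reported later in Table 2.

```python
import math

def hwe_chi2(n_aa, n_ab, n_bb):
    """Chi-squared statistic (1 df) comparing observed genotype counts
    with Hardy-Weinberg expectations."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)                  # frequency of allele A
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    return sum((o - e) ** 2 / e for o, e in zip([n_aa, n_ab, n_bb], expected))

def odds_ratio_ci(a, b, c, d):
    """Odds ratio and Woolf 95% CI for the 2x2 table
    [[exposed cases a, exposed controls b], [other cases c, other controls d]]."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, (or_ * math.exp(-1.96 * se_log), or_ * math.exp(1.96 * se_log))

# Recessive model for -2518 (GG vs AA+AG), counts from Table 2:
print(odds_ratio_ci(42, 15, 331, 233))   # ~ (1.97, (1.07, 3.64)), as reported
```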
## 3. Results
### 3.1. Demographic Parameter Analysis
A total of 373 PTB cases and 248 healthy controls (HCs) in the age group of 16 to 63 years were included in the present study. The mean ages of PTB cases (32.47 years) and healthy controls (33.71 years) did not differ significantly; however, the male-to-female ratio was significantly higher in PTB cases than in HCs (p=0.007) (Table 1).

Out of the 373 PTB cases, AFB smear positivity could be analyzed for 370 cases and AFB culture positivity for 368 cases; the detailed distributions are described in Table 1. The PTB cases were further analyzed on the basis of bacterial load (scanty, 1+, 2+, and 3+) and for any significant difference with respect to age, gender, and polymorphism, but we did not find any significant difference or association.

Healthy controls were also tested for PTB, and none of them showed any sign of AFB smear or culture positivity. Out of 248 HCs, 104 could be inoculated with PPD (purified protein derivative), of whom 32.69% were found to be PPD-positive and 67.30% PPD-negative. PPD-positive HCs were followed for the duration of the study, and none of them developed the active form of PTB.
### 3.2. Genotypic Analysis of CCL2-2518 A>G (rs1024611) and -362 G>C (rs2857656) Single Nucleotide Polymorphisms
The CCL2-2518 A>G polymorphism was analyzed in 373 PTB cases and 248 HCs. The genotype and allele frequencies were in Hardy-Weinberg equilibrium in both cases and controls (p>0.05). The male : female ratio was significantly higher in PTB cases than in healthy controls; however, the genotype frequencies for both CCL2-2518 A>G and -362 G>C did not differ significantly between males and females (p=0.81 and 0.93). On comparing the genotype frequencies of PTB cases and healthy controls, a significant difference was observed for the heterozygous AG genotype, which was significantly higher in controls (0.43, p=0.003), and for the homozygous GG genotype, which was significantly higher in PTB patients (0.11, p=0.004). The A allele was the major allele, with frequencies of 0.73 and 0.72 in cases and controls, respectively; these did not differ significantly (p value = 0.79). Having observed the significant difference in genotype frequencies, we analyzed the association of the genotypes with disease under various models and observed that, in the overdominant model, the heterozygous AG genotype provided resistance against PTB [OR=0.60 (95%CI=0.43-0.84), p value = 0.003]. On the other hand, in the recessive model, the homozygous recessive GG genotype showed a nearly twofold risk of developing the disease [OR=1.97 (95%CI=1.06-3.64), p value = 0.02; M-H estimate after adjusting for sex, OR=2.07 (CI=1.10-3.91)] (Table 2).

The CCL2-362 G>C polymorphism was analyzed in 330 PTB cases and 235 HCs. The polymorphism was in Hardy-Weinberg equilibrium in both PTB cases and healthy controls (p>0.05). A significant difference in the frequencies of the homozygous CC and heterozygous GC genotypes was observed for the CCL2-362 G>C polymorphism (rs2857656) between PTB cases and healthy controls. The frequency of the homozygous CC genotype at locus -362 G>C was significantly higher in PTB cases (0.13) than in healthy controls (0.07), whereas the frequency of the heterozygous GC genotype was higher in healthy controls (0.45) than in PTB cases (0.35). On performing association analysis under the overdominant model, we found the heterozygous GC genotype to provide protection against PTB [OR=0.65 (95%CI=0.46-0.92), p value = 0.01], whereas, under the recessive model, the homozygous CC genotype was associated with disease susceptibility [OR=1.87 (CI=1.03-3.38), p value = 0.03] (Table 2). The PTB cases were further categorized on the basis of bacterial load (scanty, 1+, 2+, and 3+) and gender, but we did not find any significant difference or association on categorization with either of the two polymorphisms or their genotypes.
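The dominant, overdominant, and recessive contrasts above are simple collapses of the three genotype counts into 2×2 tables. The sketch below (a hypothetical helper written for illustration, not the study's analysis script) shows this construction for the -362 G>C counts of Table 2; the resulting crude odds ratios match those reported.

```python
# Forming the genetic-model 2x2 contrasts from raw genotype counts
# (-362 G>C counts from Table 2).
cases = {"GG": 174, "GC": 114, "CC": 42}      # N = 330 PTB cases
controls = {"GG": 113, "GC": 105, "CC": 17}   # N = 235 healthy controls

def model_table(case_counts, control_counts, risk_genotypes):
    """Return (a, b, c, d) for the chosen 'risk' genotypes vs all others."""
    a = sum(case_counts[g] for g in risk_genotypes)
    b = sum(control_counts[g] for g in risk_genotypes)
    c = sum(case_counts.values()) - a
    d = sum(control_counts.values()) - b
    return a, b, c, d

for name, risk in [("dominant (GC+CC)", {"GC", "CC"}),
                   ("overdominant (GC)", {"GC"}),
                   ("recessive (CC)", {"CC"})]:
    a, b, c, d = model_table(cases, controls, risk)
    print(name, round((a * d) / (b * c), 2))  # 0.83, 0.65, 1.87, as in Table 2
```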
### 3.3. Haplotype Frequency and Linkage Disequilibrium Analysis of -2518 A>G (rs1024611) and -362 G>C (rs2857656) Loci of CCL2 among PTB Cases and Healthy Controls
A total of 1090 chromosomes were studied for the haplotype analysis of these two loci on chromosome 17q11-17q12. All four haplotypes were present in the studied population. In PTB cases, the frequency of the AG haplotype (0.70) was higher than that of the GC (0.25), AC (0.04), and GG (0.01) haplotypes. Among the healthy controls, the frequency of the AG haplotype (0.68) was likewise higher than that of the GC (0.28), AC (0.004), and GG (0.002) haplotypes. The frequency of the AC haplotype was significantly higher in PTB cases (0.046) than in HCs (0.004) [OR=7.23 (CI=1.74-30.09), p value = 0.006] (Table 3). A strong LD was observed between the two sites (D′=0.961, p value = 0.00), indicating that the studied loci were in linkage disequilibrium with each other in the present population.
**Table 2.** Genotype analysis of CCL2-2518 A>G and -362 G>C polymorphisms in PTB cases and healthy controls.

| Genotype/allele | PTB cases, No. (frequency) | Healthy controls, No. (frequency) | χ2 (df) | OR (95% CI) | p value |
|---|---|---|---|---|---|
| **-2518 A>G** | N = 373 | N = 248 | | | |
| AA | 214 (0.57) | 126 (0.51) | | | |
| AG | 117 (0.31) | 107 (0.43) | | | |
| GG | 42 (0.11) | 15 (0.06) | 11.31 (2) | | 0.004 |
| A allele | 545 (0.73) | 359 (0.72) | | | |
| G allele | 201 (0.27) | 137 (0.28) | 0.07 (1) | 0.97 (0.74-1.24) | 0.79 |
| MH-OR (adj. for sex) | | | 0.02 (1) | 0.98 (0.75-1.27) | 0.88 |
| Dominant model: AA | 214 (0.57) | 126 (0.51) | | | |
| AG+GG | 159 (0.43) | 122 (0.49) | 2.59 (1) | 0.76 (0.55-1.06) | 0.10 |
| MH-OR (adj. for sex) | | | 2.32 (1) | 0.77 (0.55-1.07) | 0.12 |
| Overdominant model: AA+GG | 256 (0.69) | 141 (0.57) | | | |
| AG | 117 (0.31) | 107 (0.43) | 8.95 (1) | 0.60 (0.43-0.84) | 0.003 |
| MH-OR (adj. for sex) | | | 8.81 (1) | 0.60 (0.42-0.84) | 0.003 |
| Recessive model: AA+AG | 331 (0.89) | 233 (0.94) | | | |
| GG | 42 (0.11) | 15 (0.06) | 4.85 (1) | 1.97 (1.06-3.64) | 0.028 |
| MH-OR (adj. for sex) | | | 5.35 (1) | 2.07 (1.10-3.91) | 0.02 |
| **-362 G>C** | N = 330 | N = 235 | | | |
| GG | 174 (0.53) | 113 (0.48) | | | |
| GC | 114 (0.35) | 105 (0.45) | | | |
| CC | 42 (0.13) | 17 (0.07) | 8.18 (2) | | 0.017 |
| G allele | 462 (0.70) | 331 (0.70) | | | |
| C allele | 198 (0.30) | 139 (0.30) | 0.02 (1) | 1.02 (0.78-1.3) | 0.87 |
| MH-OR (adj. for sex) | | | 0.05 (1) | 1.03 (0.79-1.33) | 0.82 |
| Dominant model: GG | 174 (0.53) | 113 (0.48) | | | |
| GC+CC | 156 (0.47) | 122 (0.52) | 1.18 (1) | 0.83 (0.59-1.16) | 0.27 |
| MH-OR (adj. for sex) | | | 1.13 (1) | 0.83 (0.59-1.16) | 0.28 |
| Overdominant model: GG+CC | 216 (0.65) | 130 (0.55) | | | |
| GC | 114 (0.34) | 105 (0.45) | 5.93 (1) | 0.65 (0.46-0.92) | 0.015 |
| MH-OR (adj. for sex) | | | 6.15 (1) | 0.64 (0.45-0.91) | 0.013 |
| Recessive model: GG+GC | 288 (0.87) | 218 (0.93) | | | |
| CC | 42 (0.13) | 17 (0.07) | 4.42 (1) | 1.87 (1.03-3.38) | 0.035 |
| MH-OR (adj. for sex) | | | 4.89 (1) | 1.92 (1.06-3.47) | 0.027 |
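For readers unfamiliar with the normalized LD coefficient, the following short computation (illustrative only; it uses the rounded pooled haplotype frequencies quoted above, so the result only approximates the reported value) shows how D′ is obtained from the four haplotype frequencies.

```python
# Normalized LD coefficient D' from haplotype frequencies.
# Haplotypes are (-2518 allele, -362 allele); rounded frequencies from Section 3.3.
hap = {("A", "G"): 0.70, ("G", "C"): 0.25, ("A", "C"): 0.04, ("G", "G"): 0.01}

p_a = hap[("A", "G")] + hap[("A", "C")]       # frequency of -2518 A
p_g = hap[("A", "G")] + hap[("G", "G")]       # frequency of -362 G

d = hap[("A", "G")] - p_a * p_g               # raw LD coefficient D
d_max = (min(p_a * (1 - p_g), (1 - p_a) * p_g) if d >= 0
         else min(p_a * p_g, (1 - p_a) * (1 - p_g)))
print(round(d / d_max, 3))  # ~0.95 with rounded inputs; reported D' = 0.961
```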
### 3.4. Serum Analysis of CCL2, IL-12p70, IFN-γ, TNF-α, and TGF-β
CCL2/MCP-1, IL-12p70, IFN-γ, TNF-α, and TGF-β were measured in serum samples of 120 pulmonary TB patients and 54 healthy controls. The CCL2 level was significantly elevated in PTB patients compared to healthy controls (p<0.005) (Figure 1) and varied among the CCL2 variants in PTB patients (Spearman corr=−0.225, p=0.01). A significantly higher mean level of serum CCL2 was observed in cases with the -2518 homozygous AA genotype (352.8 pg/ml) compared to cases with the -2518 heterozygous AG genotype (183.3 pg/ml) and the -2518 homozygous GG genotype (119.3 pg/ml) (p=0.03).
**Figure 1.** Comparative analysis of the serum CCL2 level in PTB patients and healthy controls. Serum CCL2 was measured by sandwich ELISA using DuoSet kits (R&D Systems, USA) in pulmonary TB patients and controls. The level is expressed as pg/ml on the Y-axis and represented as a box-and-whisker plot. Each dot above a vertical box plot represents the outside value of one subject. Each box has whiskers on both sides extending to the upper and lower adjacent values, respectively; the box shows the 75th percentile, median, and 25th percentile values from the upper hinge to the lower hinge, respectively. Subjects are represented on the X-axis. HC: healthy controls; PTB: pulmonary TB patients; AA: subjects with the CCL2-2518AA genotype; AG: subjects with the CCL2-2518AG genotype; GG: subjects with the CCL2-2518GG genotype. (a) Comparison of the CCL2 level between HC and PTB patients. (b) Comparison of the CCL2 level among PTB patients having various CCL2 genotypes. (c) Comparison of the CCL2 level among HC having various genotypes. Pairwise comparisons were made by the Wilcoxon rank-sum test, and comparisons of groups by the Kruskal-Wallis equality-of-populations rank test. p values are shown above the box plots.
The serum IL-12p70 level was significantly higher in PTB patients than in healthy controls (p<0.0001) (Figure 2) and differed significantly among subjects having various genotypes of the CCL2-2518 variant (p<0.05); it was significantly higher in PTB patients with the homozygous AA and heterozygous AG genotypes than in those with the homozygous GG genotype.
**Figure 2.** Comparative analysis of the serum IL-12p70 level in PTB patients and healthy controls. Serum IL-12p70 was measured by sandwich ELISA using DuoSet kits (R&D Systems, USA) in pulmonary TB patients and controls. The level is expressed as pg/ml on the Y-axis and represented as a box-and-whisker plot. Each dot above a vertical box plot represents the outside value of one subject. Each box has whiskers on both sides extending to the upper and lower adjacent values, respectively; the box shows the 75th percentile, median, and 25th percentile values from the upper hinge to the lower hinge, respectively. Subjects are represented on the X-axis. HC: healthy controls; PTB: pulmonary TB patients; AA: subjects with the CCL2-2518AA genotype; AG: subjects with the CCL2-2518AG genotype; GG: subjects with the CCL2-2518GG genotype. (a) Comparison of the IL-12p70 level between HC and PTB patients. (b) Comparison of the IL-12p70 level among PTB patients having various CCL2 genotypes. (c) Comparison of the IL-12p70 level among HC having various genotypes. Pairwise comparisons were made by the Wilcoxon rank-sum test, and comparisons of groups by the Kruskal-Wallis equality-of-populations rank test. p values are shown above the box plots.
The serum CCL2 level was significantly positively correlated with the serum level of IL-12p70 in healthy controls as well as in PTB cases. On analysis with reference to the specific genotypes of the CCL2-2518 A>G variant, the serum CCL2 level was significantly positively correlated with serum IL-12p70 in healthy controls with the homozygous AA genotype (Spearman r=0.79, p=0.000) and the heterozygous AG genotype (r=0.68, p=0.0009), whereas in PTB cases the serum CCL2 level was significantly positively correlated with serum IL-12p70 in the homozygous GG genotype only (Spearman r=0.51, p=0.0008).

The Th1 cytokines IFN-γ and TNF-α were also analyzed in relation to the CCL2-2518 A>G variant and correlated with serum CCL2; a significant positive correlation was found.

Regression analysis of CCL2 with all these cytokines revealed that the level of serum CCL2 was significantly correlated with the levels of serum IL-12p70 (regression coefficient 0.37, p=0.048) and IFN-γ (regression coefficient = 1.69, p=0.00) in healthy controls.
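The correlation and regression analyses of this section were run in STATA; an equivalent computation in Python is sketched below for illustration (the input arrays are synthetic stand-ins, not the study data).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ccl2 = rng.gamma(shape=2.0, scale=150.0, size=54)        # synthetic pg/ml values
il12p70 = 0.4 * ccl2 + rng.normal(0.0, 40.0, size=54)    # synthetic response

rho, p_spearman = stats.spearmanr(ccl2, il12p70)         # Spearman rank correlation
reg = stats.linregress(ccl2, il12p70)                    # simple linear regression

print(f"Spearman rho = {rho:.2f} (p = {p_spearman:.4f})")
print(f"regression coefficient = {reg.slope:.2f} (p = {reg.pvalue:.4f})")
```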
## 4. Discussion
The present study explored the genetic frequencies ofCCL2-2518 A>G and -362 G>C polymorphisms in a north Indian population from Agra, India. Pulmonary TB cases and healthy controls were recruited keeping in mind their similar environmental exposure and socioeconomic background. Although a number of reports have been published suggesting the role of CCL2 gene polymorphism in different populations, the findings are often contradictory based on the ethnicity [23], population [24], and type of tuberculosis [25, 26]. Other studies reported from India are based on a south Indian population [27] and in tribal population [15]. Our study is based on a population from the northern part of India, and no report on the CCL2 gene polymorphism is available for this region in relation to tuberculosis. The participants for this study were included between 2007 and 2012, as tuberculosis is prevalent in this part which was the persuasive situation for this study to be carried out. In comparison to other studies, we have attempted to partially address the functional relevance of this polymorphism with reference to its possible regulatory role in cytokine levels. In our observations -2518A allele and -2518AA genotype are noted to be predominant allele and genotype, respectively, in the present population. Flores-Villanueva et al., in a Mexican population, reported -2518 G allele and GG genotype to be the major allele and genotype, respectively, in their study [10]. They have reported in their study a 5.4- and 6.9-fold increased risk of developing TB in the carrier of GG genotype in a Mexican and Korean population, respectively; we also found the same, i.e., association of CCL2-2518 GG genotype with susceptibility to TB (p=0.02) in the recessive model (Table 2). Our result of AG genotype was totally opposite as reported by them where they found 2.3- and 2.8-fold increased risk of developing the disease in a Mexican and Korean population, respectively, and we found in the present study that -2518 AG was providing resistance to the disease (p value = 0.003) (Table 2). Other population studies too have reported association with G allele of -2518 polymorphism; Gong et al. [14] have reported in their meta-analysis that G allele of -2518 polymorphism is a risk factor for TB in Asian and American population but not in African population. Thye et al. [11] and Arji et al. [13] have reported the protective role of G allele of -2518 polymorphism in a Ghanian and Moroccan population, respectively. A study conducted on a Sahariya tribe (the tribe is reported to have high TB prevalence) from India did not report any association with CCL2-2518A/G polymorphism [15]. Another study from the mainland of India, on a South Indian population, reported association of CCL2-2518 GG genotype with protection against PTB in males but in contrast, at the same time, they reported it to be susceptible for developing the disease in females [16]. In the present study, we found the GG genotype of -2518 to be having approximately 2-fold increased risk of developing PTB.Table 3
Haplotype analysis for -2518 A>G and -362 G>C polymorphism in PTB cases and controls.
| # | -2518 A>G SNP | -362 G>C SNP | PTB cases (N=329) | Controls (N=216) | OR (95% CI) | p value |
|---|---|---|---|---|---|---|
| 1 | A | G | 0.706 | 0.688 | 1 | — |
| 2 | G | C | 0.254 | 0.287 | 0.94 (0.72-1.23) | 0.66 |
| 3 | A | C | 0.046 | 0.004 | 7.23 (1.74-30.09) | 0.006 |
| 4 | G | G | 0.011 | 0.002 | 4.12 (0.51-33.25) | 0.18 |

SNP: single nucleotide polymorphism; N: total number of subjects; p: p value; OR: odds ratio; CI: confidence interval.

The other variant of CCL2 reported worldwide is -362 G>C. In the present study, we found the CCL2-362 GC genotype to be significantly more frequent among healthy controls (p=0.01) than among PTB cases, suggesting a protective role of this genotype against PTB, whereas the homozygous -362 CC genotype was found to be a risk genotype (Table 2). This contrasts with the report by Thye et al. [11], in which both CC and CG genotypes were overrepresented in healthy controls of a Ghanaian population and the -362 C allele was associated with protection against TB. Mishra et al. [15] could not find a significant difference in allele or genotype frequencies of the -362 G>C polymorphism between the PTB cases and healthy controls of the primitive tribal group "Saharia," although the GC genotype was more frequent in controls. Velez Edwards et al. [28] also found no evidence of an association of the -362 G>C polymorphism with PTB in Guinea-Bissau, Gambian, and African-American populations. In a meta-analysis of five case-control studies from five ethnicities, Thye et al. reported significant heterogeneity in the association of the -362 G>C polymorphism with PTB [11]. Ethnic variation could be the reason for these differing observations.

Our study subjects were drawn from a population of multireligion communities residing near Agra in Uttar Pradesh and nearby states. Here, we found the GG genotype of CCL2-2518 A>G to be overrepresented in PTB patients, while the AG genotype was more frequent in healthy controls (both tuberculin-positive and tuberculin-negative) than in PTB patients. There were no differences in genotype frequencies between tuberculin-positive and tuberculin-negative individuals, and the frequency of PPD(+) and PPD(-) individuals was the same as the national frequency. It is noteworthy that the heterozygous genotype provided protection from the disease: both polymorphisms, -2518 A>G and -362 G>C, conferred protection in the heterozygous condition. Heterozygous protection is represented by the overdominance model, which states that a polymorphism is maintained because heterozygous individuals are able to recognize a wider variety of parasites [29]. India being a TB-endemic region, we can speculate that this heterozygous effect played some role in local adaptation against TB, as previously described by Sinha et al. [30]. To substantiate our hypothesis, we further analyzed the functional aspect of CCL2 in serum. We detected a significantly higher level of serum CCL2 in PTB patients compared with healthy controls, in accordance with the findings of earlier studies [4, 5]. In contrast to their observations, our findings indicate a strong association of the serum CCL2 level with the various genotypes of the -2518 A>G polymorphism in PTB cases: cases with the -2518 AA genotype showed a significantly higher serum CCL2 level than cases with the -2518 AG and -2518 GG genotypes. Flores-Villanueva et al. [10] and Rovin et al. [31] had noted a higher level of CCL2 and a lower serum IL-12 level in -2518 GG patients.
We observed significantly higher serum levels of IL-12p70 and IFN-γ in PTB cases compared with healthy controls. The IL-12p70 level was positively correlated with the CCL2 level in both groups of study subjects. The CCL2 level was significantly correlated with IL-12, IFN-γ, and TNF-α in PTB cases with the -2518 GG genotype. Thus, a lower level of CCL2 could be responsible for low levels of IL-12 and IFN-γ, which appear to be protective cytokines conferring resistance to TB infection. This could explain the susceptibility associated with the -2518 GG genotype in the present population: cases with this genotype produced less CCL2 and also less IL-12p70, resulting in increased susceptibility to the disease. Hence, subjects with the -2518 GG genotype are more likely to be prone to M. tuberculosis infection in the present population, and their lower level of IL-12p70 could possibly account for their lower immunity. Additionally, healthy people with the AG genotype, showing an intermediate level of IL-12p70, probably regulate the CCL2 level in some way that confers a protective effect against M. tuberculosis infection in the present population. Furthermore, for the first time, the serum levels of other important cytokines were analyzed with reference to the various CCL2-2518 A>G genotypes in the present study. PTB subjects with the -2518 GG genotype had a lower level of IFN-γ, presumably due to the lower levels of CCL2 and IL-12p70. The TGF-β level was significantly higher in healthy controls compared with PTB cases; the higher TGF-β level in healthy subjects with the -2518 AG genotype is suggestive of a regulatory role in providing protection against TB infection.

We explored the correlation of the serum CCL2 level with IL-12p70, IFN-γ, TNF-α, and TGF-β in subjects with the -2518 A>G variants. Regression analysis suggested that the serum CCL2 concentration regulates the IL-12p70 concentration and, more strongly, the IFN-γ concentration. Stratification by genotype indicated that healthy subjects with the -2518 AA or AG genotype and a high level of CCL2 probably positively regulate the production of the key cytokines IL-12p70 and IFN-γ, which could be the reason for their protective immunity. On the other hand, PTB cases with the CCL2-2518 GG genotype had a lower concentration of CCL2, lowering the production of IFN-γ and rendering them susceptible to infection through a poor Th1 response. An earlier report by Velez Edwards et al. [28] observed an interaction between one of the CCL2 and IL12B polymorphisms in Africans, with the opposite effect.

The general assumption has been that higher CCL2 promotes a Th2 response and suppresses the Th1 response. The present study showed for the first time that CCL2 is positively correlated with IL-12 and Th1 cytokines such as IFN-γ and TNF-α and hence is essential for a proper Th1 response against tuberculosis. The -2518 G allele is responsible for lower production of CCL2, which leads to lower Th1 cytokine levels and hence a defective Th1 response that makes the host susceptible to tuberculosis. Divergent observations about genetic associations with disease across ethnically different populations are well reported. The same nucleotide polymorphism can act differently in different environmental setups to affect susceptibility; relevant factors include the duration of exposure to infectious agents, the nutritional status of individuals, and other epigenetic factors.
The varied observations could also be due to genetic differences among populations and the relatively small size of the database. The C-C motif chemokine ligand 2 (CCL2) is a member of the small inducible gene (SIG) family. CC chemokines are characterized by two adjacent cysteine residues close to the amino terminus of the molecule. They are involved in the recruitment of lymphocytes and monocytes and control the migration of these cells to sites of cell injury and cellular immune reactions [32]. CCL2 is produced by different cell types in response to microbial stimuli [33]. Because the polymorphisms studied in the present population could influence each other, haplotype analysis becomes important. In the present study, the AG haplotype was found to be the most predominant haplotype in both PTB cases and healthy controls. Interestingly, on haplotype analysis, the AC haplotype was found to confer susceptibility to tuberculosis (Table 3). The two polymorphisms (-2518 A>G and -362 G>C) were in linkage disequilibrium, with a strong D′ between them. Thye et al. [11] showed through interaction analysis that the CCL2-362 G>C variant exclusively explains the observed association with resistance to TB, whereas the CCL2-2518 A>G variant was not independent of -362 G>C. These observations suggest that the haplotype block consisting of these two polymorphisms, which are in strong LD, has been jointly inherited in the present population and exerts a certain effect on the development of tuberculosis under the prevailing environmental factors. Intemann et al. [34] reported that the haplotype comprising -2518G/-362C/int1del554-567 confers stronger protection than the -362 G>C variant alone; these haplotype variants result in decreased CCL2 expression and decreased risk of TB. Ganachari et al. [12] reported that the haplotype consisting of CCL2-2518 GG together with MMP-1607 GG increases the risk of developing TB 3.5-fold in Mexican and 3.9-fold in Peruvian populations. As the AC haplotype of CCL2-2518 A>G and -362 G>C was observed to confer susceptibility to M. tuberculosis infection in this region, polymorphisms in these regulatory regions may function in a complex manner, influencing immune function and disease susceptibility. An in-depth immunological analysis is needed to understand this complex interaction of the polymorphisms in the present population. The study has its own limitations, such as a comparatively small data set; the functional relevance of these polymorphisms with respect to immune responses to tuberculosis remains to be addressed in a more systematic manner.
## 5. Conclusion
This study reported a significant association of the CCL2-2518 GG and -362 CC genotypes with tuberculosis. The heterozygous CCL2-2518 AG and -362 GC genotypes were associated with resistance against PTB. The biallelic AC haplotype (CCL2-2518 A>G and -362 G>C) was noted to be a susceptibility haplotype for pulmonary tuberculosis. The serum cytokine analysis suggested a complex regulatory mechanism among the CCL2/MCP-1, IL-12p70, and IFN-γ concentrations. CCL2 showed a positive correlation with IL-12, IFN-γ, and TNF-α, and a normal CCL2 level is essential for a normal Th1 response. The -2518 G allele produces less CCL2 than the A allele, which leads to an improper Th1 response and makes the host susceptible to tuberculosis. In the present study, we have tried to unravel the function of the promoter region of the CCL2 gene, but an in-depth study is warranted to further understand the complex interaction of the polymorphisms in the regulatory region of CCL2.
---
*Source: 1019639-2020-12-17.xml*
# Codynamics of Four Variables Involved in Dengue Transmission and Its Control by Community Intervention: A System of Four Difference Equations
**Authors:** T. Awerbuch-Friedlander; Richard Levins; M. Predescu
**Journal:** Discrete Dynamics in Nature and Society
(2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/101965
---
## Abstract
In the case of Dengue transmission and control, the interaction of nature and society is captured by a system of difference equations. For the purpose of studying the dynamics of these interactions, four variables involved in a Dengue epidemic, the proportion of infected people ($P$), the number of mosquitoes involved in transmission ($M$), mosquito habitats ($H$), and population awareness ($A$), are linked in a system of difference equations: $P_{n+1} = aP_n + (1-e^{-iM_n})(1-P_n)$, $M_{n+1} = lM_n e^{-A_n} + bH_n(1-e^{-M_n})$, $H_{n+1} = cH_n/(1+pA_n) + 1/(1+qA_n)$, and $A_{n+1} = rA_n + fP_n$, $n = 0, 1, \ldots$. The constraints have socioecological meaning. The initial conditions are such that $0 \le P_0 \le 1$ and $(M_0, H_0, A_0) \ge (0,0,0)$; the parameters $l, a, c, r \in (0,1)$, and the parameters $f$, $i$, $b$, and $p$ are positive. The paper is concerned with the analysis of solutions of the above system for $p = q$. We studied the global asymptotic stability of the degenerate equilibrium. We also propose extensions of the above model and some open problems. We explored the role of memory in community awareness by numerical simulations. When the memory parameter is large, the proportion of infected people decreases and stabilizes at zero. Below a critical point we observe periodic oscillations.
---
## Body
## 1. Introduction
The response to an epidemic is triggered by awareness of a coming epidemic or of an existing one. The response is aimed at reducing the incidence of the actual disease. In the case of Dengue fever, the disease is caused by a virus transmitted by the bite of a mosquito, usually Aedes aegypti. The mosquitoes deposit eggs in small containers of water. These hatch to produce larvae, some of which transform into pupae and then adult mosquitoes. The breeding sites may be ephemeral, such as water in an empty beer can or used tire, an animal drinking trough near a human habitation, or water stored indoors in large containers [1, 2].

The information about a Dengue epidemic can come from the number of reported cases of Dengue, the abundance of mosquitoes, the number of breeding sites for mosquitoes, or some other indicator, such as rainfall, that predicts breeding sites. The information triggers consciousness, and the response can be individual and/or at the community level. In previous work, we studied the dynamics of a discrete-time system in which awareness was modeled as a factor triggered by the formation of potential breeding sites, and the response was aimed at eliminating them; the system was studied as a pair of difference equations [3]. By expanding the model to introduce an ongoing educational program, our new model predicted that high consciousness over time kept the number of breeding sites low [4]. In a study with three difference equations, we examined a system in which the information is related to the number of adult mosquitoes: the more mosquitoes, the greater the awareness of the population, and this awareness leads to action to reduce the mosquito population by controlling breeding sites [5]. This population awareness is prompted and dissipates at a rate determined by the abundance of mosquitoes, similar to a birth-and-death process. The dynamics, then, is that mosquitoes are produced when adult females locate breeding sites and deposit eggs that develop into adult mosquitoes, and mosquitoes die at a rate depending on their own biology and on environmental conditions resulting from control measures implemented as awareness rises. Thus the pair of variables, mosquitoes and awareness, is linked in a negative feedback loop in a system of equations in which decay due to control was modeled with a rational fractional term at the environmental level. With another system of three difference equations we explored an intervention by spraying mosquitoes [6]. Changing the spraying parameter resulted in almost periodic behavior and fluctuations in the mosquito populations. Simulations show that alertness in consciousness, achieved by keeping the memory parameter of the previous week high, has an impact on the behavior of solutions and implicitly on the number of mosquitoes: when the memory parameter is high, there is a steady decrease in the number of mosquitoes.

The present study builds upon the previous models. We present a system of four difference equations, with the proportion of infected people as an additional variable that prompts consciousness:
$$\begin{aligned}
P_{n+1} &= aP_n + \left(1-e^{-iM_n}\right)\left(1-P_n\right),\\
M_{n+1} &= lM_n e^{-gA_n} + bH_n\left(1-e^{-sM_n}\right),\\
H_{n+1} &= \frac{cH_n}{1+pA_n} + \frac{d}{1+pA_n},\\
A_{n+1} &= rA_n + fP_n, \qquad n = 0, 1, \ldots
\end{aligned} \tag{1}$$
This discrete system links the proportion of infected people ($P_n$), mosquitoes ($M_n$), habitats ($H_n$), and awareness ($A_n$). The initial conditions are such that $0 \le P_0 \le 1$ and $(M_0, H_0, A_0) \ge (0,0,0)$; the parameters $l, a, c, r \in (0,1)$, and the parameters $f$, $i$, $b$, and $p$ are positive. The current system represents a modification of the system in [5].

The first equation describes the proportion of infected people (between 0 and 1). They prompt consciousness, while the intervention is against mosquitoes and perhaps habitats. In the relationships among the variables, the awareness is prompted by the proportion of sick people. The control of adult mosquitoes by spraying and of habitats is carried out by community intervention.

The parameter $i$ is related to the behavior of infected mosquitoes, and it can be viewed as a transmission rate. An explanation of the term $(1-e^{-iM_n})$ goes as follows. If $Q$ represents the probability that a mosquito transmits the infection, then $1-Q$ is the probability that it does not transmit the infection. Therefore, $(1-Q)^{M_n}$ is the probability that none of $M_n$ mosquitoes transmits the infection. One can rewrite
$$(1-Q)^{M_n} = e^{\ln\left((1-Q)^{M_n}\right)} = e^{M_n \ln(1-Q)}. \tag{2}$$
We denote $i = -\ln(1-Q) > 0$.

One can observe that if $0 \le P_0 \le 1$ then $P_1 \le 1$. This is true because
$$P_1 = aP_0 + \left(1-e^{-iM_0}\right)\left(1-P_0\right) \le aP_0 + \left(1-P_0\right) \le 1. \tag{3}$$
It follows by induction that $0 \le P_n \le 1$. Also, if $(M_0, H_0, A_0) \ge (0,0,0)$ then $(M_n, H_n, A_n) \ge (0,0,0)$. Thus, if $(P_0, M_0, H_0, A_0) \ge (0,0,0,0)$ then $(P_n, M_n, H_n, A_n) \ge (0,0,0,0)$.

By using a series of transformations, one can rescale the parameters $g$, $s$, and $d$ in (1). We use the following changes of variables: $M_n = (1/s)m_n$ (in the first and second equations), $A_n = (1/g)a_n$ (in the third and fourth equations), and $H_n = dh_n$. These transformations do not change the nature of the parameters $a$, $c$, $l$, and $r$, which remain between 0 and 1. Thus, after relabeling the variables and parameters, one can work with the simplified system below (it is this system that will be analyzed in the next sections):
$$\begin{aligned}
P_{n+1} &= aP_n + \left(1-e^{-iM_n}\right)\left(1-P_n\right),\\
M_{n+1} &= lM_n e^{-A_n} + bH_n\left(1-e^{-M_n}\right),\\
H_{n+1} &= \frac{cH_n}{1+pA_n} + \frac{1}{1+pA_n},\\
A_{n+1} &= rA_n + fP_n, \qquad n = 0, 1, \ldots
\end{aligned} \tag{4}$$
In the sequel, we look at boundedness properties, local and global asymptotic stability of equilibria. Numerical simulations, open problems, and further directions of improvement will be mentioned.
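A direct way to explore these dynamics is to iterate system (4) numerically. The sketch below is a minimal Python implementation; the parameter values mirror those reported later for Figure 2, while the initial conditions are assumptions chosen only to satisfy the stated constraints.

```python
import numpy as np

def iterate_system(a, b, c, l, r, i, p, f, P0, M0, H0, A0, n_steps=200):
    """Iterate the rescaled system (4) and return the four trajectories."""
    P = np.empty(n_steps + 1); M = np.empty(n_steps + 1)
    H = np.empty(n_steps + 1); A = np.empty(n_steps + 1)
    P[0], M[0], H[0], A[0] = P0, M0, H0, A0
    for n in range(n_steps):
        P[n + 1] = a * P[n] + (1 - np.exp(-i * M[n])) * (1 - P[n])
        M[n + 1] = l * M[n] * np.exp(-A[n]) + b * H[n] * (1 - np.exp(-M[n]))
        H[n + 1] = c * H[n] / (1 + p * A[n]) + 1 / (1 + p * A[n])
        A[n + 1] = r * A[n] + f * P[n]
    return P, M, H, A

# Parameters as in Figure 2; initial conditions are assumed for illustration.
P, M, H, A = iterate_system(a=0.5, b=5, c=0.6, l=0.5, r=0.97, i=0.5, p=0.5, f=20,
                            P0=0.1, M0=1.0, H0=1.0, A0=0.0)
print("final proportion infected:", P[-1])
```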
## 2. Boundedness of Solutions
Lemma 1.
Let $\{P_n, M_n, H_n, A_n\}_{n \ge 0}$ be a positive solution of system (4), with parameters such that $0<l<1$, $0<a<1$, $0<c<1$, and $0<r<1$. Then $\limsup_{n\to\infty} P_n \le \frac{1}{1-a}$, $\limsup_{n\to\infty} M_n \le \frac{b}{(1-l)(1-c)}$, $\limsup_{n\to\infty} H_n \le \frac{1}{1-c}$, and $\limsup_{n\to\infty} A_n \le \frac{f}{(1-a)(1-r)}$.

Proof.

The first equation of system (4) gives $P_{n+1} \le aP_n + (1-P_n) \le aP_n + 1$. Thus $\limsup_{n\to\infty} P_n \le \frac{1}{1-a}$, and then for any positive number $\epsilon_p$ there exists $N$ sufficiently large such that, for all $n \ge N$,

$$P_{n+1} < \frac{1}{1-a} + \epsilon_p. \tag{5}$$

Making use of (5) in the fourth equation, we get

$$A_{n+1} \le rA_n + f\left(\frac{1}{1-a} + \epsilon_p\right). \tag{6}$$

Since $0<r<1$, we obtain $\limsup_{n\to\infty} A_n \le \frac{f}{(1-a)(1-r)}$, and then for any positive number $\epsilon_a$ there exists $N$ sufficiently large such that, for all $n \ge N$,

$$A_{n+1} < \frac{f}{(1-a)(1-r)} + \epsilon_a. \tag{7}$$

The third equation of (4) yields $H_{n+1} \le cH_n + 1$, which combined with $0<c<1$ gives $\limsup_{n\to\infty} H_n \le \frac{1}{1-c}$. Thus, for any positive number $\epsilon_h$ there exists $N$ sufficiently large such that, for all $n \ge N$,

$$H_{n+1} < \frac{1}{1-c} + \epsilon_h. \tag{8}$$

Finally, (8) and $M_{n+1} \le lM_n + bH_n \le lM_n + b\left(\frac{1}{1-c} + \epsilon_h\right)$ produce $\limsup_{n\to\infty} M_n \le \frac{b}{(1-l)(1-c)}$. Thus, for any positive number $\epsilon_m$ there exists $N$ sufficiently large such that, for all $n \ge N$,

$$M_{n+1} < \frac{b}{(1-l)(1-c)} + \epsilon_m. \tag{9}$$

Some notations that will be used throughout the paper are, in order,

$$\begin{aligned}
\limsup_{n\to\infty} P_n &= S_P, & \liminf_{n\to\infty} P_n &= I_P, & \limsup_{n\to\infty} H_n &= S_H, & \liminf_{n\to\infty} H_n &= I_H,\\
\limsup_{n\to\infty} M_n &= S_M, & \liminf_{n\to\infty} M_n &= I_M, & \limsup_{n\to\infty} A_n &= S_A, & \liminf_{n\to\infty} A_n &= I_A.
\end{aligned} \tag{10}$$
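As a numerical sanity check on Lemma 1, one can compare long-run iterates against the four limsup bounds, reusing the `iterate_system` helper sketched in Section 1 (initial conditions are again assumptions):

```python
a, b, c, l, r, f = 0.5, 5, 0.6, 0.5, 0.97, 20
P, M, H, A = iterate_system(a=a, b=b, c=c, l=l, r=r, i=0.5, p=0.5, f=f,
                            P0=0.1, M0=1.0, H0=1.0, A0=0.0, n_steps=5000)
tail = slice(4000, None)  # discard the transient
print(P[tail].max() <= 1 / (1 - a))              # limsup P_n <= 1/(1-a)
print(M[tail].max() <= b / ((1 - l) * (1 - c)))  # limsup M_n <= b/((1-l)(1-c))
print(H[tail].max() <= 1 / (1 - c))              # limsup H_n <= 1/(1-c)
print(A[tail].max() <= f / ((1 - a) * (1 - r)))  # limsup A_n <= f/((1-a)(1-r))
```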
## 3. Equilibria
Clearly,

$$\left(0,\, 0,\, \frac{1}{1-c},\, 0\right) \tag{11}$$

is an equilibrium point of system (4) for all values of the parameters.

Lemma 2.
(1) Assume that $b \le (1-c)(1-l)$. Then the degenerate equilibrium $\left(0, 0, \frac{1}{1-c}, 0\right)$ is the only equilibrium point.
(2) Assume that $b > (1-c)(1-l)$; then there are two equilibrium points, namely, the degenerate one and a positive one denoted by $(\bar P, \bar M, \bar H, \bar A)$. The positive equilibrium can take the form

$$\left(\frac{1-e^{-i\bar M}}{2-a-e^{-i\bar M}},\;\; \bar M,\;\; \frac{1}{\,1-c+\dfrac{pf\left(1-e^{-i\bar M}\right)}{(1-r)\left(2-a-e^{-i\bar M}\right)}\,},\;\; \frac{f\left(1-e^{-i\bar M}\right)}{(1-r)\left(2-a-e^{-i\bar M}\right)}\right). \tag{12}$$

Proof.
The equilibrium solutions verify the system

$$\begin{aligned}
\bar P &= a\bar P + \left(1-e^{-i\bar M}\right)\left(1-\bar P\right), & \bar M &= l\bar M e^{-\bar A} + b\bar H\left(1-e^{-\bar M}\right),\\
\bar H &= \frac{c\bar H}{1+p\bar A} + \frac{1}{1+p\bar A}, & \bar A &= r\bar A + f\bar P.
\end{aligned} \tag{13}$$

The fourth equation in the above system gives

$$\bar P = \frac{(1-r)\bar A}{f}. \tag{14}$$

Solving for $\bar H$ in the third equation yields

$$\bar H = \frac{1}{1-c+p\bar A}. \tag{15}$$

Combining (15) with the second equation of system (13) produces

$$\left(1-le^{-\bar A}\right)\bar M = \frac{b\left(1-e^{-\bar M}\right)}{1-c+p\bar A}. \tag{16}$$

Replacing (14) in the first equation of the system and multiplying both sides by $f$,

$$(1-a)(1-r)\bar A = \left(1-e^{-i\bar M}\right)\left(f-(1-r)\bar A\right). \tag{17}$$

Since $(1-c+p\bar A) \ne 0$ and $(1-le^{-\bar A}) \ne 0$, (16) can be written in the form

$$\frac{\bar M}{1-e^{-\bar M}} = \frac{b}{\left(1-c+p\bar A\right)\left(1-le^{-\bar A}\right)}. \tag{18}$$

Equation (17) gives

$$\bar A = \frac{f\left(1-e^{-i\bar M}\right)}{(1-r)\left(2-a-e^{-i\bar M}\right)}. \tag{19}$$

Notice that $\left(2-a-e^{-i\bar M}\right) > 0$. Set

$$w(\bar M) = \frac{f\left(1-e^{-i\bar M}\right)}{(1-r)\left(2-a-e^{-i\bar M}\right)}. \tag{20}$$

Notice that

$$\bar A = w(\bar M), \tag{21}$$

where the function $w(M)$ is increasing, with first-order derivative

$$w'(M) = \frac{f(1-a)\,ie^{-iM}}{(1-r)\left(2-a-e^{-iM}\right)^2} > 0 \tag{22}$$

for $M \in (0,\infty)$. Set the real-valued functions

$$\Phi_1(M) = \frac{M}{1-e^{-M}}, \qquad g(A) = \frac{b}{\left(1-c+pA\right)\left(1-le^{-A}\right)}. \tag{23}$$

We have that $\Phi_1(M)$ is an increasing function of $M$, $\Phi_1(0^+) = 1$, and $\Phi_1(\bar M) = g(\bar A)$.

From the above,

$$g(\bar A) = g(w(\bar M)) = (g \circ w)(\bar M), \tag{24}$$

where we denote $\Phi_2$ as

$$\Phi_2(M) = (g \circ w)(M) = g(w(M)). \tag{25}$$

The function $\Phi_2$ is decreasing. Indeed, let $M_1 < M_2$. Since the function $w$ is increasing, one has $w(M_1) < w(M_2)$. But $g$ is a decreasing function, and

$$\Phi_2(M_1) = (g \circ w)(M_1) = g(w(M_1)) > g(w(M_2)) = \Phi_2(M_2). \tag{26}$$

Using that $w(0^+) = 0$, we have that

$$\Phi_2(0^+) = \frac{b}{(1-c)(1-l)}. \tag{27}$$

For (18) to have a unique solution (and thus for the system to have a unique positive solution), one must have $\Phi_1(0^+) < \Phi_2(0^+)$, or equivalently $1 < \frac{b}{(1-c)(1-l)}$, and the proof ends.
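Since $\Phi_1$ is increasing and $\Phi_2$ is decreasing, the positive equilibrium can be located numerically as the root of $\Phi_1(M)-\Phi_2(M)$. A sketch under assumed parameter values (those reported later for Figure 2); the bracket $[10^{-9}, 100]$ is an assumption that happens to enclose the sign change here:

```python
import numpy as np
from scipy.optimize import brentq

a, b, c, l, r, i, p, f = 0.5, 5, 0.6, 0.5, 0.97, 0.5, 0.5, 20

w    = lambda M: f * (1 - np.exp(-i * M)) / ((1 - r) * (2 - a - np.exp(-i * M)))  # eq. (20)
phi1 = lambda M: M / (1 - np.exp(-M))                                             # eq. (23)
phi2 = lambda M: b / ((1 - c + p * w(M)) * (1 - l * np.exp(-w(M))))               # eq. (25)

M_bar = brentq(lambda M: phi1(M) - phi2(M), 1e-9, 100.0)  # unique root: phi1 increases, phi2 decreases
A_bar = w(M_bar)                      # eq. (21)
H_bar = 1 / (1 - c + p * A_bar)       # eq. (15)
P_bar = (1 - r) * A_bar / f           # eq. (14)
print(P_bar, M_bar, H_bar, A_bar)
```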
## 4. Stability of Equilibrium Points
Next we are concerned with the local and global asymptotic stability of the equilibrium points. Notations for our map are as follows:

$$\begin{aligned}
P_{n+1} &= \Theta(P_n, M_n, H_n, A_n) &&\text{with } \Theta(P,M,H,A) = aP + \left(1-e^{-iM}\right)(1-P),\\
M_{n+1} &= g(P_n, M_n, H_n, A_n) &&\text{with } g(P,M,H,A) = lMe^{-A} + bH\left(1-e^{-M}\right),\\
H_{n+1} &= h(P_n, M_n, H_n, A_n) &&\text{with } h(P,M,H,A) = \frac{cH}{1+pA} + \frac{1}{1+pA},\\
A_{n+1} &= \Phi(P_n, M_n, H_n, A_n) &&\text{with } \Phi(P,M,H,A) = rA + fP.
\end{aligned} \tag{28}$$
The Jacobian evaluated at the equilibrium point $(\bar P, \bar M, \bar H, \bar A)$ has the form

$$J(\bar P, \bar M, \bar H, \bar A) = \begin{pmatrix}
a-\left(1-e^{-i\bar M}\right) & \left(1-\bar P\right) ie^{-i\bar M} & 0 & 0\\
0 & le^{-\bar A} + b\bar H e^{-\bar M} & b\left(1-e^{-\bar M}\right) & -l\bar M e^{-\bar A}\\
0 & 0 & \dfrac{c}{1+p\bar A} & -\dfrac{p\left(1+c\bar H\right)}{\left(1+p\bar A\right)^2}\\
f & 0 & 0 & r
\end{pmatrix}. \tag{29}$$
Using the third equilibrium equation, $1+c\bar H = \bar H(1+p\bar A)$; thus $-p(1+c\bar H)/(1+p\bar A)^2 = -p\bar H/(1+p\bar A)$. The characteristic equation associated with $(\bar P, \bar M, \bar H, \bar A)$ is given by the fourth-order polynomial

$$\left[a-\left(1-e^{-i\bar M}\right)-\lambda\right]\left[le^{-\bar A}+b\bar H e^{-\bar M}-\lambda\right]\left[\frac{c}{1+p\bar A}-\lambda\right](r-\lambda) - \left(1-\bar P\right) ie^{-i\bar M} f\left[-\frac{p\bar H b\left(1-e^{-\bar M}\right)}{1+p\bar A} + l\bar M e^{-\bar A}\left(\frac{c}{1+p\bar A}-\lambda\right)\right] = 0. \tag{30}$$
One can write the characteristic equation in the form

$$\begin{aligned}
&\lambda^4 - (A_1+A_2+A_3+A_4)\lambda^3 + (A_1A_2+A_1A_3+A_1A_4+A_2A_3+A_2A_4+A_3A_4)\lambda^2\\
&\quad - (A_1A_2A_3+A_1A_2A_4+A_1A_3A_4+A_2A_3A_4+A_5A_7)\lambda + A_1A_2A_3A_4 + A_5A_6 + A_3A_5A_7 = 0,
\end{aligned} \tag{31}$$

where

$$\begin{aligned}
A_1 &= a-\left(1-e^{-i\bar M}\right), & A_2 &= le^{-\bar A}+b\bar H e^{-\bar M}, & A_3 &= \frac{c}{1+p\bar A}, & A_4 &= r,\\
A_5 &= -\left(1-\bar P\right) ife^{-i\bar M}, & A_6 &= -\frac{p\bar H b\left(1-e^{-\bar M}\right)}{1+p\bar A}, & A_7 &= l\bar M e^{-\bar A}.
\end{aligned} \tag{32}$$
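The expansion (31)-(32) can be cross-checked numerically by assembling the Jacobian (29) (with the simplified entry $-p\bar H/(1+p\bar A)$) and comparing its eigenvalues with the roots of the quartic built from $A_1,\ldots,A_7$. The point below is a placeholder for illustration, not an actual equilibrium:

```python
import numpy as np

a, b, c, l, r, i, p, f = 0.5, 5, 0.6, 0.5, 0.97, 0.5, 0.5, 20
P_, M_, H_, A_ = 0.1, 2.0, 0.5, 3.0  # placeholder point, not a computed equilibrium

A1 = a - (1 - np.exp(-i * M_));  A2 = l * np.exp(-A_) + b * H_ * np.exp(-M_)
A3 = c / (1 + p * A_);           A4 = r
A5 = -(1 - P_) * i * f * np.exp(-i * M_)
A6 = -p * H_ * b * (1 - np.exp(-M_)) / (1 + p * A_)
A7 = l * M_ * np.exp(-A_)

coeffs = [1,                                      # coefficients of (31), highest degree first
          -(A1 + A2 + A3 + A4),
          A1*A2 + A1*A3 + A1*A4 + A2*A3 + A2*A4 + A3*A4,
          -(A1*A2*A3 + A1*A2*A4 + A1*A3*A4 + A2*A3*A4 + A5*A7),
          A1*A2*A3*A4 + A5*A6 + A3*A5*A7]

J = np.array([[A1, (1 - P_) * i * np.exp(-i * M_), 0, 0],
              [0, A2, b * (1 - np.exp(-M_)), -A7],
              [0, 0, A3, -p * H_ / (1 + p * A_)],
              [f, 0, 0, A4]])
print(np.sort_complex(np.roots(coeffs)))
print(np.sort_complex(np.linalg.eigvals(J)))  # should match the quartic's roots
```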
In the region of existence of the positive equilibrium point, $b > (1-c)(1-l)$, the parameter values for which the roots of the fourth-order polynomial lie inside the unit disc generate a locally asymptotically stable equilibrium point. The positive equilibrium point is not always locally asymptotically stable in the region $b > (1-c)(1-l)$ (see Figure 3). The following theorem about the degenerate equilibrium point $\left(0, 0, \frac{1}{1-c}, 0\right)$ (Figure 1) holds.

Figure 1

The above graph is generated with parameter values $a=0.5$, $p=5$, $b=0.48$, $c=0.04$, $l=0.5$, $i=0.5$, $r=0.5$, and $f=20$ (the parameters satisfy $b=(1-c)(1-l)$). One can see that the solutions converge to the degenerate equilibrium point.

Theorem 3.
Assume that $b < (1-c)(1-l)$. Then $\left(0, 0, \frac{1}{1-c}, 0\right)$ is globally asymptotically stable.

Proof.
When $\bar P = 0$, $\bar M = 0$, $\bar H = \frac{1}{1-c}$, and $\bar A = 0$, the Jacobian becomes

$$J\left(0, 0, \frac{1}{1-c}, 0\right) = \begin{pmatrix}
a & i & 0 & 0\\
0 & l+\dfrac{b}{1-c} & 0 & 0\\
0 & 0 & c & -\dfrac{cp}{1-c}-p\\
f & 0 & 0 & r
\end{pmatrix}, \tag{33}$$
with the characteristic equation a polynomial that factors into

$$(a-\lambda)\left(l+\frac{b}{1-c}-\lambda\right)(c-\lambda)(r-\lambda) = 0. \tag{34}$$
Three of the roots, namely, $\lambda_1 = a$, $\lambda_2 = c$, and $\lambda_4 = r$, are less than 1, and if $l + b/(1-c) < 1$ (or $b < (1-c)(1-l)$), then the remaining root $\lambda_3 = l + b/(1-c)$ is also inside the unit disc, so the degenerate equilibrium is a sink and thus locally asymptotically stable. It remains to be shown that this equilibrium is a global attractor. We offer a proof by contradiction, as in [5]. Suppose that $S_P > 0$ and $S_M > 0$. Then, using the last equation of the system, we conclude that
$$S_A \le rS_A + \frac{f}{1-a}. \tag{35}$$
Using $1-e^{-M_n} < M_n$ in the second equation of the reduced system yields

$$M_{n+1} \le lM_n + bH_nM_n. \tag{36}$$
Thus

$$S_M \le lS_M + bS_HS_M. \tag{37}$$
Dividing both sides by $S_M > 0$, one obtains

$$\frac{1-l}{b} \le S_H \le \frac{1}{1-c}, \tag{38}$$
which implies that $(1-l)(1-c) \le b$ (hence the contradiction). Thus $S_M = 0$.
The first equation of the reduced system yields the inequality

$$P_{n+1} \le aP_n + iM_n\left(1-P_n\right), \tag{39}$$

or further $P_{n+1} \le aP_n + iM_n$. Passing to the limit, one has

$$S_P \le aS_P + iS_M = aS_P. \tag{40}$$
Dividing by $(1-a) > 0$ yields $S_P \le 0$, which in combination with $S_P \ge 0$ gives $S_P = 0$. Passing to the limit in the fourth equation then gives $S_A \le rS_A + fS_P = rS_A$, so $S_A = 0$.

Using this in the third equation,

$$I_H \ge \frac{cI_H}{1+pS_A} + \frac{1}{1+pS_A} = cI_H + 1. \tag{41}$$

It follows that $I_H \ge \frac{1}{1-c}$. But $S_H \le \frac{1}{1-c} \le I_H$, and thus $S_H = I_H = \frac{1}{1-c}$, which completes the proof.
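The sink condition in Theorem 3 can also be verified directly by computing the spectral radius of the Jacobian (33). A sketch with hypothetical parameters chosen so that $b < (1-c)(1-l)$:

```python
import numpy as np

a, b, c, l, r, i, p, f = 0.5, 0.1, 0.6, 0.5, 0.5, 0.5, 0.5, 20  # b = 0.1 < (1-c)(1-l) = 0.2
J = np.array([[a, i, 0, 0],
              [0, l + b / (1 - c), 0, 0],
              [0, 0, c, -c * p / (1 - c) - p],
              [f, 0, 0, r]])
rho = max(abs(np.linalg.eigvals(J)))
print(rho, rho < 1)  # spectral radius below 1: the degenerate equilibrium is a sink
```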
## 5. Conclusions and Open Problems
The global asymptotic stability of the degenerate equilibrium was investigated (the global asymptotic stability of the positive equilibrium remains an open problem worth investigating mathematically). An interesting result pertains to the role that memory plays in controlling the epidemic. We observed oscillatory behavior for marginally low memory parameter values ($r=0.5$, Figure 3), meaning that the population might recover only for a short period of time and then get periodically infected. Computer simulations indicate that high awareness ($r=0.97$, Figure 2) leads to a complete decrease in the proportion of infected people, and the solutions stabilize.

Figure 2
The above graph is generated with parameter values $a=0.5$, $p=0.5$, $b=5$, $c=0.6$, $l=0.5$, $i=0.5$, $r=0.97$, and $f=20$ (the parameters are placed in the region where $b>(1-c)(1-l)$). One can see that the solutions converge to the equilibrium for large values of $r$.

Figure 3
The above graph is generated with parameter values $a=0.5$, $p=0.5$, $b=5$, $c=0.6$, $l=0.5$, $i=0.5$, $r=0.5$, and $f=20$ (the parameters are placed in the region where $b>(1-c)(1-l)$). One can see that the solutions display oscillatory behavior for smaller values of $r$.

Simulations done with various parameter values suggest that the memory parameter has a threshold below which there are oscillations and above which the solutions settle to equilibrium, leading to the extinction of the infection (a numerical sketch of such a scan appears at the end of this section). This is consistent with findings from studies specifically designed to discover thresholds (see [7]). In [7] the authors considered the rate of contact between susceptible people and infectious vectors, a component captured in the first equation of our system by the term $(1-e^{-iM_n})(1-P_n)$. Their study reports that they were surprised to discover that the size of the viral introduction “was not seen to significantly influence the magnitude of the threshold.” In a future study, we shall focus on finding the memory parameter threshold value that leads to the extinction of the infection and on testing whether changing the initial proportion of infected people $P_0$ has an impact on the threshold value.

The average number of mosquitoes per breeding site (parameter $b$) was estimated in field studies to be 9.5, ranging from 3 to 30 (see [2]). We used $b=5$ (see Figures 2 and 3), a value within the range suggested by the field studies in the aforementioned reference. Computer simulations of system (1) indicate that, for large values of the parameter $d$ (a high pollution level, such as new empty cans and tires that collect water), the memory parameter $r$ alone may not be strong enough to eliminate the infection from the population, and the infection might equilibrate at levels higher than zero. In future work we shall explore the relationship between environmental pollution and the memory that creates awareness in the community.

In this section we also want to bring attention to some extensions and open problems related to system (1). An interesting question to be analytically investigated in a further study is the global asymptotic stability of the nondegenerate equilibrium of system (1), especially when the system incorporates different parameters measuring the sensitivity of surviving habitats to communal and individual awareness (hence $p \ne q$). In this case the third equation reads
$$H_{n+1} = cH_n h_1(pA_n) + dh_2(qA_n). \tag{42}$$
Based on biological considerations, one can take $h_1(\cdot)$ and $h_2(\cdot)$ to be decreasing functions, $h_1, h_2 \in C^1((0,\infty) \to (0,1])$, with the properties (i) $h_1(0)=1$ and $h_2(0)=1$ and (ii) $\lim_{y\to\infty} h_1(y) = 0$ and $\lim_{y\to\infty} h_2(y) = 0$. Two commonly used examples of such functions (used in the previous work [3]) are $h_1(y) = 1/(1+py)$ and $h_2(y) = 1/(1+qy)$. Thus, an open problem that we want to pose here refers to the existence and global asymptotic stability of the positive equilibrium of the general system:
$$\begin{aligned}
P_{n+1} &= aP_n + \left(1-e^{-iM_nP_n}\right)\left(1-P_n\right),\\
M_{n+1} &= lM_n e^{-gA_n} + bH_n\left(1-e^{-sM_n}\right),\\
H_{n+1} &= cH_n h_1(pA_n) + dh_2(qA_n),\\
A_{n+1} &= rA_n + fP_n, \qquad n = 0, 1, \ldots
\end{aligned} \tag{43}$$
Mathematical models may serve in designing policy interventions and provide a better understanding of the phenomena under study [3]. Because interventions are at times implemented when consciousness is prompted by an increase in the incidence of sick people, one can work with the original system in the following form:
$$\begin{aligned}
P_{n+1} &= aP_n + \left(1-e^{-iM_nP_n}\right)\left(1-P_n\right),\\
M_{n+1} &= lM_n e^{-gA_n} + bH_n\left(1-e^{-sM_n}\right),\\
H_{n+1} &= \frac{cH_n}{1+pA_n} + \frac{d}{1+qA_n},\\
A_{n+1} &= rA_n + fP_n, \qquad n = 0, 1, \ldots
\end{aligned} \tag{44}$$
The first equation describes the proportion of infected people in the population (between 0 and 1). The proportion of sick people is assumed to prompt consciousness, while the intervention is against mosquitoes and perhaps habitats. The control of both adult mosquitoes (M) and the habitats (H) where mosquitoes lay their eggs is carried out by spraying and by community intervention to reduce breeding sites. One may use system (44) to compare a few control strategies in which an increase in the proportion of infected people is linked to consciousness. Insecticide spraying is a common method of mosquito control despite its many disadvantages, and new insecticides are continuously being developed and tested [8–10]. In the long run the mosquitoes become resistant and the insecticide ineffective [11]; insecticides also pose serious risks to humans and the environment [10, 12–14]. In order to assess the effect of insecticide spraying without habitat management, the equations are modified so as to eliminate the rational control on $H_n$ and keep the population control on $M_n$. To assess the effect of habitat control only, through citizens' intervention, the third equation keeps its intervention parameters, for example $H_{n+1} = cH_n/(1+pA_n) + d/(1+qA_n)$. We believe that system (44) is not only useful biologically but also interesting mathematically. Both systems (43) and (44) possess bounded solutions.
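As anticipated in the discussion above, the suspected memory-parameter threshold can be probed by scanning $r$ and measuring the amplitude of the late-time oscillations of $P_n$, reusing the `iterate_system` helper sketched in Section 1. The amplitude measure and the grid of $r$ values below are assumptions, not the authors' procedure:

```python
import numpy as np

def late_amplitude(r, n_steps=3000, tail=500):
    P, M, H, A = iterate_system(a=0.5, b=5, c=0.6, l=0.5, r=r, i=0.5, p=0.5, f=20,
                                P0=0.1, M0=1.0, H0=1.0, A0=0.0, n_steps=n_steps)
    return P[-tail:].max() - P[-tail:].min()  # ~0 at equilibrium, positive under oscillation

for r in np.linspace(0.5, 0.99, 10):
    print(f"r = {r:.2f}  late-time amplitude of P = {late_amplitude(r):.4f}")
```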
---
*Source: 101965-2014-11-17.xml*
# Efficacy and Predictors for Biofeedback Therapeutic Outcome in Patients with Dyssynergic Defecation
**Authors:** Ting Yu; Xiaoxue Shen; Miaomiao Li; Meifeng Wang; Lin Lin
**Journal:** Gastroenterology Research and Practice
(2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1019652
---
## Abstract
Aim. To evaluate the short-term efficacy of biofeedback therapy (BFT) for dyssynergic defecation (DD) and to explore the predictors of the efficacy of BFT. Methods. Clinical symptoms, psychological state, and quality of life of patients before and after BFT were investigated. All patients underwent a lifestyle survey and anorectal physiology tests before BFT. Improvement in symptom scores was taken as evidence of the clinical efficacy of BFT. Thirty-eight factors that could influence the efficacy of BFT were studied; univariate and multivariate analyses were conducted to identify the independent predictors. Results. Clinical symptoms, psychological state, and quality of life of DD patients improved significantly after BFT. Univariate analysis showed that the efficacy of BFT was positively correlated with one of the 36-item Short-Form Health Survey domains, physical role functioning (r=0.289; P=0.025), and negatively correlated with stool consistency (r=−0.220; P=0.032), the depression score (r=−0.333; P=0.010), and the first rectal sensory threshold volume (r=−0.297; P=0.022). Multivariate analysis showed the depression score (β = −0.271; P=0.032) and the first rectal sensory threshold volume (β = −0.325; P=0.013) to be independent predictors of BFT efficacy. Conclusion. BFT improves the clinical symptoms of DD patients. A depressive state and an elevated first rectal sensory threshold volume were independent predictors of a poor outcome with BFT.
---
## Body
## 1. Introduction
Chronic constipation (CC) is diagnosed when there is at least a 6-month history of symptoms such as infrequent bowel movements, reduced stool volume, hard stools, and excessive straining at defecation [1]. Treatment can be very difficult. The median prevalence is 16% in the US and is as high as 33.5% in adults aged 60–101 years [2]. The overall prevalence in Chinese adults is 16%–20% [3].

Primary constipation consists of several overlapping subtypes, among which dyssynergic defecation (DD) is relatively common [4, 5]. Patients with DD have symptoms of obstructive defecation, such as severe straining during defecation and sensations of "blockage" and of incomplete evacuation. The physiological mechanisms of DD include an inability to coordinate the abdominal, rectoanal, and pelvic floor muscles during defecation, owing to causes such as inadequate rectal and/or abdominal propulsive force, impaired anal relaxation (i.e., <20% relaxation of basal resting pressure), or increased anal outlet resistance as a result of paradoxical external anal sphincter or puborectalis contraction [6, 7]. Pharmacological therapies that are usually effective in CC, such as bulking agents, osmotic laxatives, stimulant laxatives, and stool softeners [8], are often ineffective in DD patients [9].

Biofeedback therapy (BFT), which is based on behavior modification [10], can be used to train DD patients to defecate effectively. Patients are taught to brace the abdominal wall muscles and relax the pelvic floor muscles during defecation, and efforts are also made to modify sensory perception in the rectum [11]. The first application of BFT for the treatment of CC due to DD was in 1987 [12]. Since then, a number of controlled studies have shown that BFT can be more effective than laxatives, muscle relaxants, and placebo, with benefits lasting for at least 12 months [13–15]. Based on these findings, BFT has for several years been recognized as the most effective treatment for DD [16, 17]. However, symptomatic improvement after BFT has varied widely between studies, ranging from 44% to 100% [18]. Few data are available regarding the factors predictive of the success of BFT [19]. In our experience, anorectal physiology, psychological state, quality of life, and lifestyle factors can all influence the efficacy of BFT.

The aim of this study was to investigate the short-term efficacy of BFT and to identify the clinical and physiological factors that predict success or failure following BFT in Chinese patients.
## 2. Material and Methods
### 2.1. Patients
In this retrospective study, all adult patients diagnosed with CC due to DD at the Department of Gastroenterology of the First Affiliated Hospital of Nanjing Medical University, between January 1, 2012, and October 30, 2015, were eligible for inclusion. CC was diagnosed if the patient had at least two of the following constipation symptoms for >6 months: (1) infrequent stools (<3 bowel movements/week); (2) hard or lumpy stools (Bristol stool form scale score of 1-2) [20]; (3) straining at stool; (4) sensation of incomplete evacuation after bowel movement; or (5) sensation of anorectal blockage [21]. The presence of DD was determined using high-resolution anorectal manometry (HR-ARM) and the rectal balloon expulsion test. Patients who presented with inappropriate contraction or inadequate propulsive force on HR-ARM together with a prolonged balloon expulsion time were considered to have DD. None of the patients had responded to standard management of constipation (e.g., increased dietary fiber and fluid intake or laxatives). Patients were excluded from the study if they (1) were <18 years in age, (2) had structural bowel disease or a history of abdominal surgery, (3) had mental illness, (4) had recently received psychotropic drugs [22], (5) were pregnant, or (6) had not completed a full course of BFT (4 sessions).

This study was approved by the Ethics Committee of the First Affiliated Hospital of Nanjing Medical University (2016-SRFA-064).
### 2.2. Constipation Severity
A questionnaire (Table 1) adapted from the one developed by the Cleveland Clinic was used to assess defecatory symptoms [23], such as frequency of spontaneous bowel movements, stool consistency, straining during defecation, sensation of incomplete evacuation, sensation of blockage, and painful defecation. The latter four are deemed relatively specific for DD and were scored on a scale of 0 to 3, where 0 = never occurred, 1 = occurred occasionally, 2 = occurred during >25% of defecations, and 3 = occurred during >50% of defecations. The frequency of spontaneous bowel movements was scored as 0 = defecation interval 1-2 days, 1 = defecation interval 3 days, 2 = defecation interval 4-5 days, and 3 = defecation interval >5 days. Stool consistency was evaluated according to the Bristol stool scale (a 7-point scale ranging from 1 = separate hard lumps like nuts to 7 = watery) [20]; in this study, the scores were allotted as follows: Bristol types 4–7 = score 0, Bristol type 3 = score 1, Bristol type 2 = score 2, and Bristol type 1 = score 3.

Table 1
Scoring system for symptoms of DD.
Grading/score
Defecation interval (days)
Straining
Sensation of incomplete evacuation
Sensation of blockage
Painful defecation
Stool consistency
0
1-2
None
None
None
None
BSS: 4–7
1
3
Occurs occasionally
Occurs occasionally
Occurs occasionally
Occurs occasionally
BSS: 3
2
4-5
Occurs during >25% of defecations
Occurs during >25% of defecations
Occurs during >25% of defecations
Occurs during >25% of defecations
BSS: 2
3
>5
Occurs during >50% of defecations
Occurs during >50% of defecations
Occurs during >50% of defecations
Occurs during >50% of defecations
BSS: 1
DD = dyssynergic defecation; BSS = Bristol stool scale.
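For concreteness, the scoring scheme in Table 1 can be expressed as a short computation. The following Python sketch is only an illustration of the scheme as we read it; the function names, argument conventions, and example values are our own assumptions, not part of the study protocol.

```python
# Illustrative sketch of the Table 1 scoring scheme (not study code).

def interval_score(days: float) -> int:
    """Score the defecation interval: 1-2 days = 0, 3 = 1, 4-5 = 2, >5 = 3."""
    if days <= 2:
        return 0
    if days <= 3:
        return 1
    if days <= 5:
        return 2
    return 3

def bristol_score(bristol_type: int) -> int:
    """Map Bristol stool scale type (1-7) to the 0-3 consistency score;
    types 4-7 score 0."""
    return {1: 3, 2: 2, 3: 1}.get(bristol_type, 0)

def severity_total(days, bristol_type, straining, incomplete, blockage, pain):
    """Total severity score: six items, each scored 0-3, so the total
    ranges from 0 to 18. The four DD-specific items are passed in
    already scored 0-3 per Table 1."""
    return (interval_score(days) + bristol_score(bristol_type)
            + straining + incomplete + blockage + pain)

# Example: defecation every 4 days, Bristol type 2, straining at >50% of
# defecations, incomplete evacuation at >25%, occasional blockage, no pain.
print(severity_total(4, 2, straining=3, incomplete=2, blockage=1, pain=0))  # 10
```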
### 2.3. Assessment of Psychological State and Quality of Life
Zung’s Self-Rating Anxiety Scale (SAS) [24] and Self-Rating Depression Scale (SDS) [25] were used to evaluate the levels of anxiety and depression. In Chinese populations, SAS ≥ 50 and SDS ≥ 53 represent diagnosable anxiety and depression [26]. The 36-item Short-Form Health Survey (SF-36) was used to evaluate quality of life [27]. The SF-36 consists of eight sections: vitality, physical functioning, bodily pain, general health perceptions, physical role functioning, emotional role functioning, social role functioning, and mental health. The score for each section is the weighted sum of the scores for the questions in that section, transformed directly onto a 0–100 scale on the assumption that each question carries equal weight. The higher the score, the better the patient’s quality of life.
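The 0–100 transformation described above is the standard SF-36 linear rescaling of each section’s raw score. A minimal sketch follows; the raw score bounds passed as arguments are placeholders, not values taken from the paper.

```python
def sf36_transform(raw_score: float, min_raw: float, max_raw: float) -> float:
    """Linearly rescale a section's raw score onto 0-100, where min_raw
    and max_raw are the lowest and highest possible raw scores for that
    section (higher transformed score = better quality of life)."""
    return (raw_score - min_raw) / (max_raw - min_raw) * 100.0

# Example with assumed bounds: a raw score of 20 on a section whose raw
# range is 10-30 maps to the midpoint of the 0-100 scale.
print(sf36_transform(20, 10, 30))  # 50.0
```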
### 2.4. Lifestyle Survey
Information on physical activity, work pressure, and sleep quality was obtained from questionnaires filled in at first contact with the patient. Physical activity was assessed by one question on the frequency of exercise of at least 30 minutes per session during the past week; the possible responses were “often,” “sometimes,” “seldom,” and “never.” Work pressure was graded as “low,” “normal,” “high,” and “very high.” Sleep quality was assessed by the Pittsburgh Sleep Quality Index (PSQI) questionnaire [28]. The PSQI assesses seven components of sleep: the quality, latency, duration, and efficiency of sleep, sleep disturbances, use of sleeping medication, and daytime dysfunction. Each component is scored from 0 to 3, and the seven component scores are summed to give a global score. In Chinese populations, a PSQI global score > 7 indicates poor sleep quality [29].

There were six questions on the frequency and/or volume of consumption of certain food items. Volume of intake was graded as “low,” “normal,” “high,” or “very high,” and frequency of consumption as “often,” “sometimes,” “seldom,” or “never.” Thus, we recorded the frequency and volume of consumption of vegetables (seldom, 250–<500 g/d, 500–1000 g/d, and >1000 g/d); fruits (seldom, 100–200 g/d, 200–500 g/d, and >500 g/d); and water (<500 mL/d, 500–1000 mL/d, and >1000 mL/d). A predilection for a high-fat diet was also recorded (yes/no).
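Since the PSQI global score is simply the sum of the seven component scores, the cutoff used here can be written directly. This is a minimal sketch under the stated Chinese-population cutoff (global score > 7 indicates poor sleep quality); the function names are our own.

```python
def psqi_global(components: list[int]) -> int:
    """Sum the seven PSQI component scores (each 0-3) into the
    global score (0-21)."""
    assert len(components) == 7 and all(0 <= c <= 3 for c in components)
    return sum(components)

def poor_sleep_quality(components: list[int]) -> bool:
    """Cutoff used in this study for Chinese populations: global > 7."""
    return psqi_global(components) > 7

print(poor_sleep_quality([1, 2, 1, 0, 2, 0, 1]))  # global = 7 -> False
```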
### 2.5. Rectal Balloon Expulsion Test
The time required for the subject to expel a rectal balloon filled with 50 mL of warm water, while seated in privacy on a commode, was measured. The balloon was removed if the subject was unable to expel it within 1 minute [30, 31].
### 2.6. Colonic Transit Study
Colonic transit was assessed using radiopaque marker techniques. In brief, the patient ingested a single capsule containing 24 cylindrical radiopaque markers of 2 mm diameter and 6 mm length on day 1. A supine radiograph of the abdomen was obtained on day 3 (i.e., 72 hours later) to assess the number and distribution of the markers in the colon; patients were deemed positive for delayed colonic transit if there were >4 markers distributed throughout the colon [32, 33].
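The delayed-transit criterion above reduces to a simple threshold on the 72-hour radiograph. The sketch below encodes it; the argument name is our own labeling.

```python
def delayed_colonic_transit(markers_remaining: int) -> bool:
    """Criterion used in this study: more than 4 of the 24 ingested
    radiopaque markers still distributed in the colon at 72 hours
    indicates delayed colonic transit."""
    return markers_remaining > 4

print(delayed_colonic_transit(6))   # True
print(delayed_colonic_transit(3))   # False
```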
### 2.7. High-Resolution Anorectal Manometry
A novel solid-state HR-ARM device (Manoscan AR 360; Given Imaging, Yokneam, Israel) with 12 sensors was used for anorectal manometry. The procedure was performed after defecation. The patient was placed in the left lateral decubitus position, with the hips flexed to 90°. The rectal balloon, with the attached catheter, was placed 3 cm proximal to the upper part of the anal sphincter. Measurements were made in the following order: resting anal and rectal pressure (20–30 seconds), pressure during squeeze (best of three attempts, with a maximum duration of 20–30 seconds per attempt), and pressure during bearing down as in defecation (best of three attempts, with 20–30 seconds per attempt) [34]. Rectal sensation was simultaneously evaluated; for this, the rectal balloon was progressively distended in 10 mL increments from 0 mL to 50 mL, and the threshold volumes for first sensation, urgency, and maximum discomfort were recorded.

Four phenotypes of DD have been recognized based on HR-ARM: type I dyssynergia, in which there is an adequate increase (≥40 mmHg) in rectal pressure, accompanied by a paradoxical simultaneous increase in anal pressure; type II dyssynergia, in which there is an inadequate increase (<40 mmHg) in rectal pressure (poor propulsive force), accompanied by a paradoxical increase in anal pressure; type III dyssynergia, in which there is an adequate increase (≥40 mmHg) in rectal pressure, accompanied by failure of reduction in anal pressure (to ≤20% of baseline pressure); and type IV dyssynergia, in which there is an inadequate increase (<40 mmHg) in rectal pressure (poor propulsive force), accompanied by failure of reduction in anal pressure (to ≤20% of baseline pressure) [1].
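The four HR-ARM phenotypes reduce to a two-by-two rule: the adequacy of the rectal pressure rise crossed with the abnormal anal pressure pattern. The sketch below encodes that rule; the string labels for the anal pattern are our own shorthand for “paradoxical increase in anal pressure” and “failure of anal relaxation to ≤20% of baseline.”

```python
def dd_phenotype(rectal_pressure_rise_mmHg: float, anal_pattern: str) -> str:
    """Classify dyssynergic defecation (types I-IV) from HR-ARM:
    adequate push = rectal pressure rise >= 40 mmHg during bearing down;
    'paradoxical_rise' = anal pressure increases paradoxically;
    'inadequate_relaxation' = anal pressure fails to fall to <=20%
    of baseline."""
    adequate_push = rectal_pressure_rise_mmHg >= 40
    if anal_pattern == "paradoxical_rise":
        return "type I" if adequate_push else "type II"
    if anal_pattern == "inadequate_relaxation":
        return "type III" if adequate_push else "type IV"
    raise ValueError("unrecognized anal pressure pattern")

print(dd_phenotype(45, "paradoxical_rise"))        # type I
print(dd_phenotype(30, "inadequate_relaxation"))   # type IV
```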
### 2.8. Biofeedback Training
The Polygraf ID 8 (Medtronic Ltd, Denmark) was used for biofeedback training. Patients received a 1-hour biofeedback training session once every other day for the first 2 weeks, and 2-3 times per week thereafter. For the training session, the patient was asked to lie on the right side, and a single manometry catheter and an anal electrode were inserted into the patient’s anorectal canal at the sphincter. The catheter and the electrode were connected to the Polygraf ID, which displayed the data collected in the anorectal canal in a simple graphical format. The biofeedback application displayed a column, which the patient navigated using the pelvic floor muscles: by contracting and relaxing these muscles, the patient could move a signal level indicator up and down. The patient was instructed to try to keep the signal level within the limits of the column while maintaining awareness of the changes in pelvic floor muscle activity, and could thus learn to modulate the activity of the anorectal muscles [35]. During the training period, patients were required to practice at home, performing the squeezing and relaxing maneuvers for 20 minutes at a time, 2-3 times/week. At the conclusion of biofeedback training, all patients were told that their pushing efforts had improved; this ensured that patients would be motivated to return for follow-up and would have positive expectations during the follow-up assessments.
### 2.9. Evaluation of Biofeedback Treatment Efficacy
Treatment efficacy was assessed at the completion of the BFT course. Efficacy was expressed as a ratio, that is, the difference between the pretraining and posttraining constipation severity scores divided by the pretraining score, and was graded as “very efficacious” (ratio > 0.50), “efficacious” (ratio 0.25–0.50), or “not efficacious” (ratio < 0.25).
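The grading rule above (with the “very efficacious” cutoff read as >0.50, consistent with the 0.25–0.50 “efficacious” band) is a one-line ratio plus a threshold. A minimal sketch follows, using the study’s mean pre- and post-BFT total symptom scores (Table 2, Results) as the example input.

```python
def bft_efficacy(pre_score: float, post_score: float) -> tuple[float, str]:
    """Efficacy ratio = (pre - post) / pre, graded as in Section 2.9."""
    ratio = (pre_score - post_score) / pre_score
    if ratio > 0.50:
        grade = "very efficacious"
    elif ratio >= 0.25:
        grade = "efficacious"
    else:
        grade = "not efficacious"
    return ratio, grade

# Mean total severity scores before/after BFT (Table 2): 12.36 -> 7.61.
print(bft_efficacy(12.36, 7.61))  # (~0.384, 'efficacious')
```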
### 2.10. Statistical Analysis
All data were analyzed using SPSS version 20.0 (IBM Corp., Armonk, NY, USA). Continuous variables were expressed as means ± standard deviation or medians (range), and categorical variables as relative frequencies. Student’s t-test or the Mann–Whitney U test was used to compare continuous variables, and the chi-square test or Fisher’s exact test for categorical variables. Univariate and multivariate analyses were used to identify the predictors of BFT efficacy. P < 0.05 was considered statistically significant.
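As a rough illustration of this two-step analysis outside SPSS, the following Python sketch screens candidate predictors by Pearson correlation and then fits a multiple linear regression. The DataFrame, its column names, and the toy values are entirely illustrative assumptions, not the study data.

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# Toy data standing in for the per-patient records (illustrative only).
df = pd.DataFrame({
    "efficacy": [0.60, 0.40, 0.10, 0.70, 0.30, 0.55],
    "sds": [45, 50, 62, 40, 55, 48],
    "first_sensation_ml": [50, 70, 120, 40, 90, 60],
})

# Step 1, univariate screen: Pearson r and P for each candidate predictor.
for col in ["sds", "first_sensation_ml"]:
    r, p = stats.pearsonr(df[col], df["efficacy"])
    print(f"{col}: r = {r:.3f}, P = {p:.3f}")

# Step 2, multivariate: multiple linear regression on the screened variables.
X = sm.add_constant(df[["sds", "first_sensation_ml"]])
fit = sm.OLS(df["efficacy"], X).fit()
print(fit.params)    # beta coefficients
print(fit.pvalues)   # P values
```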
## 3. Results
The data of 171 patients (69 men and 102 women; mean age, 54.0 ± 23.3 years) were analyzed.
### 3.1. Baseline Clinical Symptoms, Psychological State, and Quality of Life
The mean disease duration was 6.5 ± 2.5 years. In this study population, 74.9% (128/171) of patients had not had spontaneous bowel movements over the past 2 years. In all, 93.0% (159/171) of patients had a history of long-term use of stimulant laxatives. The mean defecation interval was 1.95 ± 1.20 days, and the mean stool consistency score was 1.82 ± 1.20. Almost all patients had complaints of straining during bowel movement, sensation of incomplete defecation, sensation of blockage, or pain during defecation. Table 2 shows the defecatory symptom scores.

Table 2
Symptom scores before and after BFT.

| Clinical symptoms | Before BFT | After BFT | P |
|---|---|---|---|
| Defecation interval (days) | 1.95 ± 1.20 | 1.20 ± 0.91 | 0.039 |
| Straining | 2.75 ± 1.63 | 1.60 ± 1.15 | 0.042 |
| Sensation of incomplete evacuation | 2.50 ± 1.35 | 1.62 ± 1.15 | 0.048 |
| Sensation of blockage | 1.82 ± 1.40 | 0.95 ± 1.07 | 0.021 |
| Painful defecation | 1.20 ± 0.90 | 1.17 ± 0.74 | 0.109 |
| Stool consistency | 1.82 ± 1.20 | 0.96 ± 1.13 | 0.034 |
| Total | 12.36 ± 6.00 | 7.61 ± 4.52 | 0.011 |

Data are expressed as mean ± standard deviation. BFT = biofeedback therapy.

The anxiety and depression scores were 40.0 ± 15.5 and 50.1 ± 13.5, respectively, which were significantly higher than the Chinese norms (33.80 ± 5.90 and 41.88 ± 10.57, resp.; Table 3) [26]; on the basis of these scores, 22.2% (38/171) and 33.9% (62/171) of the patients had anxiety and depression, respectively.

Table 3
SAS and SDS scores before and after BFT.

| | Before BFT | After BFT | P |
|---|---|---|---|
| SAS | 40.0 ± 15.5 | 33.5 ± 10.9 | 0.004 |
| SDS | 50.1 ± 13.5 | 46.0 ± 13.5 | 0.023 |

Data are expressed as mean ± standard deviation. SAS = Zung’s Self-Rating Anxiety Scale; SDS = Zung’s Self-Rating Depression Scale; BFT = biofeedback therapy.

Table 4 shows the scores of the DD patients in the different sections of the SF-36. All scores were significantly lower than the Chinese norms [36].

Table 4
Scores for different quality of life indicators before and after BFT.

| Quality of life indicator | Before BFT | After BFT | P |
|---|---|---|---|
| General health perception | 41.3 ± 19.0 | 63.4 ± 19.2 | <0.001 |
| Physical functioning | 84.0 ± 42.8 | 88.5 ± 39.2 | 0.045 |
| Physical role functioning | 60.5 ± 34.9 | 72.6 ± 39.0 | 0.033 |
| Emotional role functioning | 63.8 ± 32.0 | 75.4 ± 37.3 | 0.038 |
| Social role functioning | 74.0 ± 37.7 | 80.1 ± 37.5 | 0.087 |
| Bodily pain | 75.0 ± 40.0 | 86.3 ± 36.9 | 0.029 |
| Vitality | 62.1 ± 30.5 | 70.8 ± 23.0 | 0.040 |
| Mental health | 63.2 ± 23.6 | 65.9 ± 21.0 | 0.049 |

Data are expressed as mean ± standard deviation. BFT = biofeedback therapy.
### 3.2. Baseline Lifestyle Factors
Table 5 shows the physical activity, work pressure, sleep quality, and dietary habits of the DD patients before BFT.

Table 5
Frequency table of lifestyle characteristics.

| Characteristic | Frequency (n) | Characteristic | Frequency (n) |
|---|---|---|---|
| Physical activity | | Fruit intake | |
| Often | 41 | Seldom | 11 |
| Sometimes | 67 | 100–200 g/d | 90 |
| Seldom | 57 | 200–500 g/d | 60 |
| Never | 6 | >500 g/d | 10 |
| Work pressure | | Water intake | |
| Low | 106 | <500 mL/d | 53 |
| Normal | 30 | 500–1000 mL/d | 100 |
| High | 26 | >1000 mL/d | 18 |
| Very high | 9 | High-fat diet predilection | |
| Poor sleep quality | | Yes | 57 |
| No | 118 | No | 114 |
| Yes | 53 | | |
| Vegetable intake | | | |
| Seldom | 19 | | |
| 250–<500 g/d | 44 | | |
| 500–1000 g/d | 67 | | |
| >1000 g/d | 41 | | |
### 3.3. Baseline Anorectal Physiology
In this study, 48.5% (83/171) of patients presented with a prolonged colonic transit time. The mean values of the manometric parameters were as follows: anal resting pressure, 82.5 ± 16.0 mmHg; maximum squeeze pressure, 208.3 ± 41.5 mmHg; rectal defecation pressure, 38.9 ± 8.6 mmHg; intrarectal pressure, 88.9 ± 15.3 mmHg; and rectoanal pressure differential, −42.0 ± 8.5 mmHg. The median threshold volumes were 60.0 mL (range, 20.0–220.0 mL) for first sensation, 100.0 mL (range, 40.0–350.0 mL) for urgency, and 150.0 mL (range, 80.0–350.0 mL) for maximum discomfort. According to the HR-ARM results, 82/171 (48.0%), 51/171 (29.8%), 30/171 (17.5%), and 8/171 (4.7%) patients were classified as type I, type II, type III, and type IV DD, respectively.
### 3.4. Biofeedback Treatment Efficacy
Patients in this study received 10.0 ± 3.5 sessions of BFT. Treatment was assessed as “very efficacious” in 72.5% (124/171) of patients, as “efficacious” in 8.2% (14/171), and as “not efficacious” in 19.3% (33/171); thus, the total efficacy was 80.7% (Table 6). In Table 6, the “very efficacious” grade combines the two rows with symptom score reductions of ≥75% and 50%–75% (42.7% + 29.8% = 72.5%).

Table 6
Clinical efficacy of BFT.

| Clinical efficacy | n | Proportion (%) | Grading of efficacy (%) |
|---|---|---|---|
| ≥75% | 73 | 42.7% | Very efficacious (72.5%) |
| 50%–75% | 51 | 29.8% | |
| 25%–50% | 14 | 8.2% | Efficacious (8.2%) |
| ≤25% | 33 | 19.3% | Not efficacious (19.3%) |

BFT = biofeedback therapy.

There was a very significant decrease in the total and subscale scores of bowel symptoms (defecation interval, straining at defecation, sensation of incomplete evacuation/blockage, and stool consistency; Table 2). Anxiety and depression were markedly improved, with significant decreases in the SAS and SDS scores after BFT (Table 3). In the SF-36, the scores for general health perception, physical functioning, and bodily pain increased significantly, indicating improvement in quality of life (Table 4).
### 3.5. Predictors of Outcome of BFT
Tables 7 and 8 show the association between the efficacy of BFT and psychological state, quality of life, lifestyle factors, and anorectal physiology. Univariate analysis showed that BFT efficacy was positively correlated with the score for physical role function (r = 0.289; P = 0.025) and negatively correlated with the stool consistency score (r = −0.220; P = 0.032), the depression score (r = −0.333; P = 0.010), and the first sensory threshold volume (r = −0.297; P = 0.022; Table 7). Multivariate analysis showed that the depression score (β = −0.271; P = 0.032) and the first sensory threshold volume (β = −0.325; P = 0.013) were independent predictors of BFT efficacy (Table 8).

Table 7
Univariate analysis of predictors of outcome of BFT.

| Variables | r | P |
|---|---|---|
| General information | | |
| Age | −0.095 | 0.440 |
| Gender | −0.112 | 0.202 |
| Constipation duration | 0.115 | 0.197 |
| Symptoms | | |
| Defecation interval | −0.062 | 0.683 |
| Straining | −0.121 | 0.149 |
| Sensation of incomplete evacuation | −0.092 | 0.450 |
| Sensation of blockage | −0.145 | 0.106 |
| Painful defecation | −0.040 | 0.849 |
| Stool consistency | −0.220 | 0.032 |
| Psychological status | | |
| SAS | −0.184 | 0.093 |
| SDS | −0.333 | 0.010 |
| Quality of life indicators | | |
| General health perception | 0.135 | 0.116 |
| Physical functioning | 0.112 | 0.202 |
| Physical role function | 0.289 | 0.025 |
| Emotional role functioning | 0.120 | 0.207 |
| Social role functioning | 0.153 | 0.104 |
| Bodily pain | 0.046 | 0.751 |
| Vitality | 0.196 | 0.084 |
| Mental health | 0.205 | 0.057 |
| Lifestyle | | |
| Physical activity | −0.079 | 0.666 |
| Work pressure | −0.089 | 0.490 |
| Poor sleep quality | −0.078 | 0.666 |
| Vegetable intake | 0.145 | 0.106 |
| Fruit intake | −0.062 | 0.683 |
| Water intake | −0.095 | 0.468 |
| High-fat diet predilection | 0.017 | 0.800 |
| Anorectal physiology | | |
| BET time | −0.188 | 0.091 |
| CTT | −0.062 | 0.711 |
| HR-ARM | | |
| Anal resting pressure | 0.066 | 0.705 |
| Maximum squeeze pressure | −0.030 | 0.761 |
| Rectal defecation pressure | 0.082 | 0.650 |
| Intrarectal pressure | 0.044 | 0.795 |
| Rectoanal pressure differential | 0.197 | 0.090 |
| First sensation volume | −0.297 | 0.022 |
| Urgency volume | −0.178 | 0.091 |
| Maximum discomfort volume | −0.074 | 0.700 |
| DD subtype | −0.099 | 0.365 |

BFT = biofeedback therapy; SAS = Zung’s Self-Rating Anxiety Scale; SDS = Zung’s Self-Rating Depression Scale; BET = balloon expulsion test; CTT = colonic transit time.

Table 8
Multiple linear regression analysis of predictors of BFT outcome.

| Variables | β coefficient | 95% CI | P |
|---|---|---|---|
| Stool consistency | −0.110 | −0.213 to −0.032 | 0.176 |
| SDS | −0.271 | −0.506 to −0.036 | 0.032 |
| Physical role function | 0.112 | 0.020 to 0.204 | 0.172 |
| First sensation volume | −0.325 | −0.534 to −0.012 | 0.013 |

BFT = biofeedback therapy; SDS = Zung’s Self-Rating Depression Scale.
## 4. Discussion
In this study, we evaluated the efficacy of BFT in DD and attempted to identify the factors that could predict the success of BFT. We found that BFT could improve the clinical symptoms of patients with DD. The psychological state and the rectal first sensory threshold volume were independent predictors of BFT outcome.

The prevalence of anxiety and depression in DD patients was much higher than the rates in the general population. These findings are consistent with previous literature that has documented a positive association, though not a causal relationship, between certain psychological disorders and DD [37, 38]. We also found that DD patients have a lower quality of life than the general population. This is not surprising, as the symptoms of constipation and psychological disorders can both disrupt daily living.

DD patients in our study frequently experienced excessive straining at defecation and a sensation of incomplete evacuation, with average scores of >2 indicating that these symptoms occurred during at least 25% of defecations. A prolonged colonic transit time was seen in 48.5% of DD patients. Significant overlap (10%–60%) between slow-transit constipation and DD, as well as between slow-transit constipation and constipation-predominant irritable bowel syndrome, has been described previously [39, 40], which suggests that a proportion of patients with constipation may have colonic motor and/or sensory dysfunction coexisting with anorectal sensorimotor dysfunction. In our study population, type I dyssynergia was seen in 48.0%, type II in 29.8%, type III in 17.5%, and type IV in 4.7% of the patients. These rates are consistent with previous studies [6, 41]. The pathogenic mechanisms differ between the subtypes of DD, and the response to BFT may vary greatly between subtypes.

Recent controlled studies have shown that BFT is an effective treatment for pelvic floor dyssynergia [15, 42, 43]; BFT was found to be superior to laxatives, with improvement being maintained over long-term follow-up. The superior efficacy of BFT was also demonstrated by Wang et al. [44] in their study of 50 CC patients. Seventy percent of their patients felt that BFT was helpful, and 62.5% were improved. Clinical manifestations such as straining, abdominal pain, and bloating were relieved, and the use of oral laxatives decreased after BFT; the frequency of spontaneous bowel movements and the psychological state also improved significantly after BFT. In our study, at the end of training, there was a significant decrease in the total and subscale scores of clinical symptoms, including frequency of spontaneous bowel movements, straining at defecation, sensation of incomplete evacuation, sensation of blockage, and stool consistency, suggesting that BFT is an effective behavioral treatment for DD. The emotional centers in the brain can affect motility and sensation in the gut, acting mainly via the hypothalamic-hypophyseal axis and the brain-gut axis. Studies have shown that depression increases pelvic floor muscle tension and reduces rectal sensitivity [45, 46]. Mild depression can be relieved to some extent by psychological counseling and by explanation of the symptoms; both of these approaches are components of BFT. Therefore, BFT can improve the symptoms of both constipation and depression and help improve the overall quality of life of DD patients.

In our study, a harder stool was predictive of a substantial improvement in defecation symptoms after BFT.
This finding is not unexpected, because hard stool is a common feature of DD [1] and because BFT is known to improve dyssynergia and allow more efficient stool evacuation. Shim et al. studied 102 patients with CC and reported similar findings [47].

The SDS score was another predictor of BFT efficacy. Many patients with chronic diseases have concurrent depression. Depression is associated with poor treatment compliance, and some researchers consider that this may be an important factor in the failure of BFT in some patients [48–50]. In addition, patients with depression have autonomic nervous dysfunction; low vagal tone can result in decreased gastrointestinal motility [51]. However, Ding et al. have demonstrated that BFT has no effect on autonomic nervous function [35].

In our study, the only physiological parameter predictive of substantial improvement in defecation after BFT was the rectal first sensory threshold volume, with an elevated value being related to a poorer outcome with BFT. There could be several mechanisms for this. Normal rectal sensory function is essential for normal defecation. Patients with rectal hyposensitivity have elevated sensory thresholds, with resulting rectal dysfunction. Fecal retention in the rectum resulting from a decreased desire to defecate leads to absorption of moisture from the stool, making it dry and hard. In addition, Schouten et al. have shown that rectal hyposensitivity patients have lower rectal contractility in response to rectal dilatation than control patients [52]. Decreased colonic motility could be another reason: some rectal hyposensitivity patients have a primary decrease in colonic motility, and chronic dilatation of the rectum in these patients can cause a secondary decrease in proximal colonic motility (the rectum-colon reflex) [53]. Although there are studies supporting the efficacy of BFT in slow-transit constipation, the findings are still debated [11, 54]. Currently, there is no effective therapy available for rectal hyposensitivity; the options include sensory training, neural regulation, and surgery.

This study has some limitations. First, there is no uniform criterion for the curative effect of BFT on DD. We used as our measure of efficacy the ratio of the decrease between the pretraining and posttraining constipation severity scores to the pretraining score. The constipation severity score used in our study combined items from the Cleveland Clinic Constipation Score and the Rome III criteria; however, we did not test the reliability and validity of this questionnaire, and it may not accurately reflect patients’ constipation symptoms. Second, BFT efficacy was assessed at the completion of the BFT course; we did not assess the long-term outcome of BFT, and the predictors of the long-term efficacy of BFT remain unknown.
## 5. Conclusion
BFT improves the clinical symptoms of DD patients. A high SDS score and an elevated first rectal sensory threshold volume are independent predictors of a poor outcome with BFT. Treatment for depression and rectal hyposensitivity could optimize the effects of BFT in DD patients.
---
*Source: 1019652-2017-08-29.xml* | 1019652-2017-08-29_1019652-2017-08-29.md | 48,837 | Efficacy and Predictors for Biofeedback Therapeutic Outcome in Patients with Dyssynergic Defecation | Ting Yu; Xiaoxue Shen; Miaomiao Li; Meifeng Wang; Lin Lin | Gastroenterology Research and Practice
(2017) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2017/1019652 | 1019652-2017-08-29.xml | ---
## Abstract
Aim. To evaluate the short-term efficacy of biofeedback therapy (BFT) for dyssynergic defecation (DD) and to explore the predictors of the efficacy of BFT. Methods. Clinical symptoms, psychological state, and quality of life of patients before and after BFT were investigated. All patients underwent lifestyle survey and anorectal physiology tests before BFT. Improvement in symptom scores was considered proof of clinical efficacy of BFT. Thirty-eight factors that could influence the efficacy of BFT were studied. Univariate and multivariate analysis was conducted to identify the independent predictors. Results. Clinical symptoms, psychological state, and quality of life of DD patients improved significantly after BFT. Univariate analysis showed that efficacy of BFT was positively correlated to one of the 36-item Short-Form Health Survey terms, the physical role function (r=0.289; P=0.025), and negatively correlated to the stool consistency (r=−0.220; P=0.032), the depression scores (r=−0.333; P=0.010), and the first rectal sensory threshold volume (r=−0.297; P=0.022). Multivariate analysis showed depression score (β = −0.271; P=0.032) and first rectal sensory threshold volume (β = −0.325; P=0.013) to be independent predictors of BFT efficacy. Conclusion. BFT improves the clinical symptoms of DD patients. Depression state and elevated first rectal sensory threshold volume were independent predictors of poor outcome with BFT.
---
## Body
## 1. Introduction
Chronic constipation (CC) is diagnosed when there is at least a 6-month history of symptoms such as infrequent bowel movement, reduced stool volume, hard stools, and excessive straining at defecation [1]. Treatment can be very difficult. The median prevalence is 16% in the US and is as high as 33.5% in adults aged 60–101 years [2]. The overall prevalence in Chinese adults is 16%–20% [3].Primary constipation consists of several overlapping subtypes, among which dyssynergic defecation (DD) is relatively common [4, 5]. Patients with DD have symptoms of obstructive defecation, such as severe straining during defecation and a sensation of a “blockage” and of incomplete evacuation. The physiological mechanisms of DD include inability to coordinate abdominal, rectoanal, and pelvic floor muscles during defecation because of causes such as inadequate rectal and/or abdominal propulsive force, impaired anal relaxation (i.e., <20% relaxation of basal resting pressure), or increased anal outlet resistance as a result of paradoxical external anal sphincter or puborectalis contraction [6, 7]. Pharmacological therapies that are usually effective in CC, such as bulking agents, osmotic laxatives, stimulant laxatives, and stool softeners [8], are often ineffective in DD patients [9].Biofeedback therapy (BFT), which is based on behavior modification [10], can be used to train DD patients to defecate effectively. Patients are taught to brace the abdominal wall muscles and relax the pelvic floor muscles during defecation, and efforts are also made to modify sensory perception in the rectum [11]. The first application of BFT for treatment of CC due to DD was in 1987 [12]. Since then, a number of controlled studies have shown that BFT can be more effective than laxatives, muscle relaxants, and placebo, with benefits lasting for at least 12 months [13–15]. Based on these findings, BFT has been recognized as the most effective treatment for DD for several years [16, 17]. However, symptomatic improvement after BFT has varied widely between studies, ranging from 44% to 100% [18]. Few data are available regarding the factors predictive of success of BFT [19]. In our experience, we have seen that anorectal physiology, psychological state, quality of life, and lifestyle factors can all influence the efficacy of BFT.The aim of this study was to investigate the short-term efficacy of BFT and to identify the clinical and physiological factors that predict success or failure following BFT in Chinese patients.
## 2. Material and Methods
### 2.1. Patients
In this retrospective study, all adult patients diagnosed with CC due to DD at the Department of Gastroenterology of the First Affiliated Hospital of Nanjing Medical University, between January 1, 2012, and October 30, 2015, were eligible for inclusion. CC was diagnosed if the patient had at least two of the following constipation symptoms for >6 months: (1) infrequent stools (<3 bowel movements/week); (2) hard or lumpy stools (Bristol stool form scale score of 1-2) [20]; (3) straining at stool; (4) sensation of incomplete evacuation after bowel movement; or (5) sensation of anorectal blockage [21]. The presence of DD was determined using high-resolution anorectal manometry (HR-ARM) and rectal balloon expulsion test. Patients presented with inappropriate contraction or inadequate propulsive forces in HR-ARM and prolonged balloon expulsion time were considered to have DD. None of the patients had responded to standard management of constipation (e.g., increased dietary fiber and fluid intake or laxatives). Patients were excluded from the study if they (1) were <18 years in age, (2) had structural bowel disease or history of abdominal surgery, (3) had mental illness, (4) had recently received psychotropic drugs [22], (5) were pregnant, or (6) had not completed a full course of BFT (4 sessions).This study was approved by the Ethics Committee of the First Affiliated Hospital of Nanjing Medical University (2016-SRFA-064).
### 2.2. Constipation Severity
A questionnaire (Table1) adapted from the one developed by the Cleveland Clinic was used to assess defecatory symptoms [23] such as frequency of spontaneous bowel movements, stool consistency, straining during defecation, sensation of incomplete evacuation, sensation of blockage, and painful defecation. The latter four are deemed to be relatively specific for DD and were scored on a scale of 0 to 3, where 0 = never occurred, 1 = occurred occasionally, 2 = occurred during 25% of defecations, and 3 = occurred during 50% of defecations. The frequency of spontaneous bowel movements was scored as 0 = defecation interval 1-2 days, 1 = defecation interval 3 days, 2 = defecation interval 4-5 days, and 3 = defecation interval > 5 days. Stool consistency was evaluated according to the Bristol stool scale (a 7-point scale, ranging from 1 = separate hard lumps like nuts to 7 = watery) [20]; in this study, the scores were allotted as follows: Bristol type 4–7 = score 0, Bristol type 3 = score 1, Bristol type 2 = score 2, and Bristol type 1 = score 3.Table 1
Scoring system for symptoms of DD.
Grading/score
Defecation interval (days)
Straining
Sensation of incomplete evacuation
Sensation of blockage
Painful defecation
Stool consistency
0
1-2
None
None
None
None
BSS: 4–7
1
3
Occurs occasionally
Occurs occasionally
Occurs occasionally
Occurs occasionally
BSS: 3
2
4-5
Occurs during >25% of defecations
Occurs during >25% of defecations
Occurs during >25% of defecations
Occurs during >25% of defecations
BSS: 2
3
>5
Occurs during >50% of defecations
Occurs during >50% of defecations
Occurs during >50% of defecations
Occurs during >50% of defecations
BSS: 1
DD = dyssynergic defecation; BSS = Bristol stool scale.
### 2.3. Assessment of Psychological State and Quality of Life
Zung’s Self-Rating Anxiety Scale (SAS) [24] and Self-Rating Depression Scale (SDS) [25] were used to evaluate the levels of anxiety and depression. In Chinese populations, SAS ≥ 50 and SDS ≥ 53 represent diagnosable anxiety and depression [26]. The 36-item Short-Form Health Survey (SF-36) was used to evaluate quality of life [27]. The SF-36 consists of eight sections: vitality, physical functioning, bodily pain, general health perceptions, physical role functioning, emotional role functioning, social role functioning, and mental health. The scores in each section are the weighted sums of the scores for each question in that section. The scale is directly transformed into a 0–100 scale on the assumption that each question carries equal weight. The higher the score, the better the patient’s quality of life.
### 2.4. Lifestyle Survey
Information on physical activity, work pressure, and sleep quality were obtained from questionnaires filled in at first contact with the patient. Physical activity was assessed by one question on the frequency of exercise of at least 30 minutes per session during the past week; the possible responses were “often,” “sometimes,” “seldom,” and “never.” Work pressure was graded as “low,” “normal,” high,” and “very high.” Sleep quality was assessed by the Pittsburgh Sleep Quality Index (PSQI) questionnaire [28]. The PSQI assesses seven components of sleep: the quality, latency, duration, and efficiency of sleep, sleep disturbances, use of sleeping medication, and daytime dysfunction. Each component is scored from 0 to 3, and the seven component scores are summed to gain a global score. In Chinese populations, a PSQI global score > 7 indicates poor sleep quality [29].There were six questions on the frequency and/or volume of consumption of certain food items. Volume of intake was graded as “low,” “normal,” “high,” or “very high,” and frequency of consumption as “often,” “sometimes,” “seldom,” or “never.” Thus, we recorded the frequency and volume of consumption of vegetables (seldom, 250–<500 g/d, 500–1000 g/d, and >1000 g/d); fruits (seldom, 100–200 g/d, 200–500 g/d, and >500 g/d); and water (<500 mL/d, 500–1000 mL/d, >1000 mL/d). Predilection for a high-fat diet was also recorded (yes/no).
### 2.5. Rectal Balloon Expulsion Test
The time required for subjects to expel a rectal balloon filled with 50 mL of warm water while seated in privacy on a commode was measured. The balloon was removed if the subject was not able to expel the balloon within 1 minute [30, 31].
### 2.6. Colonic Transit Study
Colonic transit was assessed using radiopaque marker techniques. In brief, the patient ingested a single capsule containing 24 cylindrical radiopaque markers of 2 mm diameter and 6 mm length on day 1. A supine radiograph of the abdomen was obtained on day 3 (i.e., 72 hours later) to assess the number and distribution of the markers in the colon; patients were deemed positive for delayed colonic transit if there were >4 markers distributed throughout the colon [32, 33].
### 2.7. High-Resolution Anorectal Manometry
A novel solid-state HR-ARM device (Manoscan AR 360; Given Imaging, Yokneam, Israel) with 12 sensors was used for anorectal manometry. The procedure was performed after defecation. The patient was placed in the left lateral decubitus position, with the hips flexed to 90°. The rectal balloon, with the attached catheter, was placed 3 cm proximal to the upper part of the anal sphincter. Measurements were made in the following order: resting anal and rectal pressure (20–30 seconds), pressure during squeeze (best of three attempts, with a maximum duration of 20–30 seconds per attempt), and pressure during bearing down as in defecation (best of three attempts, with 20–30 seconds per attempt) [34]. Rectal sensation was simultaneously evaluated; for this, the rectal balloon was progressively distended in 10 mL increments from 0 mL to 50 mL, and threshold volumes for first sensation, urgency, and maximum discomfort were recorded.Four phenotypes of DD have been recognized based on HR-ARM: type I dyssynergia, in which there is an adequate increase (≥40 mmHg) in rectal pressure, accompanied by a paradoxical simultaneous increase in anal pressure; type II dyssynergia, in which there is an inadequate increase (<40 mmHg) in rectal pressure (poor propulsive force), accompanied by a paradoxical increase in anal pressure; type III dyssynergia, in which there is an adequate increase (≥40 mmHg) in rectal pressure, accompanied by failure of reduction in anal pressure (to ≤20% of baseline pressure); and type IV dyssynergia, in which there is an inadequate increase (<40 mmHg) in rectal pressure (poor propulsive force), accompanied by failure of reduction in anal pressure (to ≤20% of baseline pressure) [1].
### 2.8. Biofeedback Training
The Polygraf ID 8 (Medtronic Ltd, Denmark) was used for biofeedback training. Patients received a 1-hour biofeedback training once every other day for the first 2 weeks, and 2-3 times per week thereafter. For the training session, the patient was asked to lie on the right side, and a single manometry catheter and anal electrode were inserted into the patient’s anorectal canal at the sphincter. The catheter and the electrode were connected to the Polygraf ID, which displayed the data collected in the anorectal canal in a simple graphical format. The biofeedback application displayed a column, which the patient navigated using the pelvic floor muscles. By contracting and relaxing the pelvic floor muscles, the patient could move the signal level indicator up and down. The patient was instructed to try and keep the signal level within the limits of the column, while maintaining awareness of the changes in the pelvic floor muscle activity. They could thus learn to modulate the activity of the anorectal muscles [35]. During the training period, patients were required to practice at home, using the squeezing and relaxing maneuvers for 20 minutes at a time, 2-3 times/week. At the conclusion of biofeedback training, all patients were told that their pushing efforts had improved; this ensured that patients would be motivated to return for a follow-up and have positive expectations during the follow-up assessments.
### 2.9. Evaluation of Biofeedback Treatment Efficacy
Treatment efficacy was assessed at the completion of the BFT session. Treatment efficacy was expressed as a ratio, that is, the difference between the pretraining and posttraining constipation severity scores divided by the pretraining score, and graded as “very efficacious” (score > 0.05), “efficacious” (score 0.25–0.50), or “not efficacious” (score < 0.25).
### 2.10. Statistical Analysis
All data were analyzed using SPSS version 20.0 (IBM Corp., Armonk, NY, USA). Continuous variables were expressed as means ± standard deviation or medians (range), and categorical variables as relative frequencies. Student’st-test or the Mann–Whitney U test was used to compare continuous variables, and the chi-square test or Fisher’s exact test for categorical variables. Univariate and multivariate analysis was used to identify the predictors of BFT efficacy. P<0.05 was considered statistically significant.
## 3. Results
The data of 171 patients (69 men and 102 women; mean age, 54.0 ± 23.3 years) were analyzed.
### 3.1. Baseline Clinical Symptoms, Psychological State, and Quality of Life
The mean disease duration was 6.5 ± 2.5 years. In this study population, 74.9% (128/171) of patients had not had spontaneous bowel movements over the past 2 years, and 93.0% (159/171) had a history of long-term use of stimulant laxatives. The mean defecation interval was 1.95 ± 1.20 days, and the mean stool consistency score was 1.82 ± 1.20. Almost all patients had complaints of straining during bowel movement, sensation of incomplete defecation, sensation of blockage, or pain during defecation. Table 2 shows the defecatory symptom scores.

Table 2. Symptom scores before and after BFT.

| Clinical symptoms | Before BFT | After BFT | P |
|---|---|---|---|
| Defecation interval (days) | 1.95 ± 1.20 | 1.20 ± 0.91 | 0.039 |
| Straining | 2.75 ± 1.63 | 1.60 ± 1.15 | 0.042 |
| Sensation of incomplete evacuation | 2.50 ± 1.35 | 1.62 ± 1.15 | 0.048 |
| Sensation of blockage | 1.82 ± 1.40 | 0.95 ± 1.07 | 0.021 |
| Painful defecation | 1.20 ± 0.90 | 1.17 ± 0.74 | 0.109 |
| Stool consistency | 1.82 ± 1.20 | 0.96 ± 1.13 | 0.034 |
| Total | 12.36 ± 6.00 | 7.61 ± 4.52 | 0.011 |

Data are expressed as mean ± standard deviation. BFT = biofeedback therapy.

The anxiety and depression scores were 40.0 ± 15.5 and 50.1 ± 13.5, respectively, which were significantly higher than the Chinese norms (33.80 ± 5.90 and 41.88 ± 10.57, respectively; Table 3) [26]; on the basis of these scores, 22.2% (38/171) and 33.9% (62/171) of the patients had anxiety and depression, respectively.
Table 3. SAS and SDS scores before and after BFT.

| | Before BFT | After BFT | P |
|---|---|---|---|
| SAS | 40.0 ± 15.5 | 33.5 ± 10.9 | 0.004 |
| SDS | 50.1 ± 13.5 | 46.0 ± 13.5 | 0.023 |

Data are expressed as mean ± standard deviation. SAS = Zung’s Self-Rating Anxiety Scale; SDS = Zung’s Self-Rating Depression Scale; BFT = biofeedback therapy.

Table 4 shows the scores of the DD patients in the different sections of the SF-36. All scores were significantly lower than the Chinese norms [36].

Table 4. Scores for different quality of life indicators before and after BFT.
| Quality of life indicator | Before BFT | After BFT | P |
|---|---|---|---|
| General health perception | 41.3 ± 19.0 | 63.4 ± 19.2 | <0.001 |
| Physical functioning | 84.0 ± 42.8 | 88.5 ± 39.2 | 0.045 |
| Physical role functioning | 60.5 ± 34.9 | 72.6 ± 39.0 | 0.033 |
| Emotional role functioning | 63.8 ± 32.0 | 75.4 ± 37.3 | 0.038 |
| Social role functioning | 74.0 ± 37.7 | 80.1 ± 37.5 | 0.087 |
| Bodily pain | 75.0 ± 40.0 | 86.3 ± 36.9 | 0.029 |
| Vitality | 62.1 ± 30.5 | 70.8 ± 23.0 | 0.040 |
| Mental health | 63.2 ± 23.6 | 65.9 ± 21.0 | 0.049 |

Data are expressed as mean ± standard deviation. BFT = biofeedback therapy.
### 3.2. Baseline Lifestyle Factors
Table 5 shows the physical activity, work pressure, sleep quality, and dietary habits of the DD patients before BFT.

Table 5. Frequency table of lifestyle characteristics.

| Characteristic | Frequency (n) |
|---|---|
| **Physical activity** | |
| Often | 41 |
| Sometimes | 67 |
| Seldom | 57 |
| Never | 6 |
| **Work pressure** | |
| Low | 106 |
| Normal | 30 |
| High | 26 |
| Very high | 9 |
| **Poor sleep quality** | |
| No | 118 |
| Yes | 53 |
| **Vegetable intake** | |
| Seldom | 19 |
| 250–<500 g/d | 44 |
| 500–1000 g/d | 67 |
| >1000 g/d | 41 |
| **Fruit intake** | |
| Seldom | 11 |
| 100–200 g/d | 90 |
| 200–500 g/d | 60 |
| >500 g/d | 10 |
| **Water intake** | |
| <500 mL/d | 53 |
| 500–1000 mL/d | 100 |
| >1000 mL/d | 18 |
| **High-fat diet predilection** | |
| Yes | 57 |
| No | 114 |
### 3.3. Baseline Anorectal Physiology
In this study, 48.5% (83/171) of patients presented with prolonged colonic transit time. The mean values of the manometric parameters were as follows: anal resting pressure, 82.5 ± 16.0 mmHg; maximum squeeze pressure, 208.3 ± 41.5 mmHg; rectal defecation pressure, 38.9 ± 8.6 mmHg; intrarectal pressure, 88.9 ± 15.3 mmHg; and rectoanal pressure differential, −42.0 ± 8.5 mmHg. The threshold volumes were 60.0 mL (range, 20.0–220.0 mL) for first sensation, 100.0 mL (range, 40.0–350.0 mL) for urgency, and 150.0 mL (range, 80.0–350.0 mL) for maximum discomfort. According to the HR-ARM results, 82/171 (48.0%), 51/171 (29.8%), 30/171 (17.5%), and 8/171 (4.7%) patients were classified as type I, type II, type III, and type IV DD, respectively.
### 3.4. Biofeedback Treatment Efficacy
Patients in this study received 10.0 ± 3.5 sessions of BFT. Treatment was assessed as “very efficacious” in 72.5% (124/171) of patients, “efficacious” in 8.2% (14/171), and “not efficacious” in 19.3% (33/171); thus, the total efficacy was 80.7% (Table 6).

Table 6. Clinical efficacy of BFT.

| Clinical efficacy | n | Proportion (%) | Grading of efficacy (%) |
|---|---|---|---|
| ≥75% | 73 | 42.7 | Very efficacious (72.5) |
| 50%–75% | 51 | 29.8 | |
| 25%–50% | 14 | 8.2 | Efficacious (8.2) |
| ≤25% | 33 | 19.3 | Not efficacious (19.3) |

BFT = biofeedback therapy.

There was a significant decrease in the total and subscale scores of bowel symptoms (defecation interval, straining at defecation, sensation of incomplete evacuation/blockage, and stool consistency; Table 2). Anxiety and depression were markedly improved, with significant decreases in the SAS and SDS scores after BFT (Table 3). In the SF-36, the scores for general health perception, physical functioning, and bodily pain increased significantly, indicating improvement in quality of life (Table 4).
### 3.5. Predictors of Outcome of BFT
Tables 7 and 8 show the associations between the efficacy of BFT and psychological state, quality of life, lifestyle factors, and anorectal physiology. Univariate analysis showed that BFT efficacy was positively correlated with the score for physical role function (r = 0.289; P = 0.025) and negatively correlated with the stool consistency score (r = −0.220; P = 0.032), the depression score (r = −0.333; P = 0.010), and the first sensory threshold volume (r = −0.297; P = 0.022; Table 7). Multivariate analysis showed that the depression score (β = −0.271; P = 0.032) and the first sensory threshold volume (β = −0.325; P = 0.013) were independent predictors of BFT efficacy (Table 8).

Table 7. Univariate analysis of predictors of outcome of BFT.

| Variables | r | P |
|---|---|---|
| **General information** | | |
| Age | −0.095 | 0.440 |
| Gender | −0.112 | 0.202 |
| Constipation duration | 0.115 | 0.197 |
| **Symptoms** | | |
| Defecation interval | −0.062 | 0.683 |
| Straining | −0.121 | 0.149 |
| Sensation of incomplete evacuation | −0.092 | 0.450 |
| Sensation of blockage | −0.145 | 0.106 |
| Painful defecation | −0.040 | 0.849 |
| Stool consistency | −0.220 | 0.032 |
| **Psychological status** | | |
| SAS | −0.184 | 0.093 |
| SDS | −0.333 | 0.010 |
| **Quality of life indicators** | | |
| General health perception | 0.135 | 0.116 |
| Physical functioning | 0.112 | 0.202 |
| Physical role function | 0.289 | 0.025 |
| Emotional role functioning | 0.120 | 0.207 |
| Social role functioning | 0.153 | 0.104 |
| Bodily pain | 0.046 | 0.751 |
| Vitality | 0.196 | 0.084 |
| Mental health | 0.205 | 0.057 |
| **Lifestyle** | | |
| Physical activity | −0.079 | 0.666 |
| Work pressure | −0.089 | 0.490 |
| Poor sleep quality | −0.078 | 0.666 |
| Vegetable intake | 0.145 | 0.106 |
| Fruit intake | −0.062 | 0.683 |
| Water intake | −0.095 | 0.468 |
| High-fat diet predilection | 0.017 | 0.800 |
| **Anorectal physiology** | | |
| BET time | −0.188 | 0.091 |
| CTT | −0.062 | 0.711 |
| **HR-ARM** | | |
| Anal resting pressure | 0.066 | 0.705 |
| Maximum squeeze pressure | −0.030 | 0.761 |
| Rectal defecation pressure | 0.082 | 0.650 |
| Intrarectal pressure | 0.044 | 0.795 |
| Rectoanal pressure differential | 0.197 | 0.090 |
| First sensation volume | −0.297 | 0.022 |
| Urgency volume | −0.178 | 0.091 |
| Maximum discomfort volume | −0.074 | 0.700 |
| DD subtype | −0.099 | 0.365 |

BFT = biofeedback therapy; SAS = Zung’s Self-Rating Anxiety Scale; SDS = Zung’s Self-Rating Depression Scale; BET = balloon expulsion test; CTT = colonic transit time.

Table 8. Multiple linear regression analysis of predictors of BFT outcome.
| Variables | β coefficient | 95% CI | P |
|---|---|---|---|
| Stool consistency | −0.110 | −0.213 to −0.032 | 0.176 |
| SDS | −0.271 | −0.506 to −0.036 | 0.032 |
| Physical role function | 0.112 | 0.204 to 0.020 | 0.172 |
| First sensation volume | −0.325 | −0.534 to −0.012 | 0.013 |

BFT = biofeedback therapy; SDS = Zung’s Self-Rating Depression Scale.
## 4. Discussion
In this study, we evaluated the efficacy of BFT in DD and attempted to identify the factors that could predict the success of BFT. We found that BFT could improve the clinical symptoms of patients with DD. The psychological state and the rectal first sensory threshold volume were independent predictors of BFT outcome.

The prevalence of anxiety and depression in DD patients was much higher than the rates in the general population. These findings are consistent with previous literature documenting a positive association (though not a causal relationship) between certain psychological disorders and DD [37, 38]. We found that DD patients also have a lower quality of life than the general population. This is not surprising, as the symptoms of constipation and psychological disorders can both disrupt daily living.

DD patients in our study frequently experienced excessive straining at defecation and a sensation of incomplete evacuation, with average scores >2 indicating that these symptoms occurred during at least 25% of defecations. Prolonged colonic transit time was seen in 49.2% of DD patients. Significant overlap (10%–60%) between slow-transit constipation and DD, as well as between slow-transit constipation and constipation-predominant irritable bowel syndrome, has been described previously [39, 40], which suggests that a proportion of patients with constipation may have colonic motor and/or sensory dysfunction coexisting with anorectal sensorimotor dysfunction. In our study population, type I dyssynergia was seen in 48.6%, type II in 28.4%, type III in 20.8%, and type IV in 2.1% of patients. These rates are consistent with previous studies [6, 41]. The pathogenic mechanisms differ between the subtypes of DD, and the response to BFT may vary greatly between subtypes.

Recent controlled studies have shown that BFT is an effective treatment for pelvic floor dyssynergia [15, 42, 43]; BFT was found to be superior to laxatives, with improvement being maintained over long-term follow-up. The superior efficacy of BFT was also demonstrated by Wang et al. [44] in their study of 50 CC patients: 70% of their patients felt that BFT was helpful, and 62.5% were improved. Clinical manifestations such as straining, abdominal pain, and bloating were relieved, and oral laxative use decreased after BFT; the frequency of spontaneous bowel movements and the psychological state also improved significantly after BFT. In our study, at the end of training, there was a significant decrease in the total and subscale scores of clinical symptoms, including frequency of spontaneous bowel movements, straining at defecation, sensation of incomplete evacuation, sensation of blockage, and stool consistency, suggesting that BFT is an effective behavioral treatment for DD.

The emotional centers in the brain can affect motility and sensation in the gut, acting mainly via the hypothalamic-hypophyseal axis and the brain-gut axis. Studies have shown that depression increases pelvic floor muscle tension and reduces rectal sensitivity [45, 46]. Mild depression can be relieved to some extent by psychological counseling and by explanation of the symptoms. Both of these approaches are components of BFT; therefore, BFT can improve the symptoms of both constipation and depression and help improve the overall quality of life of DD patients.

In our study, a harder stool was predictive of a substantial improvement in defecation symptoms after BFT.
This finding is not unexpected, because hard stool is a common feature of DD [1] and because BFT is known to improve dyssynergia and allow more efficient stool evacuation. Shim et al. studied 102 patients with CC and reported similar findings [47].

The SDS score was another predictor of BFT efficacy. Many patients with chronic diseases have concurrent depression. Depression is associated with poor treatment compliance, and some researchers consider that this may be an important factor in the failure of BFT in some patients [48–50]. In addition, patients with depression have autonomic nervous dysfunction; low vagal tone can result in decreased gastrointestinal motility [51]. However, Ding et al. have demonstrated that BFT has no effect on autonomic nervous function [35].

In our study, the only physiological parameter predictive of substantial improvement in defecation after BFT was the rectal first sensory threshold volume, with an elevated value being related to poorer outcome. There could be several mechanisms for this. Normal rectal sensory function is essential for normal defecation. Patients with rectal hyposensitivity have elevated sensory thresholds, with resulting rectal dysfunction. Fecal retention in the rectum resulting from a decreased desire to defecate leads to absorption of moisture from the stool, making it dry and hard. In addition, Schouten et al. have shown that rectal hyposensitivity patients have lower rectal contractility in response to rectal dilatation than control patients [52]. Decreased colonic motility could be another reason: some rectal hyposensitivity patients have a primary decrease in colonic motility, and chronic dilatation of the rectum in these patients can cause a secondary decrease in proximal colonic motility (the rectum-colon reflex) [53]. Although there are studies supporting the efficacy of BFT in slow-transit constipation, the findings are still debated [11, 54]. Currently, there is no effective therapy available for rectal hyposensitivity; the options include sensory training, neural regulation, and surgery.

The results of this study may have been affected by some limitations. First, there is no uniform criterion for the curative effect of BFT on DD. We used, as the measure of efficacy, the decrease in the constipation severity score from pretraining to posttraining divided by the pretraining score. The constipation severity score used in our study combined items from the Cleveland Clinic Constipation Score and the Rome III criteria; however, we did not test the reliability and validity of this questionnaire, so it may not reflect patients’ constipation symptoms accurately. Second, BFT efficacy was assessed at the completion of the BFT sessions; we did not assess the long-term outcome of BFT, and the predictors of long-term efficacy remain unknown.
## 5. Conclusion
BFT improves the clinical symptoms of DD patients. High SDS score and elevated first rectal sensory threshold volume are independent predictors of poor outcome with BFT. Treatment for depression and rectal hyposensitivity could optimize the effects of BFT in DD patients.
---
*Source: 1019652-2017-08-29.xml* | 2017 |
# Environmental Life-Cycle Analysis of Hybrid Solar Photovoltaic/Thermal Systems for Use in Hong Kong
**Authors:** Tin-Tai Chow; Jie Ji
**Journal:** International Journal of Photoenergy
(2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/101968
---
## Abstract
While the sheet-and-tube absorber is generally recommended for flat-plate photovoltaic/thermal (PV/T) collector design because of its simplicity and promising performance, the rectangular-channel absorber has also been shown to be a good alternative. Before a new energy technology like PV/T is fully implemented, its environmental superiority over competing options should be assessed, for instance, by evaluating its consumption levels throughout its production and service life. Although there have been plenty of environmental life-cycle assessments of domestic solar hot water systems and PV systems, related work on hybrid solar PV/T systems has been very scarce. So far there is no reported work on the assessment of PV/T collectors with a channel-type absorber design. This paper reports an evaluation of the energy payback time and the greenhouse gas payback time of free-standing and building-integrated PV/T systems in Hong Kong, based on two case studies of PV/T collectors with modular channel-type aluminium absorbers. The results confirm the long-term environmental benefits of PV/T applications.
---
## Body
## 1. Introduction
A photovoltaic/thermal (PV/T) system is a combination of photovoltaic (PV) and solar thermal devices that generates both electricity and heat from one integrated system. With solar cells as (part of) the thermal absorber, the hybrid design is able to maximize the energy output from an allocated space reserved for solar application. Air and/or water can be used as the heat removal fluid(s) to lower the solar cell working temperature and improve the electricity conversion efficiency. Comparatively, water-type product designs provide more effective cooling than their air-type counterparts because of the favorable thermal properties of water. Those with flat-plate collectors meet the requirements of low-temperature water heating systems well. They are also ideal for preheating purposes when hot water at a higher temperature is required.

While the sheet-and-tube absorber is one common feature of flat-plate collectors, the use of rectangular-channel absorbers has also been examined extensively [1–3]. An aluminum water-in-channel PV/T collector design is recommended by the authors, with prototypes well tested in both free-standing and building-integrated configurations [4, 5]. Through the adoption of the channel absorber design, the potential problem of low fin efficiency can be readily alleviated. Based on the thermosyphon working principle, the collector performance is found to depend on geography, working well in warmer climate zones. In the Asia Pacific region, most large cities are dominated by air-conditioned buildings where space cooling demands are high. In these buildings, the exposed facades provide a very good opportunity for accommodating building-integrated systems, hence the BiPV/T. When part of the solar radiation that falls on the building facade is directly converted to useful thermal and electric power, the portion of solar energy transmitted through the external facade is reduced; hence, the space cooling load is reduced.

Through dynamic simulation with the use of experimentally validated system models and the typical meteorological year (TMY) data of Hong Kong, the cost payback times (CPBT) of free-standing and building-integrated PV/T systems were found to be 12.1 and 13.8 years, respectively [6, 7]. The assessments were taken, respectively, at their best tilted and vertical collector positions for maximizing system outputs. It is expected that these CPBT will gradually shorten as PV technology progresses. In this paper, the environmental life-cycle analysis (LCA) of such hybrid solar systems as applied in Hong Kong is reported.
## 2. Environmental Life-Cycle Analysis
LCA is a technique for assessing various aspects associated with the development of a product and its potential impact throughout the product’s life [8]. Before a new energy technology is fully implemented, its environmental superiority over competing options can be asserted by evaluating its consumption levels (such as cost investments, energy uses, and GHG emissions) throughout its entire production and service life. In terms of economic analysis, a simplified approach is to ignore the time element so that the cost payback time (CPBT) can be used: the cash inflows from successive years are added together until the cumulative cash inflow equals the required investment. In analogy to the economic evaluation, two environmental cost-benefit parameters, the energy payback time (EPBT) and the greenhouse gas payback time (GPBT), can be used to evaluate the time period after which the real environmental benefit starts [9]. EPBT is the period that a system has to be in operation in order to save the amount of primary energy that has been spent for production, operation, and maintenance of the system. It is the ratio of embodied energy to annual net energy output. In a BiPV/T system, for example,
$$\mathrm{EPBT}=\frac{\Sigma_{\mathrm{pvt}}+\Sigma_{\mathrm{bos}}-\Sigma_{\mathrm{mtl}}}{E_{\mathrm{pv}}+E_{\mathrm{t}}+E_{\mathrm{ac}}-E_{\mathrm{om}}},\tag{1}$$
where $\Sigma_{\mathrm{pvt}}$, $\Sigma_{\mathrm{bos}}$, and $\Sigma_{\mathrm{mtl}}$ are, respectively, the embodied energy of the PV/T collectors, of the balance of system (BOS), and of the replaced building materials; $E_{\mathrm{pv}}$ is the annual useful electricity output, $E_{\mathrm{t}}$ the annual useful heat gain (equivalent), $E_{\mathrm{ac}}$ the annual electricity saving of the HVAC system due to the space thermal load reduction, and $E_{\mathrm{om}}$ the annual electricity consumed in system operation and maintenance activities. $\Sigma_{\mathrm{mtl}}$ and $E_{\mathrm{ac}}$ can be omitted in free-stand PV/T system evaluation. Hence,
$$\mathrm{EPBT}=\frac{\Sigma_{\mathrm{pvt}}+\Sigma_{\mathrm{bos}}}{E_{\mathrm{pv}}+E_{\mathrm{t}}-E_{\mathrm{om}}}.\tag{2}$$
Similarly, in terms of greenhouse gas (GHG) emission, for a BiPV/T system,
$$\mathrm{GPBT}=\frac{\Omega_{\mathrm{pvt}}+\Omega_{\mathrm{bos}}-\Omega_{\mathrm{mtl}}}{Z_{\mathrm{pv}}+Z_{\mathrm{t}}+Z_{\mathrm{ac}}},\tag{3}$$
where $\Omega$ stands for the embodied GHG (carbon dioxide equivalent) emission and $Z$ for the reduction of annual GHG emission from the local power plant owing to the BiPV/T operation. For the free-stand system,
$$\mathrm{GPBT}=\frac{\Omega_{\mathrm{pvt}}+\Omega_{\mathrm{bos}}}{Z_{\mathrm{pv}}+Z_{\mathrm{t}}}.\tag{4}$$
Thus EPBT and GPBT are functions of the related energy system performance and its environmental impacts, such as those of the power utilities, the building systems, local and overseas manufacturing, and the transportation and on-site handling of the PV/T collector system as a whole.
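For readers who prefer code, the four definitions translate directly into the short Python sketch below; the function and argument names are ours (S for Σ, O for Ω), not notation from the paper:

```python
# Sketch of equations (1)-(4); a direct transcription of the payback-time
# definitions above, not code from the paper. Embodied terms and annual
# terms must share consistent units (e.g., kWh and kWh/year, kg CO2-eq and
# kg CO2-eq/year), so each result is in years.

def epbt_bipvt(S_pvt, S_bos, S_mtl, E_pv, E_t, E_ac, E_om):
    """Eq. (1): EPBT of a building-integrated PV/T system."""
    return (S_pvt + S_bos - S_mtl) / (E_pv + E_t + E_ac - E_om)

def epbt_free_stand(S_pvt, S_bos, E_pv, E_t, E_om):
    """Eq. (2): EPBT of a free-stand system (no material credit or AC saving)."""
    return (S_pvt + S_bos) / (E_pv + E_t - E_om)

def gpbt_bipvt(O_pvt, O_bos, O_mtl, Z_pv, Z_t, Z_ac):
    """Eq. (3): GPBT of a building-integrated PV/T system."""
    return (O_pvt + O_bos - O_mtl) / (Z_pv + Z_t + Z_ac)

def gpbt_free_stand(O_pvt, O_bos, Z_pv, Z_t):
    """Eq. (4): GPBT of a free-stand PV/T system."""
    return (O_pvt + O_bos) / (Z_pv + Z_t)
```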
## 3. Aluminum Rectangular-Channel PV/T Systems
The sectional view of an aluminum rectangular-channel PV/T collector developed by the authors is shown in Figure 1. It is composed of the following layers: (i) front low-iron glass cover, (ii) crystalline silicon (c-Si) PV encapsulation, (iii) metallic thermal absorber constructed from extruded aluminum, (iv) thermal insulation layer with glass wool, and (v) back-cover steel sheet. The PV encapsulation includes TPT (tedlar-polyester-tedlar) and EVA (ethylene-vinyl acetate) layers at both sides of the solar cells. The rectangular-channel design strengthens the heat transfer and structural durability.

Figure 1. Cross-sectional view of the PV/T collector showing several absorber modules in integration (N.T.S.).

In a free-stand thermosyphon system, the PV/T collector carries a water tank with natural water circulation via interconnecting pipes; Figure 2 shows the external view. Water enters the collector at the lower header and leaves via the upper header. Table 1 lists the technical data of this PV/T collector for free-stand applications.

Table 1. Collector and technical design data of the free-stand PV/T system.
| Design parameters | Data |
|---|---|
| **Glazing (low-iron glass)** | |
| Thickness | 0.004 m |
| Emissivity | 0.88 |
| Extinction coefficient | 26/m |
| Refraction index | 1.526 |
| Depth of air gap underneath | 0.025 m |
| **PV encapsulation (TPT + EVA + solar cell + EVA + TPT + silicon gel)** | |
| Solar cell type | single-crystalline silicon |
| Cell area | 1.11 m² |
| Cell electrical efficiency at STC | 13% |
| Solar cell temperature coefficient | 0.005/K |
| Emissivity | 0.8 |
| Absorptivity | 0.8 |
| Packing factor (wrt glazing) | 63% |
| **Thermal absorber (aluminum)** | |
| No. of flat-box absorber modules | 15 |
| Absorber module size | 0.105 × 1.38 × 0.012 m |
| No. of headers | 2 |
| Header size | 1.575 × 0.025 (dia.) × 0.002 (thick) m |
| **Thermal insulation layer (glass wool)** | |
| Thickness | 0.03 m |
| **Back cover (galvanized iron)** | |
| Thickness | 0.001 m |
| **Water tank and connecting pipes** | |
| Water storage capacity | 155 kg |
| Tank length | 1.2 m |
| Tank diameter | 0.21 m |
| Pipe diameter | 0.015 m |
| Thickness of insulation layer at tank | 0.025 m |
| Thickness of insulation layer on pipe | 0.02 m |

Figure 2. Front view of the free-stand PV/T collector system.

A BiPV/T system, on the other hand, is composed of an array of PV/T collectors integrated into the external wall of an air-conditioned building (see Figure 3). The water tank is located at the rooftop, and the water circulation is again by means of thermosyphon. Table 2 lists the technical data of the BiPV/T wall system in our study.

Table 2. Collector and technical design data of the BiPV/T system.
| Design parameters | Data |
|---|---|
| **Front glazing (low-iron glass)** | |
| Thickness | 0.004 m |
| Surface area | 1.61 m² |
| Depth of air gap underneath | 0.025 m |
| **PV encapsulation (TPT + EVA + solar cell + EVA + TPT + silicon gel)** | |
| Solar cell type | single-crystalline silicon |
| Cell area | 0.81 m² |
| Cell electrical efficiency at STC | 13% |
| Solar cell temperature coefficient | 0.005/K |
| Emissivity | 0.8 |
| Absorptivity | 0.8 |
| Packing factor (wrt glazing) | 50% |
| **Thermal absorber (aluminum alloy)** | |
| Thermal capacity | 903 J/(kg·K) |
| Density | 2702 kg/m³ |
| Thermal conductivity | 237 W/(m·K) |
| Emissivity | 0.8 |
| Absorptivity | 0.9 |
| **Insulation material (glass wool)** | |
| Thickness | 0.03 m |
| Air gap between insulation layer and building wall | 0.02 m |
| **Building wall (brick)** | |
| Thickness | 0.15 m |
| Density | 1600 kg/m³ |
| Thermal capacity | 880 J/(kg·K) |
| Thermal conductivity | 1.0 W/(m·K) |
| **Water tank (steel) and connecting pipes (copper)** | |
| Water storage capacity | 0.46 m³ |
| Tank length | 1.5 m |
| Tank diameter | 0.54 m |
| Pipe diameter | 0.055 m |
| Thickness of insulation layer at tank | 0.025 m |
| Thickness of insulation layer on pipe | 0.02 m |

Figure 3. Front view of the BiPV/T system with water tank at top of wall.
## 4. Review of Previous Works on Flat Plate Collector Systems
### 4.1. Solar Hot Water Systems
Most LCA works on domestic solar hot water (DSHW) systems have come from EU countries [10–13]. Streicher et al. [10] evaluated the EPBT of solar thermal systems by dividing the system into components; the cumulative energy demand was obtained by multiplying the weight of the main components by their respective cumulative energy demand values. They estimated that in Germany the DSHW systems have EPBT from 1.3 to 2.3 years. In their study, construction credit was given to the collector system in integrated roof-mounting mode, for the savings in building materials, transportation, and construction works. The collector itself accounts for 89% and 85% of the total embodied energy in the roof-integrated and open-stand systems, respectively. Tsilingiridis et al. [11] found that in Greece the materials used, including steel and copper, make the major contribution to the environmental impacts. Ardente et al. [12] found that in Italy the indirect emissions (related to the production of raw materials) are about 80–90% of the overall GHG releases. Kalogirou [13] worked on a thermosyphon DSHW system in Cyprus, with the system thermal performance evaluated by a dynamic simulation program. The LCA determined that 77% of the embodied energy goes to the collector panels, 15% to the steel frame, 5% to piping, and the remainder accounts for less than 3% of the total. Considerable amounts of GHG can be saved, and the EPBT was estimated at around 1.1 years.

Outside Europe, the study of Crawford et al. [14] in Australia showed that although the CPBT of DSHW systems can be 10 years or more, the corresponding GPBT can be only around 2.5–5 years. In their study, a conversion factor of 60 kg CO2-eq/GJ was used to determine the GHG emission from the cumulative energy of the entire system. Arif [15] evaluated the environmental performance of DSHW systems in India; based on 100 litres per day and steady year-round usage, the EPBT was estimated at 1.6–2.6 years, depending on the local climate and the collector materials in use. In the LCA work of Hang et al. [16] on a range of solar hot water systems in the USA, dynamic thermal simulation was again applied.
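As a small illustration of the conversion-factor approach used by Crawford et al. [14], the sketch below charges cumulative embodied energy with GHG emissions at 60 kg CO2-eq/GJ; the numeric input is hypothetical, not data from that study:

```python
# Conversion-factor approach of Crawford et al. [14]: embodied GHG estimated
# from cumulative embodied energy. The example input is hypothetical.

CO2_PER_GJ = 60.0  # kg CO2-eq per GJ of cumulative energy [14]

def embodied_ghg_kg(cumulative_energy_gj: float) -> float:
    """Embodied GHG (kg CO2-eq) for a given cumulative energy (GJ)."""
    return CO2_PER_GJ * cumulative_energy_gj

print(embodied_ghg_kg(11.0))  # a system embodying 11 GJ -> ~660 kg CO2-eq
```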
### 4.2. PV Systems
In recent decades, many works have been reported on the life-cycle performance of PV systems in both free-stand and building-integrated configurations. Estimates of EPBT and GPBT have been repeatedly revised owing to advancements in PV technology.

The production of a PV module includes the following processes:

(i) silicon purification and processing,
(ii) silicon ingot slicing, and
(iii) PV module fabrication.

Silica is first melted and manufactured into metallurgical-grade silicon (MG-Si), then into electronic-grade silicon (EG-Si) through the Siemens process or into solar-grade silicon (SoG-Si) through the modified Siemens process [17]. Finally, after the Czochralski process (for sc-Si) or another production process, silicon is made available for solar cell production. The silicon ingot then needs to be sliced into wafers. The technologies of cell production include etching, doping, screen printing, and coating. The solar cells are then tested, packed, and interconnected with other components to form PV modules.

Alsema [18] studied the EPBT and the GHG emissions of grid-connected PV systems. The cumulative energy demands of sc-Si and mc-Si frameless modules were evaluated as 5700 and 4200 MJ/m², respectively; it was further pointed out that with the implementation of new manufacturing technologies, these figures could be as low as 3200 and 2600 MJ/m². Later on, Alsema et al. [19, 20] reviewed the important options available for further reducing the energy consumption and environmental impacts of the PV module production processes. As for the BOS, Alsema and Nieuwlaar [21] showed that, because less aluminum is used in the supporting structure, the energy requirement for the array support of a ground-mounted PV system was about 1800 MJ/m² but could be only 700 MJ/m² for a rooftop installation; hence rooftop systems have better potential for EPBT reduction than ground-mounted systems.

Mason et al. [22] studied the energy contents of the BOS components used in a 3.5 MWp mc-Si PV plant. By integrating the weight of the PV modules with the supports, the embodied energy of the BOS components was found to be as low as 542 MJ/m², a sharp reduction from previous estimations. Fthenakis and Kim [23] showed that in Japan the primary energy demand for sc-Si PV modules was in the range of 4160–15520 MJ/m², and the life-cycle GHG emission rates for PV systems in the United States were from 22 to 49 g CO2-eq/kWhe.

In Singapore, Kannan et al. studied a 2.7 kWp distributed PV system with sc-Si modules [24]. Specific energy consumptions for the PV modules and the inverters were estimated at 16 and 0.17 MWhe/kWp, respectively. The manufacturing of solar PV modules accounted for 81% of the life-cycle energy use, the aluminium supporting structure for about 10%, and the recycling of aluminium for another 7%. The EPBT was estimated to be 6.74 years. It was claimed that this could be reduced to 3.5 years if the primary energy use in PV module production were reduced by 50%.

In India, Nawaz and Tiwari [25] calculated the EPBT by evaluating the energy requirement for manufacturing a sc-Si PV system for open-field and rooftop conditions with BOS. Mitigation of CO2 emissions at the macrolevel (where the lifetimes of the battery and the PV system are the same) and the microlevel of the PV system was also studied. For a 1 m² sc-Si PV system, their estimations give an embodied energy of 666 kWh for silicon purification and processing, 120 kWh for cell fabrication, and 190 kWh for subsequent PV module production. Hence, without BOS, the embodied energy was estimated at 976 kWh/m² and the GHG emission at 27.23 kg/m².

In Hong Kong, Lu and Yang [26] investigated the EPBT and GPBT of a roof-mounted 22 kW BiPV system. It was found that 71% of the overall embodied energy comes from the PV modules, whereas the remaining 29% comes from the BOS. The EPBT of the PV system was then calculated as 7.3 years. Considering the fuel mixture composition of the local power stations, the corresponding GPBT is 5.2 years. Further, it was predicted that the possible range of EPBT for BiPV installations in Hong Kong is from 7.1 years (for optimal orientation) to 20 years (for a west-facing vertical facade).

Bankier and Gale [27] reviewed the EPBT of roof-mounted PV systems reported in the 10-year period 1996–2005. A large range of discrepancy was found. They pointed out that the limitations to the accuracy of the assessments came from the difficulties in determining realistic energy conversion factors and realistic energy values for human labor. According to their estimation, the appropriate range of EPBT for mc-Si PV module installations should be 2–8 years. A more recent review was done by Sherwani et al. [28]: the EPBT for sc-Si, mc-Si, and a-Si PV systems was estimated in the ranges of 3.2–15.5, 1.5–5.7, and 2.5–3.2 years, respectively, with corresponding GHG emissions of 44–280, 9.4–104, and 15.6–50 g CO2-eq/kWh.
### 4.3. PV/T Systems
While there have been plenty of EPBT and GPBT studies on solar thermal and PV systems, our literature review shows that those on PV/T systems have been very few. In particular, there is so far no reported work on the assessment of PV/T collectors with a channel-type absorber design.

Battisti and Corrado [29] made an evaluation based on a conventional mc-Si building-integrated system located in Rome, Italy. An experimental PV/T system with heat recovery for DSHW application was examined, with evaluations made for alternative heat recovery to replace either natural gas or electricity. Their results give the EPBT and GPBT of the PV system as 3.3 and 4.1 years; those of the PV/T system designed for natural gas replacement are 2.3 and 2.4 years.

Also in Italy, Tripanagnostopoulos et al. [30] evaluated the energy and environmental performance of their modified 3 kWp mc-Si PV and experimental water-cooled PV/T sheet-and-tube collector systems designed for horizontal-roof (free-stand) and tilted-roof (building-integrated) installations. The application advantage of the glazed/unglazed PV/T over the PV options was demonstrated through the better LCA performance. The EPBT of the PV and BiPV systems were found to be 2.9 and 3.2 years, whereas the GPBT were 2.7 and 3.1 years, respectively. For a PV/T system with a 35°C operating temperature, the EPBT of the PV/T and BiPV/T options were both 1.6 years, and the GPBT were 1.9 and 2.0 years, respectively. The study showed that nearly all of the environmental impacts are due to PV module production, aluminium parts (reflectors and heat-recovery unit), and copper parts (heat-recovery unit and hydraulic circuit), with barely significant contributions from the other system components, such as support structures or electrical/electronic devices. The disposal-phase contribution is again almost negligible.

Dubey and Tiwari [31] carried out an environmental impact analysis of a hybrid PV/T solar water heater for use in the Delhi climate of India. With a glazed sheet-and-tube flat-plate collector system designed for pump operation, the EPBT was found to be 1.3 years.
## 5. Environmental Analysis of Aluminum Rectangular-Channel PV/T Systems
### 5.1. EPBT of Free-Stand System
Skillful lamination of the solar cells onto the thermal absorber with layers of EVA and TPT is needed for PV/T collector production. The aluminum thermal absorber parts are made available by raw material mining and extraction, ingot melting, mechanical extrusion, machining, and assembly into a whole piece. The major-component production and assembly processes involve the front glass (low-iron), the PV-laminated absorber, the insulation material, and the aluminum frame. The supply was from the mainland. As for the BOS, the electrical components include inverters, electrical wiring, and electronic devices; the mechanical components include the water storage tank, pipework, supporting structure, and accessories. The embodied energy to be considered in the LCA includes all of the above during production, plus the energy related to transportation from factory to installation site, construction and testing, decommissioning and disposal, and any other end-of-life requirements.

Table 3 summarizes the materials used and the cumulative energy of the free-stand PV/T collector system. The cumulative energy intensity of the sc-Si PV module was estimated as 976 kWh/m2, with reference to [25, 26]. That of the inverter and electrical parts was taken as 5% of the PV module. The other values of cumulative energy intensity in MJ/unit were obtained from the Hong Kong government EMSD (Electrical and Mechanical Services Department) database, which covers the specific (per unit quantity) impact profile due to consumption of materials in the “Cradle-to-As-built” stage [32]. The total cumulative energy comes to 3041.8 kWh, or 1728 kWh/m2, for this free-stand system. Table 4 shows the distribution of the embodied energy in this case. It can be seen that the hybrid PV/T collector itself accounts for around 80% of the embodied energy. For the BOS, the water tank accounts for 11.4% and the other mechanical components for 7%, whereas the electrical accessories account for only 1.8%. ∑pvt and ∑bos are then 2429 and 613 kWh, respectively.
**Table 3.** Cumulative energy in the free-stand PV/T system.

| Component | Material | Quantity consumed | Cumulative energy intensity (MJ/unit) | Cumulative energy (kWh) |
| --- | --- | --- | --- | --- |
| **PV/T collector** | | | | |
| Front glazing | Low-iron glass (1.76 m2) | 19.7 kg | 19.7 | 107.9 |
| Thermal insulation | Glass wool | 1.69 kg | 31.7 | 14.9 |
| Thermal absorber | Aluminum absorber | 18.3 kg | 219 | 1114.7 |
| Frame and back cover | Aluminum | 1.78 kg | 219 | 108.0 |
| PV encapsulation | PV module | 1.11 m2 | 976 kWh/m2 | 1083.4 |
| **BOS** | | | | |
| Water tank | Stainless steel tank | 4.20 kg | 82.2 | 273.0 |
| | Tank insulation (glass wool) | 1.58 kg | 31.7 | 13.9 |
| | Aluminum cladding | 0.966 kg | 219 | 58.8 |
| Connecting pipe | Copper piping (15 mm dia.) | 2.4 m | 6.33 | 4.2 |
| | Pipe insulation (glass wool) | 0.0627 kg | 31.7 | 0.6 |
| Structural support and accessories | Steel stand | 14.2 kg | 29.2 | 115.2 |
| | Pipe fittings and structural joints | 7.19 kg | 140.0 | 93.3 |
| Inverter and electric wiring | | 5% of PV module | | 54.2 |
| **Total** | | | | **3041.8** |
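As a quick consistency check, the quoted ∑pvt and ∑bos can be reproduced by summing the last column of Table 3 (conversion from MJ uses 3.6 MJ/kWh, e.g., 19.7 kg × 19.7 MJ/kg ≈ 388 MJ ≈ 108 kWh for the front glazing). A minimal sketch in Python, with the values transcribed from Table 3:

```python
# Tally of Table 3 (free-stand system): cumulative energy per row, in kWh.
collector_kwh = [107.9, 14.9, 1114.7, 108.0, 1083.4]        # PV/T collector rows
bos_kwh = [273.0, 13.9, 58.8, 4.2, 0.6, 115.2, 93.3, 54.2]  # BOS rows

sum_pvt = sum(collector_kwh)  # -> 2428.9 kWh, quoted as 2429 kWh
sum_bos = sum(bos_kwh)        # -> 613.2 kWh, quoted as 613 kWh
total = sum_pvt + sum_bos     # -> 3042.1 kWh (3041.8 kWh in Table 3, after rounding)

print(f"sum_pvt = {sum_pvt:.1f} kWh, sum_bos = {sum_bos:.1f} kWh, total = {total:.1f} kWh")
```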
**Table 4.** Distribution of embodied energy in the PV/T collector systems (%).

| System component | Free-stand | BiPV/T |
| --- | --- | --- |
| PV/T collector: mechanical components | 44.2 | 51.8 |
| PV/T collector: electrical components | 35.6 | 37.7 |
| BOS: water tank | 11.4 | 4.9 |
| BOS: pipe and structural supports | 7.0 | 3.8 |
| BOS: electrical components | 1.8 | 1.9 |

With the installation of this PV/T system, two kinds of energy saving are involved: thermal energy for water heating and electrical energy; there is no air-conditioning saving. A thermal energy saving of 2650 MJ/year and an electricity saving of 473 MJ/year give an Et of 736 kWh/year and an Epv of 398 kWh/year, where a heat-to-electricity (primary energy) conversion factor of 0.33 has been applied to the 131.4 kWh/year of saved electricity. Mainly labor costs were considered in Eom, which is estimated as 41 kWh/year and is therefore not significant. With (2), the EPBT is found to be 2.8 years. This is much shorter than the expected CPBT of 12.1 years reported in our previous work [6]. Assuming that the working life of a PV/T system is similar to that of a PV system, that is, 15–30 years in general [29], it can be concluded that the EPBT in this case study is an order of magnitude lower than the expected working life.
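The arithmetic behind this EPBT figure can be laid out explicitly. A short sketch, assuming only the values stated above and in Table 6 (the 0.33 factor converts the saved electricity to its primary-energy equivalent):

```python
# Free-stand PV/T: EPBT by (2), EPBT = (sum_pvt + sum_bos) / (Epv + Et - Eom).
MJ_PER_KWH = 3.6
F_CONV = 0.33                          # heat-to-electricity conversion factor

sum_pvt, sum_bos = 2429.0, 613.0       # embodied energy, kWh (Table 3)

E_t = 2650.4 / MJ_PER_KWH              # thermal saving: 2650.4 MJ/yr -> 736.2 kWh/yr
E_pv = (473.2 / MJ_PER_KWH) / F_CONV   # 473.2 MJ/yr = 131.4 kWhe/yr -> ~398 kWh/yr primary
E_om = 41.0                            # operation and maintenance, kWh/yr

epbt = (sum_pvt + sum_bos) / (E_pv + E_t - E_om)
print(f"EPBT = {epbt:.1f} years")      # -> 2.8 years
```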
### 5.2. BiPV/T System
Table 5 summarizes the materials used and the cumulative energy in the 9.66 m2 BiPV/T case. Accordingly, the values of ∑pvt and ∑bos are, respectively, 11258 and 1328 kWh. ∑mtl is estimated as 594 kWh, making reference to the work of Streicher et al. [10] and adjusted by the cost of living. Taking advantage of the building material replacement, the net cumulative energy intensity reduces to 1241 kWh/m2. The embodied energy distribution of this BiPV/T system is also given in Table 4. It can be seen that for this building-integrated case the portion of the collector increases to 89%. For the BOS, the water tank accounts for 4.9%, the pipe and supporting components account for 3.8%, and the electrical components remain at less than 2%.
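The 1241 kWh/m2 figure quoted above follows from the Table 5 total less the building-material credit, spread over the collector area; a one-line check:

```python
# Net cumulative energy intensity of the BiPV/T system after the material credit.
total_kwh = 12585.2   # Table 5 total (= sum_pvt + sum_bos)
sum_mtl = 594.0       # credit for replaced building materials, kWh
area_m2 = 9.66        # collector area
print(f"{(total_kwh - sum_mtl) / area_m2:.0f} kWh/m2")  # -> 1241
```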
**Table 5.** Cumulative energy in the BiPV/T system.

| Component | Material | Quantity consumed | Cumulative energy intensity (MJ/unit) | Cumulative energy (kWh) |
| --- | --- | --- | --- | --- |
| **PV/T collector** | | | | |
| Front glazing | Low-iron glass (1.61 m2 × 6) | 99.6 kg | 19.7 | 545.0 |
| Thermal insulation | Glass wool | 9.50 kg | 31.7 | 83.7 |
| Thermal absorber | Aluminum absorber | 86.7 kg | 219 | 5273.8 |
| Frame and back cover | Aluminum | 10.1 kg | 219 | 611.8 |
| PV encapsulation | PV module | 4.86 m2 | 976 kWh/m2 | 4743.4 |
| **BOS** | | | | |
| Water tank | Stainless steel tank | 19.9 kg | 82.2 | 454.0 |
| | Insulation (glass wool) | 2.14 kg | 31.7 | 18.8 |
| | Aluminum cladding | 1.53 kg | 219 | 93.0 |
| Connecting pipe | Copper piping (55 mm dia.) | 7 m | 40.1 | 77.9 |
| | Pipe insulation (glass wool) | 1.07 kg | 31.7 | 9.4 |
| Structural support and accessories | Pipe fittings and structural parts | 5.25 kg | 140.0 | 68.1 |
| Inverter and electric wiring | | 5% of PV module | | 237.2 |
| **Total** | | | | **12585.2** |

With the installation of this BiPV/T system, the annual energy savings include the following:

(i) thermal energy: 2258 kWh (Et);
(ii) electrical energy: 323 kWh;
(iii) space cooling load: 206 kWh.

Taking the COP of the air-conditioning plant as 3.0, Epv and Eac are then 979 and 208 kWh/year, respectively (the 206 kWh cooling saving corresponds to 68.7 kWh of electricity which, like the PV output, is converted to primary energy with the 0.33 factor). Eom is estimated as 246 kWh/year in this case. By (1), the EPBT is 3.8 years, which is much shorter than the CPBT of 13.8 years. The longer EPBT of this BiPV/T system compared with the free-stand case is mainly because of its vertical collector position as compared with the best angle of tilt, and also because of the differences in collector size and solar cell packing factor. A shorter EPBT would be expected if mc-Si cell modules were used in the analysis, because of the lower energy consumption during the manufacturing process. As a matter of fact, this 3.8 years for the vertical-mounted BiPV/T system compares favorably with the 7.1 years [26] for an optimally oriented roof-top BiPV system in Hong Kong.
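The same bookkeeping, now with the building-material credit ∑mtl and the air-conditioning term Eac of (1), reproduces the BiPV/T result. A sketch under the stated assumptions (COP of 3.0, conversion factor 0.33):

```python
# BiPV/T: EPBT by (1), (sum_pvt + sum_bos - sum_mtl) / (Epv + Et + Eac - Eom).
F_CONV, COP = 0.33, 3.0

sum_pvt, sum_bos, sum_mtl = 11258.0, 1328.0, 594.0  # embodied energy, kWh

E_t = 2258.0                       # thermal saving, kWh/yr
E_pv = 323.0 / F_CONV              # 323 kWhe/yr -> ~979 kWh/yr primary energy
E_ac = (206.0 / COP) / F_CONV      # 206 kWh cooling -> 68.7 kWhe -> ~208 kWh/yr primary
E_om = 246.0                       # operation and maintenance, kWh/yr

epbt = (sum_pvt + sum_bos - sum_mtl) / (E_pv + E_t + E_ac - E_om)
print(f"EPBT = {epbt:.2f} years")  # ~3.75, rounded to 3.8 years in the text
```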
### 5.3. GHG Emission Analysis
In our analysis, the thermal energy saving was taken as a saving of town gas consumed in the building, and the electrical energy saving as a saving in electricity purchased from the utilities. Based on data provided by the Hong Kong government, the territory-wide emission factor of GHG from utility power generation is 0.7 kg CO2-eq/kWhe, including transmission losses [33]. As for town gas, the emission factors for CO2, CH4, and N2O are, respectively, 2.815 kg/unit, 0.0446 g/unit, and 0.0099 g/unit, where 1 unit of town gas is equivalent to 48 MJ consumed. For the free-stand case, the above information gives an annual reduction in GHG emission of 285 kg CO2-eq. The PV/T system itself does not produce polluting emissions during its daily operation. Nowadays, most of the manufacturing of products consumed in Hong Kong takes place in the Mainland, so the emission factor of China can be used in our embodied GHG assessment. In China, the primary energy consumption for power generation is 12.01 MJ/kWhe and the CO2 emission rate of a coal-fired power plant is 24.7 g CO2-eq/MJ [34]; the embodied GHG intensity of the PV/T collector in this case is therefore 0.297 kg CO2-eq/kWh of cumulative energy. The local emission factor was used for the BOS part, since local acquisition was assumed. Accordingly, with (4), this approximation gives a GPBT of 3.2 years for the free-stand system.

Similarly, for the BiPV/T system, the saving in air-conditioning energy is converted to an electricity saving based on a system COP (coefficient of performance) of 3.0. With (3), this gives a GPBT of 4.0 years. The result is again lower than the previously estimated GPBT of 5.2 years for the general performance of BiPV systems in Hong Kong [26].
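The embodied GHG intensity and the payback quotient of (3) and (4) follow the same pattern as the EPBT bookkeeping; a minimal sketch for the free-stand case. The itemized BOS emissions and the breakdown of the 285 kg CO2-eq annual saving are not given above, so the BOS figure below is backed out from the quoted GPBT purely for illustration:

```python
# Embodied GHG intensity of mainland-made parts:
# 12.01 MJ primary energy per kWhe, at 24.7 g CO2-eq per MJ (coal-fired generation).
intensity = 12.01 * 24.7 / 1000.0       # -> 0.297 kg CO2-eq/kWh cumulative energy

def gpbt(omega_pvt, omega_bos, z_annual, omega_mtl=0.0):
    """GPBT by (3); with omega_mtl = 0 it reduces to (4) for a free-stand system."""
    return (omega_pvt + omega_bos - omega_mtl) / z_annual

omega_pvt = 2429.0 * intensity          # ~721 kg CO2-eq embodied in the collector
z_annual = 285.0                        # annual GHG reduction, kg CO2-eq (from the text)
omega_bos = 3.2 * z_annual - omega_pvt  # ~191 kg CO2-eq implied for the locally sourced BOS

print(f"intensity = {intensity:.3f} kg/kWh, GPBT = {gpbt(omega_pvt, omega_bos, z_annual):.1f} years")
```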
For completeness, Table 6 shows the technical data used in the evaluation of the two systems' CPBT. Compared with the free-stand PV/T case, the BiPV/T system had a lower investment cost on a unit collector area basis. This is because, on the one hand, there were savings in building materials and no steel stands were needed (these are essential for tilt-mounting the free-stand PV/T collector); on the other hand, it benefited from economies of scale in the mass handling of the system components. During operation, however, the vertical collector position of the BiPV/T system is disadvantageous in terms of the year-round solar radiation received by the collector surface. At the same time, there would be a greater transmission loss for a centralized energy system. The simulation results showed that the annual useful heat gains of the free-stand and building-integrated cases are 418 kWh/m2 and 233 kWh/m2, respectively, on a unit glazing area basis, and the electrical energy gains are 118 kWh/m2 and 66.4 kWh/m2 on a unit PV cell area basis. These yield a CPBT of 12.1 years for the free-stand case and 13.8 years for the building-integrated case.

**Table 6.** Evaluation of cost payback time.

| | Free-stand PV/T [6] | BiPV/T [7] |
| --- | --- | --- |
| **Investment (HK$)** | | |
| Water storage tank | 400 | 750 |
| Collector frame and support | 400 | 1800 |
| Modular thermal absorber | 600 | 2700 |
| Solar cells and encapsulation | 4000 | 17500 |
| Inverter | 700 | 1000 |
| Piping, wiring, and accessories | 300 | 900 |
| Installation costs | 1500 | 3000 |
| Total system costs (HK$) | 7900 | 27650 |
| **Useful energy savings, MJ (kWh)** | | |
| Thermal energy | 2650.4 (736.2) | 8127.5 (2257.6) |
| Electrical energy | 473.2 (131.4) | 1162.4 (322.9) |
| Space cooling load | — | 742.6 (206.3) |
| **Cost savings (HK$)** | | |
| Gaseous fuel at HK$0.2/MJ | 530.1 | 1625.5 |
| Electricity at HK$0.95/kWh | 124.9 | 372.0 |
| Annual saving | 655.0 | 1997.5 |
| Cost payback time (CPBT) | 12.1 years | 13.8 years |

Note: USD 1 is equivalent to HK$7.8.
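The CPBT entries in Table 6 are simple quotients of the investment over the annual cost saving; a sketch with the tariffs and savings transcribed from the table (the cooling saving is first converted to electricity via the COP of 3.0):

```python
# CPBT = total system cost / annual cost saving, with the Table 6 tariffs.
GAS_PRICE, ELEC_PRICE = 0.2, 0.95  # HK$/MJ of town gas, HK$/kWh of electricity

def cpbt(cost_hkd, gas_mj, elec_kwh):
    return cost_hkd / (gas_mj * GAS_PRICE + elec_kwh * ELEC_PRICE)

# Free-stand: HK$7900; 2650.4 MJ/yr of heat; 131.4 kWhe/yr of electricity.
print(f"free-stand CPBT = {cpbt(7900.0, 2650.4, 131.4):.1f} years")                 # -> 12.1
# BiPV/T: HK$27650; 8127.5 MJ/yr of heat; 322.9 + 206.3/3.0 kWhe/yr of electricity.
print(f"BiPV/T CPBT     = {cpbt(27650.0, 8127.5, 322.9 + 206.3 / 3.0):.1f} years")  # -> 13.8
```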
Our findings above are generally in line with the estimates by other researchers based on their own collector designs and local applications. Nevertheless, it should be noted that this picture is not static: continuing improvements in material and energy utilization and in recycling are expected to change the current environmental profiles, and progress in solar cell performance will also lead to better EPBT and GPBT.
## 6. Conclusion
An environmental life-cycle assessment has been carried out to evaluate the energy and environmental profiles of two cases of PV/T system application in Hong Kong. In both cases, an aluminum rectangular-channel absorber with sc-Si PV encapsulation was adopted in the single-glazed flat-plate PV/T collector design. In our analysis, the cumulative energy inputs and the embodied GHG emissions were determined by established methodology and technical data, with reference to reported research work as well as local government publications. The annual thermal and electrical energy outputs were taken from dynamic simulations based on the TMY dataset of Hong Kong and validated PV/T system models. Our estimates show that the EPBT of the free-stand PV/T system at the best angle of tilt is around 2.8 years, which is an order of magnitude lower than the expected system working life. In the vertical-mounted BiPV/T case, the EPBT is 3.8 years, again considerably better than the general performance of roof-top BiPV systems in Hong Kong. The corresponding GPBT of 3.2 and 4.0 years demonstrate the environmental superiority of this PV/T option over many other competing renewable energy systems.
---
*Source: 101968-2012-10-11.xml*

Tin-Tai Chow and Jie Ji, “Environmental Life-Cycle Analysis of Hybrid Solar Photovoltaic/Thermal Systems for Use in Hong Kong,” International Journal of Photoenergy (2012), Hindawi Publishing Corporation. License: CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). DOI: 10.1155/2012/101968.
## Abstract
While sheet-and-tube absorber is generally recommended for flat-plate photovoltaic/thermal (PV/T) collector design because of the simplicity and promising performance, the use of rectangular-channel absorber is also tested to be a good alternative. Before a new energy technology, like PV/T, is fully implemented, its environmental superiority over the competing options should be assessed, for instance, by evaluating its consumption levels throughout its production and service life. Although there have been a plenty of environmental life-cycle assessments on the domestic solar hot water systems and PV systems, the related works on hybrid solar PV/T systems have been very few. So far there is no reported work on the assessment of PV/T collector with channel-type absorber design. This paper reports an evaluation of the energy payback time and the greenhouse gas payback time of free-standing and building-integrated PV/T systems in Hong Kong. This is based on two case studies of PV/T collectors with modular channel-type aluminium absorbers. The results confirm the long-term environmental benefits of PV/T applications.
---
## Body
## 1. Introduction
A photovoltaic/thermal (PV/T) system is a combination of photovoltaic (PV) and solar thermal devices that generate both electricity and heat energy from one integrated system. With solar cells as (part of) the thermal absorber, the hybrid design is able to maximize the energy output from an allocated space reserved for solar application. Air and/or water can be used as the heat removal fluid(s) to lower the solar cell working temperature and to improve the electricity conversion efficiency. Comparatively, the water-type product design provides more effective cooling than the air-type counterpart because of the favorable thermal properties. Those with flat plate collectors meet well the low temperature water heating system requirements. They are also ideal for preheating purposes when hot water at higher temperature is required.While sheet-and-tube absorber is one common feature in flat-plate collectors, the use of rectangular-channel absorbers also has been examined extensively [1–3]. An aluminum water-in-channel-type PV/T collector design is recommended by the authors, with the prototypes well-tested under both free-standing and building-integrated manners [4, 5]. Through the adoption of the channel absorber design, the potential problem of low fin efficiency can be readily improved. Based on the thermosyphon working principle, the collector performance is found to have geographical dependence and working well at the warmer climate zones. In the Asia Pacific region, most large cities are dominated by air-conditioned buildings where space cooling demands are high. In these buildings, the exposed facades provide very good opportunity for accommodating the building integrated systems, hence, the BiPV/T. When a part of the solar radiation that falls on the building façade is directly converted to useful thermal and electric power, the portion of solar energy transmitted through the external facade is reduced. Hence, the space cooling load is reduced. Through dynamic simulation with the use of experimentally validated system models and the typical meteorological year (TMY) data of Hong Kong, the cost payback time (CPBT) of free-standing and building-integrated PV/T systems were found 12.1 and 13.8 years, respectively [6, 7]. The assessments were taken, respectively, at their best tilted and vertical collector positions for maximizing their system outputs. It is expected that these CPBT will be gradually shortened as the PV technology is in progressive advancement. In this paper, the environmental life-cycle analysis (LCA) of such hybrid solar systems as applied in Hong Kong is reported.
## 2. Environmental Life-Cycle Analysis
LCA is a technique for assessing various aspects associated with development of a product and its potential impact throughout a product’s life [8]. Before a new energy technology is fully implemented, the environmental superiority over competing options can be asserted by evaluating its consumption levels (such as cost investments, energy uses, and GHG emissions) throughout its entire production and service life. In terms of economic analysis, a simplified approach is to ignore the time element so the cost payback time (CPBT) can be used. This is by adding together the cash inflows from successive years until the cumulative cash inflow is the same as the required investment. In analogy to the economical evaluation, two environmental cost-benefit parameters, the energy payback time (EPBT) and greenhouse gas payback time (GPBT), can be used to evaluate the time period after which the real environmental benefit starts [9]. EPBT is the period that a system has to be in operation in order to save the amount of primary energy that has been spent for production, operation, and maintenance of the system. It is the ratio of embodied energy to annual net energy output. In a BiPV/T system, for example,
(1)EPBT=∑pvt+∑bos-∑mtlEpv+Et+Eac-Eom,
where ∑pvt, ∑bos and ∑mtl, are, respectively, the embodied energy of the PV/T collectors, of the balance of system (BOS), and of the replaced building materials; Epv is the annual useful electricity output, Et the annual useful heat gain (equivalent), Eac the annual electricity saving of the HVAC system due to the space thermal load reduction, and Eom is the annual electricity consumed in system operation and maintenance activities. ∑mtl and Eac can be omitted in free-stand PV/T system evaluation. Hence,
(2)EPBT=∑pvt+∑bosEpv+Et-Eom.
Similarly, in terms of greenhouse gas (GHG) emission, for BiPV/T
(3)GPBT=Ωpvt+Ωbos-ΩmtlZpv+Zt+Zac,
where Ω stands for the embodied GHG (or carbon dioxide equivalent) emission and Z the reduction of annual GHG emission from the local power plant owing to the BiPV/T operation. And for the free-stand system,
(4)GPBT=Ωpvt+ΩbosZpv+Zt.
Thus EPBT and GPBT are functions of the related energy system performance and their environmental impacts, like those of the power utilities, the building systems, local and overseas manufacturing, and transportation and on-site handling of PV/T collector system as a whole.
## 3. Aluminum Rectangular-Channel PV/TSystems
The sectional view of an aluminum rectangular-channel PV/T collector developed by the authors is shown in Figure1. It is composed of the following layers: (i) front low-iron glass cover, (ii) crystalline silicon (c-Si) PV encapsulation, (iii) metallic thermal absorber constructed from extruded aluminum, (iv) thermal insulation layer with glass wool, and (v) back-cover steel sheet. The PV encapsulation includes TPT (tedlar-polyester-tedlar) and EVA (ethylene-vinyl acetate) layers at both sides of the solar cells. The rectangular-channel design strengthens the heat transfer and structural durability.Figure 1
Cross-sectional view of the PV/T collector showing several absorber modules in integration (N.T.S).In a free-stand thermosyphon system, the PV/T collector carries a water tank with the natural water circulation via inter-connecting pipes. Figure2 shows the external view. Water enters the collector at the lower header and leaves via the upper header. Table 1 lists the technical data of this PV/T collector for free-stand applications.Table 1
Collector and technical design data of free-stand PV/T system.
Design parameters
Data
Glazing (low-iron glass)
Thickness
0.004 m
Emissivity
0.88
Extinction coefficient
26/m
Refraction index
1.526
Depth of air gap underneath
0.025 m
PV encapsulation(TPT + EVA + solar cell + EVA + TPT + silicon gel)
Solar cell type
single-crystalline silicon
Cell area
1.11 m2
Cell electrical efficiency at STC
13%
Solar cell temperature coefficient
0.005/K
Emissivity
0.8
Absorptivity
0.8
Packing factor (wrt glazing)
63%
Thermal absorber (Aluminum)
No. of flat-box absorber module
15
Absorber module size
0.105 × 1.38 × 0.012 m
No. of header
2
Header size
1.575 × 0.025 (dia.) × 0.002 (thick) m
Thermal insulation layer (glass wool)
Thickness
0.03 m
Back cover (galvanized iron)
Thickness
0.001 m
Water tank and connecting pipes
Water storage capacity
155 kg
Tank length
1.2 m
Tank diameter
0.21 m
Pipe diameter
0.015 m
Thickness of insulation layer at tank
0.025 m
Thickness of insulation layer on pipe
0.02 mFigure 2
Front view of free-stand PV/T collector system.A BiPV/T system, on the other hand, is composed of an array of PV/T collectors that are integrated to the external wall of an air-conditioned building. See Figure3 for reference. The water tank is located at the roof-top and the water circulation is again by means of thermosyphon. Table 2 lists the technical data of the BiPV/T wall system in our study.Table 2
Collector and technical design data of BiPV/T system.
Design parameters
Data
Front glazing (low-iron glass)
Thickness
0.004 m
Surface area
1.61 m2
Depth of air gap underneath
0.025 m
PV encapsulation(TPT + EVA + solar cell + EVA + TPT + silicon gel)
Solar cell type
single-crystalline silicon
Cell area
0.81 m2
Cell electrical efficiency at STC
13%
Solar cell temperature coefficient
0.005/K
Emissivity
0.8
Absorptivity
0.8
Packing factor (wrt glazing)
50%
Thermal absorber (aluminum alloy)
Thermal capacity
903 kJ/(kg·K)
Density
2702 kg/m3
Thermal conductivity
237 W/(m·K)
Emissivity
0.8
Absorptivity
0.9
Insulation material (glass wool)
Thickness
0.03 m
Air gap between insulation layer and building wall
0.02 m
Building wall (brick)
Thickness
0.15 m
Density
1600 kg/m3
Thermal capacity
880 J/(kg·K)
Thermal conductivity
1.0 W/(m·K)
Water tank (steel) and connecting pipes (copper)
Water storage capacity
0.46 m3
Tank length
1.5 m
Tank diameter
0.54 m
Pipe diameter
0.055 m
Thickness of insulation layer at tank
0.025 m
Thickness of insulation layer on pipe
0.02 mFigure 3
Front view of BiPV/T system with water tank at top of wall.
## 4. Review of Previous Works onFlat Plate Collector Systems
### 4.1. Solar Hot Water Systems
The LCA works on domestic solar hot water (DSHW) systems in majority were from EU countries [10–13]. Streicher et al. [10] evaluated the EPBT of solar thermal systems by dividing the system into components. The cumulative energy demand was obtained by multiplying the weight of the main components with their respective cumulative energy demand values. They estimated that in Germany the DSHW systems have EPBT from 1.3 to 2.3 years. In their study, construction credit was given to the collector system in integrated roof-mounting mode. This is for the savings in building materials, transportation, and construction works. The collector itself accounts for 89% and 85% of the total embodied energy in the roof-integrated and open-stand systems, respectively. Tsilingiridis et al. [11] found that in Greece the materials used, including steel and copper, have the major contribution to the environmental impacts. Ardente et al. [12] found that in Italy the indirect emissions (related to production of raw materials) are about 80–90% of the overall GHG releases. Kalogirou [13] worked on a thermosyphon DSHW system in Cyprus. The system thermal performance was evaluated by dynamic simulation program. The LCA determined that 77% of the embodied energy goes to the collector panels, 15% goes to the steel frame, 5% goes to piping, and the remaining accounts for less than 3% of the total. Considerable amounts of GHG can be saved. The EPBT was estimated around 1.1 year.Outside Europe, the study of Crawford et al. [14] in Australia showed that although the CPBT of DSHW systems can be 10 years or more, the corresponding GPBT can be only around 2.5–5 years. In their study, a conversion factor of 60 kg CO2 eq/GJ was used to determine the GHG emission from the cumulative energy of the entire system. Arif [15] evaluated the environmental performance of DSHW systems in India. Based on the 100 litre-per-day and steady year-round usage, the EPBT was estimated 1.6–2.6 years, all depending on the local climates and also the collector materials in use. In the LCA work of Hang et al. [16] on a range of solar hot water systems in USA; dynamic thermal simulation was again applied.
### 4.2. PV Systems
In the last decades, plenty of works have been reported on life cycle performance of PV systems in both free-stand and building-integrated manners. The estimations of EPBT and GPBT have been kept on revising owing to the advancements in PV technology.The production of a PV module includes the following processes:(i)
silicon purification and processing,(ii)
silicon ingot slicing, and(iii)
PV module fabrication.Silica is first melted and manufactured into metallurgical-grade silicon (MG-Si), then into electronic silicon (EG-Si) through the Siemen’s process or into solar-grade silicon (SoG-Si) through the modified Siemens process [17]. Finally, after the Czochralski process (for sc-Si) or other production process, silicon is made available for the solar cell production. The silicon ingot is needed to be sliced into wafer. The technologies of cell production include etching, doping, screen printing, and coating. The solar cells are then tested, packed, and interconnected with other components to form PV modules.Alsema [18] studied the EPBT and the GHG emissions of grid-connected PV systems. The cumulative energy demands of sc-Si and mc-Si frameless modules were evaluated as 5700 and 4200 MJ/m2. Further, it was pointed out that with the implementation of new manufacturing technologies, the above data could be as low as 3200 and 2600 MJ/m2. Later on, Alsema et al. [19, 20] reviewed the important options that were available for further reduce energy consumption and environment impacts of the PV module production processes. As for BOS, Alsema and Nieuwlaar [21] presented that because of the less use of aluminum in supporting structure, the energy requirement for array support of ground-mounted PV system was about 1800 MJ/m2, but this could be only 700 MJ/m2 for rooftop installation; hence rooftop systems should have better potentials for EPBT reduction than ground-mounted systems.Mason et al. [22] studied the energy contents of the BOS components used in a 3.5 MWp mc-Si PV plant. By integrating the weight of the PV modules with the supports, the embodied energy of the BOS components was found as low as 542 MJ/m2—a sharp reduction from the previous estimations. Fthenakis and Kim [23] showed that in Japan the primary energy demand for sc-Si PV module was in the range of 4160–15520 MJ/m2, and the life-cycle GHG emissions rate for PV systems in the United States were from 22 to 49 g CO2-eq/kWhe.In Singapore, Kannan et al. studied a 2.7 kWp distributed PV system with sc-Si modules [24]. Specific energy consumptions for the PV modules and the inverters were estimated 16 and 0.17 MWhe/kWp respectively. The manufacturing of solar PV modules accounted for 81% of the life cycle energy use. The aluminium supporting structure accounted for about 10%, and the recycling of aluminium accounted for another 7%. The EPBT was estimated to be 6.74 years. It was claimed that this can be reduced to 3.5 years if the primary energy use on PV module production is reduced by 50%.In India, Nawaz and Tiwari [25] calculated EPBT by evaluating the energy requirement for manufacturing a sc-Si PV system for open field and rooftop conditions with BOS. Mitigation of CO2 emissions at macrolevel (where lifetime of battery and PV system are the same) and microlevel of the PV system has also been studied. For a 1 m2 sc-Si PV system, their estimations give an embodied energy of 666 kWh for silicon purification and processing, 120 kWh for cell fabrication, and 190 kWh for subsequent PV module production. Hence without BOS, the embodied energy was estimated 976 kWh/m2 and the GHG emission was 27.23 kg/m2.In Hong Kong Lu and Yang [26] investigated the EPBT and GPBT of a roof-mounted 22 kW BiPV system. It was found that 71% of the embodied energy on the whole is from the embodied energy of the PV modules, whereas the remaining 29% is from the embodied energy of BOS. 
The EPBT of the PV system was then calculated as 7.3 years. Considering the fuel mixture composition of local power stations, the corresponding GPBT is 5.2 years. Further, it was predicted that the possible range of EPBT of BiPV installations in Hong Kong is from 7.1 years (for optimal orientation) to 20 years (for west-facing vertical façade).Bankier and Gale [27] gave a review of EPBT of roof mounted PV systems reported in the 10-year period (1996–2005). A large range of discrepancy was found. They pointed out that the limitations to the accuracy of the assessments came from the difficulties in determining realistic energy conversion factors, and in determining realistic energy values for human labor. According to their estimation, the appropriate range of EPBT for mc-Si PV module installations should be between 2–8 years. A more recent review was done by Sherwani et al. [28]. The EPBT for sc-Si, mc-Si, and a-Si PV systems have been estimated in the ranges of 3.2–15.5, 1.5–5.7, and 2.5–3.2, years, respectively. Similarly, GHG emissions are 44–280, 9.4–104, and 15.6–50 g CO2-eq/kWh.
### 4.3. PV/T Systems
While there have been plenty studies of EPBT and GPBT on solar thermal and PV systems, our literature review shows that those on PV/T systems have been very few. In particular, there is so far no reported work on the assessment of PV/T collectors with channel-type absorber design.Battisti and Corrado [29] made evaluation based on a conventional mc-Si building-integrated system located in Rome, Italy. An experimental PV/T system with heat recovery for DSHW application was examined. Evaluations were made for alternative heat recovery to replace either natural gas or electricity. Their results give the EPBT and GPBT of PV system as 3.3 and 4.1 years. On the other hand, those of the PV/T systems designed for natural gas replacement are 2.3 and 2.4 years.Also in Italy, Tripanagnostopoulos et al. [30] evaluated the energy and environmental performance of their modified 3 kWp mc-Si PV and experimental water-cooled PV/T sheet-and-tube collector systems designed for horizontal-roof (free-stand) and tilted-roof (building integrated) installations. The application advantage of the glazed/unglazed PV/T over the PV options was demonstrated through the better LCA performances. The EPBT of the PV and BiPV system were found to be 2.9 and 3.2 years, whereas the GPBT were 2.7 and 3.1 years, respectively. For PV/T system with 35°C operating temperature, the EPBT of the PV/T and BiPV/T options were both 1.6 years, and the GPBT were 1.9 and 2.0 years respectively. The study showed that nearly the whole of the environmental impacts are due to PV module production, aluminium parts (reflectors and heat-recovery-unit) as well as copper parts (for heat-recovery-unit and hydraulic circuit), with barely significant contributions from the other system components, such as support structures or electrical/electronic devices. The disposal phase contribution is again almost negligible.Dubey and Tiwari [31] carried out an environmental impact analysis of a hybrid PV/T solar water heater for use in the Delhi climate of India. With a glazed sheet-and-tube flat plate collector system designed for pump operation, the EPBT was found 1.3 years.
## 4.1. Solar Hot Water Systems
The LCA works on domestic solar hot water (DSHW) systems in majority were from EU countries [10–13]. Streicher et al. [10] evaluated the EPBT of solar thermal systems by dividing the system into components. The cumulative energy demand was obtained by multiplying the weight of the main components with their respective cumulative energy demand values. They estimated that in Germany the DSHW systems have EPBT from 1.3 to 2.3 years. In their study, construction credit was given to the collector system in integrated roof-mounting mode. This is for the savings in building materials, transportation, and construction works. The collector itself accounts for 89% and 85% of the total embodied energy in the roof-integrated and open-stand systems, respectively. Tsilingiridis et al. [11] found that in Greece the materials used, including steel and copper, have the major contribution to the environmental impacts. Ardente et al. [12] found that in Italy the indirect emissions (related to production of raw materials) are about 80–90% of the overall GHG releases. Kalogirou [13] worked on a thermosyphon DSHW system in Cyprus. The system thermal performance was evaluated by dynamic simulation program. The LCA determined that 77% of the embodied energy goes to the collector panels, 15% goes to the steel frame, 5% goes to piping, and the remaining accounts for less than 3% of the total. Considerable amounts of GHG can be saved. The EPBT was estimated around 1.1 year.Outside Europe, the study of Crawford et al. [14] in Australia showed that although the CPBT of DSHW systems can be 10 years or more, the corresponding GPBT can be only around 2.5–5 years. In their study, a conversion factor of 60 kg CO2 eq/GJ was used to determine the GHG emission from the cumulative energy of the entire system. Arif [15] evaluated the environmental performance of DSHW systems in India. Based on the 100 litre-per-day and steady year-round usage, the EPBT was estimated 1.6–2.6 years, all depending on the local climates and also the collector materials in use. In the LCA work of Hang et al. [16] on a range of solar hot water systems in USA; dynamic thermal simulation was again applied.
## 4.2. PV Systems
In the last decades, plenty of works have been reported on life cycle performance of PV systems in both free-stand and building-integrated manners. The estimations of EPBT and GPBT have been kept on revising owing to the advancements in PV technology.The production of a PV module includes the following processes:(i)
silicon purification and processing,(ii)
silicon ingot slicing, and(iii)
PV module fabrication.Silica is first melted and manufactured into metallurgical-grade silicon (MG-Si), then into electronic silicon (EG-Si) through the Siemen’s process or into solar-grade silicon (SoG-Si) through the modified Siemens process [17]. Finally, after the Czochralski process (for sc-Si) or other production process, silicon is made available for the solar cell production. The silicon ingot is needed to be sliced into wafer. The technologies of cell production include etching, doping, screen printing, and coating. The solar cells are then tested, packed, and interconnected with other components to form PV modules.Alsema [18] studied the EPBT and the GHG emissions of grid-connected PV systems. The cumulative energy demands of sc-Si and mc-Si frameless modules were evaluated as 5700 and 4200 MJ/m2. Further, it was pointed out that with the implementation of new manufacturing technologies, the above data could be as low as 3200 and 2600 MJ/m2. Later on, Alsema et al. [19, 20] reviewed the important options that were available for further reduce energy consumption and environment impacts of the PV module production processes. As for BOS, Alsema and Nieuwlaar [21] presented that because of the less use of aluminum in supporting structure, the energy requirement for array support of ground-mounted PV system was about 1800 MJ/m2, but this could be only 700 MJ/m2 for rooftop installation; hence rooftop systems should have better potentials for EPBT reduction than ground-mounted systems.Mason et al. [22] studied the energy contents of the BOS components used in a 3.5 MWp mc-Si PV plant. By integrating the weight of the PV modules with the supports, the embodied energy of the BOS components was found as low as 542 MJ/m2—a sharp reduction from the previous estimations. Fthenakis and Kim [23] showed that in Japan the primary energy demand for sc-Si PV module was in the range of 4160–15520 MJ/m2, and the life-cycle GHG emissions rate for PV systems in the United States were from 22 to 49 g CO2-eq/kWhe.In Singapore, Kannan et al. studied a 2.7 kWp distributed PV system with sc-Si modules [24]. Specific energy consumptions for the PV modules and the inverters were estimated 16 and 0.17 MWhe/kWp respectively. The manufacturing of solar PV modules accounted for 81% of the life cycle energy use. The aluminium supporting structure accounted for about 10%, and the recycling of aluminium accounted for another 7%. The EPBT was estimated to be 6.74 years. It was claimed that this can be reduced to 3.5 years if the primary energy use on PV module production is reduced by 50%.In India, Nawaz and Tiwari [25] calculated EPBT by evaluating the energy requirement for manufacturing a sc-Si PV system for open field and rooftop conditions with BOS. Mitigation of CO2 emissions at macrolevel (where lifetime of battery and PV system are the same) and microlevel of the PV system has also been studied. For a 1 m2 sc-Si PV system, their estimations give an embodied energy of 666 kWh for silicon purification and processing, 120 kWh for cell fabrication, and 190 kWh for subsequent PV module production. Hence without BOS, the embodied energy was estimated 976 kWh/m2 and the GHG emission was 27.23 kg/m2.In Hong Kong Lu and Yang [26] investigated the EPBT and GPBT of a roof-mounted 22 kW BiPV system. It was found that 71% of the embodied energy on the whole is from the embodied energy of the PV modules, whereas the remaining 29% is from the embodied energy of BOS. 
The EPBT of the PV system was then calculated as 7.3 years. Considering the fuel mixture composition of local power stations, the corresponding GPBT is 5.2 years. Further, it was predicted that the possible range of EPBT of BiPV installations in Hong Kong is from 7.1 years (for optimal orientation) to 20 years (for west-facing vertical façade).Bankier and Gale [27] gave a review of EPBT of roof mounted PV systems reported in the 10-year period (1996–2005). A large range of discrepancy was found. They pointed out that the limitations to the accuracy of the assessments came from the difficulties in determining realistic energy conversion factors, and in determining realistic energy values for human labor. According to their estimation, the appropriate range of EPBT for mc-Si PV module installations should be between 2–8 years. A more recent review was done by Sherwani et al. [28]. The EPBT for sc-Si, mc-Si, and a-Si PV systems have been estimated in the ranges of 3.2–15.5, 1.5–5.7, and 2.5–3.2, years, respectively. Similarly, GHG emissions are 44–280, 9.4–104, and 15.6–50 g CO2-eq/kWh.
## 4.3. PV/T Systems
While there have been plenty studies of EPBT and GPBT on solar thermal and PV systems, our literature review shows that those on PV/T systems have been very few. In particular, there is so far no reported work on the assessment of PV/T collectors with channel-type absorber design.Battisti and Corrado [29] made evaluation based on a conventional mc-Si building-integrated system located in Rome, Italy. An experimental PV/T system with heat recovery for DSHW application was examined. Evaluations were made for alternative heat recovery to replace either natural gas or electricity. Their results give the EPBT and GPBT of PV system as 3.3 and 4.1 years. On the other hand, those of the PV/T systems designed for natural gas replacement are 2.3 and 2.4 years.Also in Italy, Tripanagnostopoulos et al. [30] evaluated the energy and environmental performance of their modified 3 kWp mc-Si PV and experimental water-cooled PV/T sheet-and-tube collector systems designed for horizontal-roof (free-stand) and tilted-roof (building integrated) installations. The application advantage of the glazed/unglazed PV/T over the PV options was demonstrated through the better LCA performances. The EPBT of the PV and BiPV system were found to be 2.9 and 3.2 years, whereas the GPBT were 2.7 and 3.1 years, respectively. For PV/T system with 35°C operating temperature, the EPBT of the PV/T and BiPV/T options were both 1.6 years, and the GPBT were 1.9 and 2.0 years respectively. The study showed that nearly the whole of the environmental impacts are due to PV module production, aluminium parts (reflectors and heat-recovery-unit) as well as copper parts (for heat-recovery-unit and hydraulic circuit), with barely significant contributions from the other system components, such as support structures or electrical/electronic devices. The disposal phase contribution is again almost negligible.Dubey and Tiwari [31] carried out an environmental impact analysis of a hybrid PV/T solar water heater for use in the Delhi climate of India. With a glazed sheet-and-tube flat plate collector system designed for pump operation, the EPBT was found 1.3 years.
## 5. Environmental Analysis ofAluminum Rectangular-Channel PV/T Systems
### 5.1. EPBT of Free-Stand System
Skillful lamination of solar cell onto thermal absorber with layers of EVA and TPT is needed for PV/T collector production. Aluminum thermal absorber parts are made available by raw material mining and extraction, ingot melting, mechanical extrusion, machining, and assembling into whole piece. The major-component production and assembly processes include front glass (low iron), PV-laminated absorber, insulation material and aluminum frame. The supply was from the mainland. As for the BOS, the electrical BOS components include inverters, electrical wirings, and electronic devices. The mechanical BOS include water storage tank, pipe work, supporting structure, and accessories. The embodied energy to be considered in the LCA include the above during production, plus those related to the required transportation from factory to installation site, construction and testing, decommissioning and disposal, and any other end-of-life energy requirements.Table3 summarizes the materials used and cumulative energy of the free-stand PV/T collector system. The cumulative energy intensity of sc-Si PV module was estimated as 976 kWh/m2, making references to [25, 26]. That of the inverter and electrical parts was taken as 5% of the PV module. The other values of cumulative energy intensity in MJ/unit was obtained from the Hong Kong government EMSD (Electrical and Mechanical Services Department) database that covers the specific (per unit quantity) impact profile due to consumption of materials in the “Cradle-to-As-built” stage [32]. The total cumulative energy comes up to 3041.8 kWh or 1728 kWh/m2 for this free-stand system. Table 4 shows the distribution of the embodied energy in this case. It can be seen that the hybrid PV/T collector itself accounts for around 80% of the embodied energy. For the BOS, the water tank accounts for 11.4%, the other mechanical components accounts for 7%, whereas the electrical accessories accounts for only 1.8%. ∑pvt and ∑bos are then 2429 and 613 kWh, respectively.Table 3
Cumulative energy in free-stand PV/T system.
Materials
Quantity consumed (kg)
Cumulative energy intensity (MJ/unit)
Cumulative energy (kWh)
PV/T collector
Front glazing
Low-iron glass (1.76 m2)
19.7
19.7
107.9
Thermal insulation
Glass wool
1.69
31.7
14.9
Thermal absorber
Aluminum absorber
18.3
219
1114.7
Frame and back cover
Aluminum
1.78
219
108.0
PV Encapsulation
PV Module
1.11 m2
976
1083.4
BOS
Water tank
Stainless steel tank
4.20
82.2
273.0
Tank insulation (Glass wool)
1.58
31.7
13.9
Aluminum Cladding
0.966
219
58.8
Connecting pipe
Copper piping (15 mm dia.)
2.4 m
6.33
4.2
Pipe insulation (Glass wool)
0.0627
31.7
0.6
Structural support and accessories
Steel stand
14.2
29.2
115.2
Pipe fittings and structural joints
7.19
140.0
93.3
Inverter + electric wiring
5% of PV module
54.2
Total:
3041.8Table 4
Distribution of embodied energy in PV/T collector systems.
System component description
Free-stand
BiPV/T
PV/T Collector
Mechanical components
44.2
51.8
Electrical components
35.6
37.7
BOS
Water tank
11.4
4.9
Pipe and structural supports
7.0
3.8
Electrical components
1.8
1.9With the installation of this PV/T system, two kinds of energy saving are involved: thermal energy for water heating and electrical energy. This will be no air-conditioning saving. A thermal energy saving of 2650 MJ/year and electricity saving of 473 MJ/year give anEt of 736 kWh/year and an Epv of 398 kWh/year. In the computation, a heat-to-electricity conversion factor of 0.33 has been used. Mainly labor costs were considered in Eom. This is estimated as 41 kWh/year and is therefore not significant. With (2), the EPBT is found 2.8 years. This is much shorter than the expected CPBT of 12.1 years reported in our previous work [6]. Assuming that the working life of PV/T system is similar to PV system, that is, 15–30 years in general [29], then it can be concluded that the EPBT in this case study is an order of magnitude lower than its expected working life.
### 5.2. BiPV/T System
Table5 summarizes the materials used and the cumulative energy in the 9.66 m2 BiPV/T case. Accordingly, the values of Zpvt and Zbos are, respectively, 11258 and 1328 kWh. Zmtl is estimated as 594 kWh, making reference to the work of Streicher et al. [10] and adjusted by the cost of living. Taking the advantage of building material replacement, the cumulative energy intensity reduces to 1241 kWh/m2. The embodied energy distribution of this BiPV/T system is also given in Table 4. It can be seen that for this building integrated case the portion of the collector increases to 89%. For the BOS, the water tank accounts for 4.9%, the pipe and supporting components account for 3.8%, and the electrical components remain at less than 2%.Table 5
Cumulative energy in BiPV/T system.
| Component | Material | Quantity consumed (kg) | Cumulative energy intensity (MJ/unit) | Cumulative energy (kWh) |
|---|---|---|---|---|
| **PV/T collector** | | | | |
| Front glazing | Low-iron glass (1.61 m2 × 6) | 99.6 | 19.7 | 545.0 |
| Thermal insulation | Glass wool | 9.50 | 31.7 | 83.7 |
| Thermal absorber | Aluminum absorber | 86.7 | 219 | 5273.8 |
| Frame and back cover | Aluminum | 10.1 | 219 | 611.8 |
| PV encapsulation | PV module | 4.86 m2 | 976 | 4743.4 |
| **BOS** | | | | |
| Water tank | Stainless steel tank | 19.9 | 82.2 | 454.0 |
| | Insulation (glass wool) | 2.14 | 31.7 | 18.8 |
| | Aluminum cladding | 1.53 | 219 | 93.0 |
| Connecting pipe | Copper piping (55 mm dia.) | 7 m | 40.1 | 77.9 |
| | Pipe insulation (glass wool) | 1.07 | 31.7 | 9.4 |
| Structural support and accessories | Pipe fittings and structural parts | 5.25 | 140.0 | 68.1 |
| Inverter + electric wiring | 5% of PV module | | | 237.2 |
| **Total** | | | | **12585.2** |

With the installation of this BiPV/T system, the annual energy savings include the following:

(i) thermal energy: 2258 kWh (Et);
(ii) electrical energy: 323 kWh;
(iii) space cooling load: 206 kWh.

Taking the COP of the air-conditioning plant as 3.0, Epv and Eac are then 979 and 208 kWh/year, respectively. In this case, Eom is estimated as 246 kWh/year. By (1), the EPBT is 3.8 years, which is much shorter than its CPBT of 13.8 years. The longer EPBT of this BiPV/T system compared with the free-stand case is mainly due to its vertical collector position, as opposed to the best angle of tilt, and also to the differences in collector size and solar cell packing factor. A shorter EPBT would be expected if ms-Si cell modules were used in the analyses, because of the lower energy consumption during their manufacture. As a matter of fact, this 3.8 years for the vertically mounted BiPV/T compares favorably with the 7.1 years [26] for an optimally oriented roof-top BiPV system in Hong Kong.
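The same hedged arithmetic applies to the building-integrated case, with the building-material credit Zmtl subtracted from the embodied energy and the electrical and cooling savings converted to primary energy using the factors given in the text (0.33 heat-to-electricity, COP 3.0); the reconstruction below is an assumption that lands close to the quoted 3.8 years.

```python
# Hedged check of the BiPV/T EPBT; names are illustrative.
E_t = 2258                     # thermal saving, kWh/year
E_pv = 323 / 0.33              # electricity saving as primary energy, ~979
E_ac = 206 / 3.0 / 0.33        # cooling saving via COP 3.0, ~208
E_om = 246                     # operation and maintenance, kWh/year

embodied = 11258 + 1328 - 594  # Z_pvt + Z_bos - Z_mtl, kWh
print(round(embodied / (E_t + E_pv + E_ac - E_om), 2))  # -> ~3.75 years
```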
### 5.3. GHG Emission Analysis
In our analysis, the thermal energy saving was taken as a saving of town gas consumed in the building, and the electrical energy saving as a saving in electricity purchased from the utilities. Based on the data provided by the Hong Kong government, the territory-wide emission factor of GHG from utility power generation is 0.7 kg CO2-eq/kWhe, including transmission losses [33]. As for town gas, the emission factors for CO2, CH4, and N2O are, respectively, 2.815 kg/unit, 0.0446 g/unit, and 0.0099 g/unit, where 1 unit of town gas is equivalent to 48 MJ consumed. For the free-stand case, the above information gives an annual reduction in GHG emission of 285 kg CO2-eq. The PV/T system itself does not produce polluting emissions during its daily operation. Nowadays, most of the manufacturing activities for products consumed in Hong Kong take place in the Mainland, so the emission factor of China can be used in our embodied GHG assessment. In China, the primary energy consumption for power generation is 12.01 MJ/kWhe and the CO2 emission rate of a coal-fired power plant is 24.7 g CO2-eq/MJ [34]; the embodied GHG intensity of the PV/T collector in this case is therefore 0.297 kg CO2-eq/kWh of cumulative energy. The local emission factor was used for the BOS part, since local acquisition was assumed. Accordingly, with (4), this approximation gives a GPBT of 3.2 years for the free-stand system.

Similarly, for the BiPV/T system, the saving in air-conditioning energy is converted to an electricity saving based on a system COP (coefficient of performance) of 3.0. With (3), this gives a GPBT of 4.0 years. The result is again lower than the previously estimated GPBT of 5.2 years for the general performance of BiPV systems in Hong Kong [26].

For completeness, Table 6 shows the technical data used in the evaluation of the CPBT. Compared with the free-stand PV/T case, the BiPV/T system had a lower investment cost on a unit collector area basis. This is because, on the one hand, there were savings in building materials and no requirement for the steel stands that are essential for tilt-mounting the free-stand PV/T collector; on the other hand, it benefited from the economy of scale in the mass handling of the system components. During operation, however, the vertical collector position of the BiPV/T system made it disadvantageous in the quantity of year-round solar radiation received by the collector surface. At the same time, there would be greater transmission losses for a centralized energy system. The simulation results showed that the annual useful heat gains of the free-stand and the building-integrated cases are 418 kWh/m2 and 233 kWh/m2, respectively, on a unit glazing area basis, and the electrical energy gains are 118 kWh/m2 and 66.4 kWh/m2 on a unit PV cell area basis. These resulted in a CPBT of 12.1 years for the free-stand case and 13.8 years for the building-integrated case.

Table 6. Evaluation of cost payback time.
| Investment (HK$) | Free-stand PV/T [6] | BiPV/T [7] |
|---|---|---|
| Water storage tank | 400 | 750 |
| Collector frame and support | 400 | 1800 |
| Modular thermal absorber | 600 | 2700 |
| Solar cells and encapsulation | 4000 | 17500 |
| Inverter | 700 | 1000 |
| Piping, wiring, and accessories | 300 | 900 |
| Installation costs | 1500 | 3000 |
| **Total system costs (HK$)** | **7900** | **27650** |
| **Useful energy savings** | **MJ (kWh)** | **MJ (kWh)** |
| Thermal energy | 2650.4 (736.2) | 8127.5 (2257.6) |
| Electrical energy | 473.2 (131.4) | 1162.4 (322.9) |
| Space cooling load | — | 742.6 (206.3) |
| **Cost savings (HK$)** | | |
| Gaseous fuel at HK$0.2/MJ | 530.1 | 1625.5 |
| Electricity at HK$0.95/kWh | 124.9 | 372.0 |
| Annual saving | 655.0 | 1997.5 |
| **Cost payback time (CPBT)** | **12.1 years** | **13.8 years** |

Note: USD 1 is equivalent to HK$7.8.

Our above findings are generally in line with the estimations by other researchers based on their own collector designs and local applications. Nevertheless, it should be noted that the above picture is not static. It is expected that continuing improvements in material and energy utilization and recycling will change the current environmental profiles. On the other hand, the progression in solar cell performance will also lead to better EPBT and GPBT.
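As a consistency check on Table 6, the CPBT figures follow from the simple ratio of total system cost to annual cost saving; a minimal sketch (the COP-based conversion of the cooling load to electricity is inferred from the tabulated savings):

```python
# Hedged verification of the cost payback times in Table 6 (HK$).
cases = {
    "free-stand": {"cost": 7900,  "thermal_mj": 2650.4, "elec_kwh": 131.4},
    "BiPV/T":     {"cost": 27650, "thermal_mj": 8127.5,
                   "elec_kwh": 322.9 + 206.3 / 3.0},  # cooling via COP = 3
}
for name, c in cases.items():
    saving = c["thermal_mj"] * 0.2 + c["elec_kwh"] * 0.95  # HK$/year
    print(name, round(c["cost"] / saving, 1), "years")
# -> free-stand 12.1 years, BiPV/T 13.8 years
```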
## 6. Conclusion
An environmental life-cycle assessment has been carried out to evaluate the energy and environmental profiles of two cases of PV/T system application in Hong Kong. In both cases, an aluminum rectangular-channel absorber in association with sc-Si PV encapsulation was adopted in the single-glazed flat-plate PV/T collector design. In our analysis, the cumulative energy inputs and the embodied GHG emissions were determined by established methodology and technical data, making reference to reported research works as well as local government publications. The annual thermal and electrical energy outputs were obtained from dynamic simulations based on the TMY dataset of Hong Kong and validated PV/T system models. Our estimation shows that the EPBT of the free-stand PV/T system at the best angle of tilt is around 2.8 years, which is an order of magnitude lower than the expected system working life. In the vertically mounted BiPV/T case, it is 3.8 years, which is again considerably better than the general performance of roof-top BiPV systems in Hong Kong. The corresponding GPBTs of 3.2 and 4.0 years demonstrate the environmental superiority of this PV/T option over many other competing renewable energy systems.
---
*Source: 101968-2012-10-11.xml*
# Economic Efficiency Assessment of Autonomous Wind/Diesel/Hydrogen Systems in Russia
**Authors:** O. V. Marchenko; S. V. Solomin
**Journal:** Journal of Renewable Energy
(2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101972
---
## Abstract
The economic efficiency of harnessing wind energy in the autonomous power systems of Russia is analyzed. Wind turbines are shown to be competitive for many of the considered variants (groups of consumers, placement areas, and climatic and meteorological conditions). The authors study the possibility of storing energy in the form of hydrogen in autonomous wind/diesel/hydrogen power systems that include wind turbines, a diesel generator, an electrolyzer, a hydrogen tank, and fuel cells. The paper presents the zones of economic efficiency of the system (the set of parameters that provide its competitiveness) depending on load, fuel price, and long-term average annual wind speed. At low wind speed and a low fuel price, it is reasonable to use only a diesel generator to supply power to consumers. When the fuel price and wind speed increase, it first becomes more economical to use a wind/diesel system and then wind turbines with a hydrogen system. In the latter case, according to the optimization results, the diesel generator is excluded from the system.
---
## Body
## 1. Introduction
In recent years, the energy policy of many countries has been aimed at increasing the share of renewable energy sources (RES) in total energy production. In Russia, the share of RES (without large hydropower plants) in electricity production does not exceed 1%. However, the “Energy Strategy of Russia for the period up to 2030” (approved by the Russian Government) suggests that, in 20 years, this share may increase up to 4.5%.

Providing a substantial environmental effect (a decrease in emissions from the energy sector), RES can often be economically efficient and competitive with energy sources based on fossil fuel [1–7]. It is expected that, in both the near and the more distant future, the role of RES in the Russian and world energy industries will increase essentially due to the improvement of their characteristics and a projected rise in fossil fuel prices [8–12].

It is reasonable to use RES primarily in small autonomous (decentralized) power systems located in remote, hard-to-reach areas, where the price of imported fossil fuel is very high. Russian zones of decentralized power supply, which do not have modern electrical networks or large energy sources, occupy about 70% of the country and are situated mostly in the Far North. The Far North is represented by a number of regions in the European part of the country (Murmansk and Arkhangelsk regions, the Republic of Karelia, and the Republic of Komi), Siberia (the north of Tyumen Region and Krasnoyarsk Territory), and the Far East (Yakutia, Chukotka, Magadan, Kamchatka, and Sakhalin regions). These territories have significant reserves of gold, platinum, diamonds, tin, lead, and other mineral resources.

In total, about 1400 small settlements with a population of approximately 20 million people are located in the decentralized power supply zones of Russia [3, 4]. About 7 thousand diesel power plants operate here; the majority of them have a high degree of physical deterioration and low efficiency. The cost of electricity produced by new diesel generators usually exceeds 25–30 cent/kWh (US dollars are used for economic assessment throughout the paper), and in the remotest regions with old diesel generators, it exceeds 50–70 cent/kWh.

Decentralized systems of power supply are characterized by the following features:

(i) settlements are spread throughout large, sparsely populated territories;
(ii) the electric load of consumers in each settlement does not exceed 1–3 MW;
(iii) the transport infrastructure is underdeveloped;
(iv) the main electricity sources are diesel power plants;
(v) diesel power plants use expensive diesel fuel, which usually costs more than $800–$1000/toe [4].

Therefore, the introduction of renewables to increase the cost effectiveness of power supply is an urgent problem for these systems. One of the most effective types of RES is the wind turbine [1, 4].

In 2011, the total installed capacity of all wind turbines operating in the world accounted for 238 GW (an increase of 20% compared with 2010) [13]. Wind turbines generate about 2.5% of the total electricity production in the world. Unlike many developed and developing countries, Russia uses wind energy very little. The installed capacity of wind turbines is about 20 MW, and the rate of wind energy development was less than 10% in 2010 and 2011 [14]. The share of wind turbines in the total electricity production in Russia is negligible (less than 0.01%).

Meanwhile, the potential for wind energy development in Russia is great. Russia has the longest shoreline in the world, abundant treeless flatlands, and large water areas of inner rivers, lakes, and seas, which represent the most favorable sites for wind turbines.

The main advantages of wind turbines are as follows:
(i) no harmful emissions in the process of electricity production;
(ii) relative cheapness of the generated electricity (3–5 cent/kWh for the best turbines under good wind conditions);
(iii) the possibility of significant fossil fuel savings in the course of operation in an autonomous power system.

At the same time, wind turbines have a drawback: unsteady electricity generation (depending on changes in wind speed). Therefore, wind turbines are used in combination with energy sources that operate under controlled conditions and supply power to the load when the power generated by the wind turbines is insufficient or during their downtime. Besides, power systems with wind turbines include energy storage devices.

The research aims to study the economic efficiency of harnessing wind energy in Russia for a wide range of parameters (fossil fuel price, climatic and meteorological conditions, power and load curves of consumers, and current and prospective technical and economic indices of the power system components). First of all, the authors consider the most promising wind/diesel systems, including systems that produce, store, and use hydrogen in fuel cells (wind/diesel/hydrogen systems).
## 2. Variants of Harnessing Wind Energy
The authors consider three variants of harnessing wind energy in the autonomous power systems to supply power to consumers.

In the first variant, wind turbines are included in the wind/diesel system (Figure 1). The capacity of the diesel generator is chosen so that it provides uninterrupted power supply to consumers even in the case of wind turbine downtime during calms. In periods of strong wind, however, some of the power generated by the wind turbines turns out to be redundant and is diverted to the dump load.

Figure 1. Wind/diesel system.

The other two variants allow for the storage of energy in the form of hydrogen produced by electrolysis and electrochemically transformed back into electric energy. Under certain conditions, the introduction of a subsystem for the production, storage, and use of hydrogen for energy purposes into the wind/diesel system can decrease the diesel fuel consumption (or even exclude the diesel generator from the system) and reduce the costs of electricity production.

The second variant considers a simplified (linear) scheme in which electric energy or hydrogen sequentially passes through the system components; the diesel generator is excluded (Figure 2). Here, it is assumed that the volume of hydrogen produced is sufficient to keep the electric power of the fuel cells constant. This means that the combination of wind turbines with stochastic power generation and a hydrogen system (electrolyzer, hydrogen tank, and fuel cells) yields a new property of the energy source, that is, constant generation (at additional cost). If the installed capacity of the electrolyzer is lower than the capacity of the wind turbines, some of the generated electricity may turn out to be redundant and will be diverted to the dump load. The use of excess electricity in heat supply (as well as of the heat generated by the fuel cells) was not considered in this paper. In fact, the utilization of this heat can prove to be economical and increase the efficiency of the whole system [1]. However, this would require introducing additional components into the scheme (heat exchangers, pipelines, pumps, etc.) and a more detailed consideration of consumer-specific features, which goes beyond the scope of this paper.

Figure 2. Wind/hydrogen system.

In the third variant (Figure 3), the autonomous power system includes diesel units, one or several wind turbines, an electrolyzer, a hydrogen tank, fuel cells, and electricity consumers with their load curve. It is assumed that the diesel power plant, wind turbines, electrolyzer, and fuel cells are provided with all the devices required to control the network. The wind turbines supply electricity directly to the consumers with their changing load. If this electricity is insufficient, the diesel generator and fuel cells can operate simultaneously. The excess power from the wind turbines is consumed by the electrolyzer (if the hydrogen tank is not fully filled with hydrogen) or absorbed by the dump load.

Figure 3. Wind/diesel/hydrogen system.
## 3. Calculation Method
As is known [8], the cost effectiveness of an investment project is characterized by the net present value (NPV), that is, the total income obtained during the project period T discounted to the initial time point:

$$\mathrm{NPV}=\int_0^T E(\tau)\,e^{-\sigma\tau}\,d\tau,\qquad(1)$$

where E(τ) is the cash flow, σ = ln(1 + d), and d is the annual discount rate.

For many energy sources, including RES, we can use the following simplified model of construction and operation: construction takes a short time and requires capital investments K; right after construction, the energy source starts to operate under nominal conditions with average annual costs Z and electricity supply Q at price pe. Then

$$\mathrm{NPV}=-K+\int_0^T\left(p_e Q-Z\right)e^{-\sigma\tau}\,d\tau=\frac{1}{\sigma}\left[1-e^{-\sigma T}\right]\left(p_e Q-Z\right)-K,\qquad(2)$$
where pe is the electricity price. Among the compared alternative variants, the best one provides the maximum NPV.

The costs Z can be divided into two parts: constant components (which do not depend on the volume of electricity production) and variable components, which consist mainly of fuel costs:

$$Z=\mu K+p_f\,b\,Q,\qquad(3)$$
where μ is the specific constant cost (as a share of the investments), pf is the fuel price, and b is the specific fuel consumption (a value inversely proportional to the efficiency). Then, from (1)–(3), we obtain

$$\mathrm{NPV}=\frac{1}{\sigma}\left[1-e^{-\sigma T}\right]Q\left(p_e-S\right),\qquad(4)$$

where

$$S=\frac{\mathrm{CRF}\cdot K+Z}{Q}=\frac{k}{h}\,\mathrm{CRF}+\frac{k}{h}\,\mu+\frac{p_f}{\theta\cdot\eta}\qquad(5)$$
is the electricity cost. Here, we use the following notation: k is the specific capital investment (per power unit); h = CF·H is the annual number of utilization hours (CF is the capacity factor, H = 8760 h/year); CRF = σ/[1 − exp(−σT)] is the capital recovery factor; θ = 11.6·10³ kWh/toe is the energy equivalent.

According to (4), the electricity cost represents the minimum price at which the project is cost effective (the net present value equals zero), and the best variant (NPV = max) should be chosen on the basis of the criterion of minimum electricity cost (5).

The terms on the right-hand side of equality (5) represent the capital, O&M, and fuel components of the electricity cost, respectively. The fuel price for energy sources based on the energy of the wind, sun, or rivers equals zero (pf = 0), while the capacity factor CF depends significantly on meteorological conditions [1, 3, 4, 15, 16].

To assess the competitiveness of RES, we should compare the cost of the energy they produce with the cost of energy from competing energy sources. In the case where RES operate under uncontrolled (stochastic) conditions and require full capacity backup, the cost of the energy they produce should be compared not to the total cost but to the fuel component of the cost of energy produced by the fossil-fuel energy source [3, 4].

This comparison makes it possible to assess the competitiveness of RES (Figure 1) to a first approximation and to exclude inefficient variants from further consideration. By analogy, we can make a preliminary estimation of the hydrogen and electricity costs (Figure 2) that can be compared to the cost of diesel fuel and of electricity from the diesel generator. Such an estimation proves to be very illustrative and narrows the feasible region of parameters to be used in the calculations for the autonomous power system that take into account the interaction between its components.

More accurate calculations require that the system effects related to power flows between the system components and the storage be taken into account. Optimization of the autonomous power system structure and operation (Figure 3) reduces to the following problem: minimize the objective function (the electricity cost)
$$S=\frac{1}{Q}\sum_i\left[\mathrm{CRF}_i K_i+Z_i\right]\longrightarrow\min,\qquad(6)$$

subject to

$$P_{\mathrm{DG}}(t)+P_{\mathrm{WT}}(t)+P_{\mathrm{FC}}(t)=L(t)+U(t),\qquad(7)$$

$$0\le P_i(t)\le P_i^{\max},\qquad(8)$$

$$U(t)\ge 0,\qquad(9)$$

$$P_{\mathrm{WT}}(t)=P_{\mathrm{WT}}^{\max}f(v),\qquad(10)$$

$$P_{\mathrm{FC}}(t)\le\frac{P_{\mathrm{HT}}^{*}(t)\,\eta_{\mathrm{HT}}\,\eta_{\mathrm{FC}}}{\Delta t},\qquad(11)$$

$$P_{\mathrm{EL}}(t)=\min\!\left(U(t),\;\frac{P_{\mathrm{HT}}^{*\max}-P_{\mathrm{HT}}^{*}(t)}{\eta_{\mathrm{EL}}\,\Delta t}\right),\qquad(12)$$

$$P_{\mathrm{HT}}^{*}(t)=P_{\mathrm{HT}}^{*}(t-\Delta t)+\left[P_{\mathrm{EL}}(t)\,\eta_{\mathrm{EL}}-\frac{P_{\mathrm{FC}}(t)}{\eta_{\mathrm{FC}}\,\eta_{\mathrm{HT}}}\right]\Delta t.\qquad(13)$$
The following notation is used: P is the power of an energy source; P* is the energy equivalent of the hydrogen in the hydrogen tank; t is the time; L is the load; U is the power surplus; f(v) is the wind turbine power curve; Δt is the time step. The index i denotes the type of energy source (DG: diesel generator; WT: wind turbine; EL: electrolyzer; HT: hydrogen tank; FC: fuel cells); max and min denote maximum and minimum values.

Equation (7) is the power balance at time t; (8) gives the power constraints; (9) is the condition of shortage-free power supply; (10) is the dependence of wind turbine power on wind speed (a random value); (11) is the constraint on the power of the fuel cells with respect to the hydrogen reserve; (12) is the constraint on the power of the electrolyzer with respect to the power surplus and the available capacity in the hydrogen tank; (13) is the hydrogen balance in the hydrogen tank.

To solve system (6)–(13), the algorithm described in [7] was used. Continuous time functions were replaced with sets of discrete values with a step of 1 hour. Wind speed was modeled as a random process with alternating periods of low and high wind speeds. This is essential for systems with energy storage and, in this case, for determining the optimal capacity of the hydrogen tank.

At the first stage, when the installed capacities of the energy sources and the hydrogen tank capacity are given, the operating conditions are optimized at each time point according to the criterion of minimum fuel costs. The electric load (7) is covered first by energy from the wind turbines, then by the energy stored in the hydrogen tank, and finally by energy from the diesel generator. The excess energy from the wind turbines is sent to the electrolyzer for hydrogen production.

At the second stage, after the calculation of operating conditions on the entire time interval from t = 0 to t = T, the installed capacities of the energy sources and the hydrogen tank capacity are optimized according to criterion (6) subject to constraints (8) and (9).
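To make the first-stage dispatch rule concrete, the sketch below steps through one day of hourly operation with the priority order just described (wind first, then stored hydrogen, then diesel). It is an illustrative reading of (7)–(13), not the authors' code: the capacities, the toy power curve, and the random wind series are assumed inputs, and the second-stage capacity optimization and the 30% minimum diesel loading are omitted.

```python
import random

# Assumed installed capacities (kW) and tank size (kWh of hydrogen energy
# equivalent); efficiencies follow the prospective 1000 kW level of Table 1.
P_WT_MAX, P_DG_MAX, P_FC_MAX, P_EL_MAX = 1200.0, 700.0, 700.0, 1000.0
HT_MAX, ETA_EL, ETA_HT, ETA_FC = 50_000.0, 0.77, 0.98, 0.60

def wt_power(v):
    """Toy stand-in for the manufacturer's power curve f(v) in eq. (10)."""
    if v < 3.0 or v > 25.0:
        return 0.0
    return P_WT_MAX * min(1.0, ((v - 3.0) / 9.0) ** 3)

def dispatch_hour(load, v, ht):
    """One greedy step (dt = 1 h): wind, then hydrogen, then diesel."""
    p_wt = wt_power(v)
    deficit = max(0.0, load - p_wt)
    p_fc = min(deficit, P_FC_MAX, ht * ETA_HT * ETA_FC)        # eq. (11)
    p_dg = min(deficit - p_fc, P_DG_MAX)                       # remaining load
    surplus = max(0.0, p_wt - load)                            # U(t) in eq. (7)
    p_el = min(surplus, P_EL_MAX, (HT_MAX - ht) / ETA_EL)      # eq. (12)
    ht += p_el * ETA_EL - p_fc / (ETA_FC * ETA_HT)             # eq. (13)
    return p_dg, p_fc, p_el, ht

ht = 0.0
for hour in range(24):
    v = max(0.0, random.gauss(6.0, 2.0))   # placeholder for the wind model
    p_dg, p_fc, p_el, ht = dispatch_hour(685.0, v, ht)
    print(f"h={hour:02d} v={v:4.1f} DG={p_dg:6.1f} FC={p_fc:6.1f} "
          f"EL={p_el:6.1f} tank={ht:8.1f}")
```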
## 4. Initial Data
The factors that determine the economic efficiency of using wind turbines and a hydrogen system in addition to a diesel power plant (or instead of it) in autonomous power systems are:

(i) the wind speed,
(ii) the diesel fuel price,
(iii) the maximum power and degree of load unevenness,
(iv) the technical and economic indices of the power plants.

The most important energy characteristic of the wind is its long-term average annual speed V measured at weather-vane height (about 10 m). Figure 4 presents the distribution of this characteristic across the territory of Russia. The zones of low (up to 4 m/s), medium (4–6 m/s), and high (more than 6 m/s) wind speeds are highlighted. The average wind speed reaches its maximum on the seacoasts and decreases in the continental areas.

Figure 4. Long-term average annual wind speed V (m/s) at a height of 10 m on the territory of Russia.

In the European part of the country, the annual average wind speed is 2–4 m/s; on the coasts of the Arctic Ocean and the Baltic Sea and in some regions of the North Caucasus and Volga region, it reaches 5–6 m/s. The highest wind speed (more than 7 m/s) is typical of the coastal areas in the Arkhangelsk and Murmansk regions.

In the Asian part of the country, the zone of light wind (less than 2-3 m/s) covers large territories of the continental areas of Siberia and the Far East. Higher wind speeds (4-5 m/s) are characteristic of some mountainous areas, the coast of Lake Baikal, and the valleys of large Siberian rivers (the Ob, Yenisei, Angara, and Lena). The highest long-term average annual wind speeds (above 6–8 m/s) occur in the coastal areas of Tyumen region, Krasnoyarsk Territory, Magadan region, Chukotka, Kamchatka, Sakhalin, and the islands in the Arctic and Pacific Oceans.

In this study, the calculations were made for average wind speeds of 4–8 m/s, which corresponds to a range of wind conditions from “bad” to “very good.” Wind speed is distributed according to the Weibull distribution with a shape parameter α = 1.5 [1].

The relationship between wind speed and height was described by a logarithmic dependence with a surface roughness length z0 = 0.03 m [1]. The operating characteristic of the wind turbines (power versus wind speed) is taken in accordance with the installation data of the company “Fuhrländer”; the height of the wind turbine tower is 20 m for the 50 kW load and 40 m for the 1000 kW load.

A retrospective analysis shows that, in the last 10–15 years, the price of diesel fuel has increased 2-3 times. Today, the minimum wholesale price of diesel fuel is in the North Caucasus ($780/toe), and the maximum is in the Republic of Yakutia ($1600/toe). For the southern and central regions of the European part of Russia, typical prices are $800–$1000/toe, and for the northwest regions of European Russia and the northeast regions of the Far East, they equal $1100–$1500/toe.

In the calculations, diesel fuel prices in the range $800–$1500/toe were considered, which corresponds to conditions from “relatively cheap” to “moderately expensive” diesel fuel for autonomous power systems.

The majority of decentralized electricity consumers live in rural and urban settlements with populations from 50 to 5000 people. We consider the following consumers: (1) a time-constant power of 50 or 1000 kW (for the schemes in Figures 1 and 2); (2) small consumers with a variable electric load up to 50 kW; and (3) larger consumers with a load up to 1000 kW (for the scheme in Figure 3).

In the latter two cases, the load power was assumed to be normally distributed. The parameters of the normal law were chosen by approximating the annual load curve: for a load of 50 kW (maximum value), the average power is 20 kW and the standard deviation is 5 kW; for the load of 1000 kW, 685 kW and 100 kW, respectively.

The power of the wind turbine backup energy source (diesel or fuel cells) was chosen according to the condition of power supply to consumers under any wind conditions. This allows us not to include in the objective function the losses from power undersupply, whose numerical value is characterized by significant uncertainty. To take into account the startup inertia of the diesel generator and avoid undesirable sharp changes in its operation, the diesel generator was assumed to operate constantly (minimum power is 30% of the installed capacity).
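As a hedged illustration of the wind model just described, the sketch below samples a Weibull-distributed speed with shape α = 1.5 (the scale parameter is recovered from the long-term mean via the gamma function) and rescales it from 10 m to hub height with the logarithmic profile, z0 = 0.03 m; the function names and the 6 m/s example are assumptions.

```python
import math
import random

ALPHA = 1.5    # Weibull shape parameter from the text
Z0 = 0.03      # surface roughness length, m
H_REF = 10.0   # height of the long-term mean wind speed, m

def weibull_scale(mean_speed):
    """Scale c such that the Weibull mean c*Gamma(1 + 1/alpha) = mean_speed."""
    return mean_speed / math.gamma(1.0 + 1.0 / ALPHA)

def sample_speed_10m(mean_speed):
    """Draw one wind speed at 10 m; random.weibullvariate(scale, shape)."""
    return random.weibullvariate(weibull_scale(mean_speed), ALPHA)

def to_hub_height(v10, hub_height):
    """Logarithmic profile: v(h) = v10 * ln(h/z0) / ln(H_REF/z0)."""
    return v10 * math.log(hub_height / Z0) / math.log(H_REF / Z0)

# Example: a site with a 6 m/s long-term mean and the 40 m tower of the text.
v10 = sample_speed_10m(6.0)
print(round(v10, 2), "m/s at 10 m ->", round(to_hub_height(v10, 40.0), 2), "m/s at 40 m")
```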
Fuel cells operate at stationary (Figure 2) or near-stationary regimes (Figure 3) owing to the presence of the hydrogen store.

The main technical and economic indices of the power system components that determine the economic efficiency of the considered power supply schemes are the specific capital investments, fixed operating costs, efficiency, and lifetime.

Table 1 shows the corresponding indices for two power levels (50 kW and 1000 kW) and two time points, that is, current indices (equipment available in the market) and prospective indices. The power consumed (50 or 1000 kW) influences the technical and economic indices of the equipment used, and the degree of load unevenness influences the optimal relationship between the installed capacities as well as the operating conditions of the energy sources.

Table 1. Technical and economic characteristics of the power system components (values given as 50 kW / 1000 kW).

**Current indices**

| Component | Specific investments ($/kW)* | Specific fixed costs (% of investments) | Efficiency (%)** | Lifetime (years) |
|---|---|---|---|---|
| Diesel | 500 / 315 | 7 / 5 | 32 / 35 | 10 / 10 |
| Wind turbine | 1800 / 1300 | 3 / 2 | 35 / 35 | 20 / 20 |
| Electrolyzer | 3500 / 1600 | 3 / 2 | 70 / 70 | 10 / 10 |
| Hydrogen tank | 880 / 570 | 1 / 1 | 95 / 95 | 10 / 10 |
| Fuel cells | 5000 / 3000 | 2.5 / 2 | 40 / 40 | 4 / 5 |

**Prospective indices**

| Component | Specific investments ($/kW)* | Specific fixed costs (% of investments) | Efficiency (%)** | Lifetime (years) |
|---|---|---|---|---|
| Diesel | 450 / 280 | 7 / 5 | 34 / 37 | 10 / 10 |
| Wind turbine | 1600 / 1100 | 3 / 2 | 35 / 35 | 20 / 20 |
| Electrolyzer | 2000 / 1000 | 3.5 / 2.5 | 77 / 77 | 20 / 20 |
| Hydrogen tank | 600 / 400 | 1 / 1 | 98 / 98 | 20 / 20 |
| Fuel cells | 2500 / 1500 | 3.5 / 2.5 | 60 / 60 | 10 / 10 |

*Specific investments for the hydrogen tank are given in $/m3. **The efficiency of wind turbines is given for information only (the calculations use the wind turbine power curve).

These variants are formed on the basis of the studies presented in [3, 4, 17–24]. Current indices are taken from the price lists of Russian equipment manufacturers (with rubles converted to dollars). Among the available forecasts of changes in the indices over time, we have chosen the “moderately optimistic” ones (the most probable from the authors’ viewpoint).

The main problems of hydrogen energy are the high cost of equipment and the complexity of storing and transporting hydrogen in both gaseous and liquid form. Hydrogen production by electrolysis from “excess” wind turbine power right at the place of consumption makes this process cheaper and obviates the need to transport hydrogen. Further technological development and increases in hydrogen production and utilization will considerably improve the economic characteristics of the basic hydrogen system components, that is, electrolyzers and fuel cells [17–24].

Investment in wind turbines covers the costs of assembly; construction of the foundation and tower; connection to the network; and installation of inverters and of control and automation modules, which on average make up about 25% of the equipment price.

Investment in electrolyzers covers the costs of workshop construction; creation of systems for water treatment and supply, electrolyte circulation and filtration, gas collection and purification, and water condensation and cooling; and the systems of automation and control (about 30% of the equipment cost).

The cost indices of the hydrogen tank are given per unit of its capacity.
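For illustration, the cost formula (5) can be evaluated directly with the Table 1 data; the sketch below is a minimal reading of (5), and the capacity factors and fuel price are example inputs rather than results from the paper.

```python
import math

THETA = 11.6e3  # kWh/toe, energy equivalent from the text

def electricity_cost(k, mu, life_yr, cf, eta=None, fuel_price=0.0, d=0.10):
    """Electricity cost S per eq. (5) in $/kWh.
    k: specific investment, $/kW; mu: fixed costs as a yearly share of k;
    cf: capacity factor; eta: efficiency of a fuel-burning source;
    fuel_price: $/toe."""
    sigma = math.log(1.0 + d)
    crf = sigma / (1.0 - math.exp(-sigma * life_yr))   # capital recovery factor
    h = cf * 8760.0                                    # utilization hours/year
    fuel = fuel_price / (THETA * eta) if eta else 0.0
    return (k / h) * (crf + mu) + fuel

# Current indices at the 1000 kW level (Table 1); capacity factors assumed.
print(electricity_cost(k=315, mu=0.05, life_yr=10, cf=0.6,
                       eta=0.35, fuel_price=800))      # diesel, ~0.21 $/kWh
print(electricity_cost(k=1300, mu=0.02, life_yr=20, cf=0.25))  # wind, ~0.08 $/kWh
```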
## 5. Calculation Results and Their Analysis
For the first variant (scheme in Figure 1), Figure 5 presents the electricity cost values for wind turbines and diesel generators calculated by (5) with a discount rate of 10% and a diesel price of $800/toe. In this case, to estimate the economic efficiency of wind turbines, we should compare the cost of the power they generate to the fuel component of the cost of power produced by diesel generators.

Figure 5. Cost of electricity generated by wind turbines and diesel generators (fuel component at a fuel price of $800/toe) for the scheme in Figure 1: 1: wind turbine (50 kW); 2: wind turbine (1000 kW); 3: diesel generator (50 kW); 4: diesel generator (1000 kW).

Figure 5 shows that even at a low long-term average annual wind speed (about 4.5 m/s), wind turbines are competitive with diesel generators. This is indicative of the great potential of wind turbines in autonomous (decentralized) power supply systems, both in coastal areas (coasts of seas and oceans) and in some continental regions of Russia.

Figures 6 and 7 present the results of calculations for the scheme presented in Figure 2. The technical and economic indices (current and prospective) correspond to the high load power (1000 kW in Table 1). In the calculations, the electrolyzer power was chosen to be optimal (at high wind speed, it turned out to be equal to the installed capacity of the wind turbines; at low wind speed, less); the power of the fuel cells was chosen according to the condition of constant generated power; and the hydrogen tank capacity was chosen according to the condition of continuous operation of the fuel cells at constant power during 120 hours.

Figure 6. Cost of electricity generated by fuel cells (a) and cost of hydrogen supplied from the hydrogen tank (b) for the scheme in Figure 2: 1: current indices of the system components; 2: prospective indices; 3: electricity from the diesel generator at a fuel price of $800/toe (a) and fuel price of $800/toe (b); 4: the same, fuel price of $1400/toe.

Figure 7. Structure of the costs of electricity (a) and hydrogen (b) production at a wind speed of 6 m/s for the scheme in Figure 2: 1: current indices of the system components; 2: prospective indices. WT: wind turbines; EL: electrolyzer; HT: hydrogen tank; FC: fuel cells.

The cost of electricity in Figure 6(a) is compared to the cost of electricity generated by diesel generators (current indices, power of 1000 kW in Table 1), and the hydrogen cost (Figure 6(b)) is compared to the price of diesel fuel. To ensure comparability, we assume that the diesel generator, like the fuel cells, operates with a capacity factor equal to 1. As can be seen, according to the current technical and economic indices (Table 1), the electricity generated by wind turbines and the hydrogen produced by the electrolyzer are more expensive than the electricity generated by the diesel generator and diesel fuel, respectively.

The main component of the hydrogen cost is the electrolyzer, and of the electricity cost, the fuel cells (Figure 7). However, in the future, the hydrogen system will become competitive, since even in the case of relatively cheap fuel and an average wind speed starting from 5-6 m/s, fuel cells generate cheaper electricity than diesel generators. This is achieved thanks to a significant decrease in the specific investment in fuel cells (Table 1). Moreover, the cost of hydrogen becomes lower than the cost of diesel fuel at higher wind speeds. This is explained by the higher efficiency of fuel cells compared with diesel generators.

Taking into account the obtained results, the authors made calculations for the scheme in Figure 3 only for the prospective technical and economic indices (Figures 8 and 9).

Figure 8. Zones of technological efficiency with load powers of 50 kW (a) and 1000 kW (b) and prospective indices of the electrolyzer, hydrogen tank, and fuel cells for the scheme in Figure 3. V is the average wind speed at a height of 10 m; p is the diesel price; WT: wind turbines; DG: diesel generator; FC: fuel cells.

Figure 9. Electricity cost for the scheme in Figure 3 at a fuel price of $1100/toe and with load powers of 50 kW (a) and 1000 kW (b) and prospective indices of electrolyzers, hydrogen tanks, and fuel cells. V is the average wind speed at a height of 10 m; p is the diesel price; DG: diesel generator; WT: wind turbines; FC: fuel cells.

Figure 8 presents the efficiency zones of the energy technologies (the optimal structure of the system depending on the diesel price and average wind speed). At low wind speed and a low fuel price, it is reasonable to use only a diesel generator to supply power to consumers. When the fuel price and wind speed increase, it first becomes more economical to use a wind/diesel system and then wind turbines with a hydrogen system. In the latter case, according to the optimization results, the diesel generator is excluded from the system. Cost effectiveness of wind turbines and fuel cells at a load of 1000 kW is achieved at lower wind speeds and lower diesel prices than for a load of 50 kW, since the specific indices of larger energy sources are better. When the load equals 1000 kW, the application of a diesel generator alone turns out to be inefficient in the considered region of parameters.

Zones of technological efficiency allow us to determine the best structure of the power supply system for the given conditions. Moreover, we should know what economic effect can be achieved by applying the optimal structure, because only if the effect is sufficient does it make sense to complicate the system by adding extra components. The corresponding data are presented in Figure 9. As can be seen, the size of the achieved effect is quite significant.

For example, with a load of 1000 kW and an average wind speed of 6 m/s, the construction of wind turbines in addition to the diesel generator makes it possible to decrease the electricity cost from 27 to 20 cent/kWh, and the replacement of the diesel generator by an electrochemical unit, to 15 cent/kWh. Thus, the introduction of a wind/hydrogen system allows the costs of electricity production to be halved compared with power supply from the diesel generator alone.
## 6. Conclusion
The paper presents an analysis of the economic efficiency of harnessing wind energy in the autonomous power systems of Russia. Wind turbines are shown to be competitive in many of the considered variants (groups of consumers, placement areas, and climatic and meteorological conditions).

At the current prices of fossil (diesel) fuel, the application of wind/diesel systems is efficient even if the long-term average annual wind speed at the wind turbine site is relatively low (about 4.5 m/s). In regions with a long-term average annual wind speed higher than 5 m/s, the application of wind turbines can reduce the price of the generated electricity by 50% or more (compared with the use of diesel generators alone).

During periods of strong wind, some of the power generated by wind turbines turns out to be redundant. Therefore, it becomes possible to use this power for hydrogen production by electrolysis and its subsequent electrochemical transformation into electricity. Under certain conditions, the introduction of a subsystem for hydrogen production, storage, and use for energy purposes into the wind/diesel system can decrease diesel consumption (or even completely exclude diesel generators from the system) and reduce the costs of electricity production.

The study considered the possibility of storing energy in the form of hydrogen in autonomous wind/diesel/hydrogen power systems that include a diesel generator, electrolyzer, hydrogen tank, and fuel cells. The authors determined the zones of economic efficiency of the system depending on the load power, fuel price, and long-term average annual wind speed.

The technical and economic characteristics of the equipment now available in the market still do not allow the hydrogen system (electrolyzer, hydrogen tank, and fuel cells) to be competitive with diesel or wind/diesel power plants. However, the indices projected for the near future (an approximately twofold decrease in the specific capital investment in fuel cells against the current level) make the hydrogen system economically efficient at the fuel prices typical of autonomous power systems ($800–$1400/toe) and average annual wind speeds starting from 5-6 m/s. Application of the system for hydrogen production and use for energy purposes will considerably reduce (by more than 50%) the costs of power supply to consumers.
---
*Source: 101972-2013-04-08.xml* | 101972-2013-04-08_101972-2013-04-08.md | 31,819 | Economic Efficiency Assessment of Autonomous Wind/Diesel/Hydrogen Systems in Russia | O. V. Marchenko; S. V. Solomin | Journal of Renewable Energy
(2013) | Engineering & Technology | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2013/101972 | 101972-2013-04-08.xml | ---
## Abstract
The economic efficiency of harnessing wind energy in the autonomous power systems of Russia is analyzed. Wind turbines are shown to be competitive for many considered variants (groups of consumers, placement areas, and climatic and meteorological conditions). The authors study the possibility of storing energy in the form of hydrogen in the autonomous wind/diesel/hydrogen power systems that include wind turbines, diesel generator, electrolyzer, hydrogen tank, and fuel cells. The paper presents the zones of economic efficiency of the system (set of parameters that provide its competitiveness) depending on load, fuel price, and long-term average annual wind speed. At low wind speed and low price of fuel, it is reasonable to use only diesel generator to supply power to consumers. When the fuel price and wind speed increase, first it becomes more economical to use a wind-diesel system and then wind turbines with a hydrogen system. In the latter case, according to the optimization results, diesel generator is excluded from the system.
---
## Body
## 1. Introduction
In the recent years, the energy policy of many countries has been aimed at increasing the share of renewable energy sources (RES) in the total energy production. In Russia, the share of RES (without large hydropower plants) in the electricity production does not exceed 1%. However, the “Energy Strategy of Russia for the period up to 2030” (approved by the Russian Government) suggests that, in 20 years, this share may increase up to 4.5%.Providing a substantial environmental effect (decrease in the emissions from energy sector), RES can often be economically efficient and competitive with the energy sources based on fossil fuel [1–7]. It is expected that, in the nearest and, moreover, distant future, the role of RES in Russian and world energy industry will essentially increase due to the improvement of characteristics and a projected rise in the fossil fuel price [8–12].It is reasonable to use RES primarily in small autonomous (decentralized) power systems located in remote hard-to-reach areas, where the price of imported fossil fuel is very high. Russian zones of decentralized power supply that do not have any modern electrical networks and large energy sources occupy about 70% of the country and are situated mostly in the Far North. The Far North is represented by a number of regions in the European part of the country (Murmansk and Arkhangelsk regions, the Republic of Karelia, and the Republic of Komi), Siberia (the north of Tyumen Region and Krasnoyarsk Territory), and the Far East (Yakutia, Chukotka, Magadan, Kamchatka, and Sakhalin regions). These territories have significant reserves of gold, platinum, diamonds, tin, lead, and other mineral resources.In total, about 1400 small settlements with a population of approximately 20 million people are located in decentralized power supply zones of Russia [3, 4]. About 7 thousand diesel power plants operate here. The majority of them have a high degree of physical deterioration and low efficiency. The cost of electricity produced by new diesel generators usually exceeds 25–30 cent/kWh (US dollar for economic assessment is used throughout the paper), and in the remotest regions with old diesel generators, it exceeds 50–70 cent/kWh.Decentralized systems of power supply are characterized by the following features:(i)
settlements are spread throughout large scarcely populated territories;(ii)
electric load of consumers in each settlement does not exceed 1–3 MW;(iii)
transport infrastructure is underdeveloped;(iv)
the main electricity source is diesel power plants;(v)
diesel power plants use expensive diesel fuel, which usually costs more than $800–$1000/toe [4].Therefore, the introduction of renewables to increase the cost effectiveness of power supply is an urgent problem for these systems. One of the most effective types of RES is wind turbines [1, 4].In 2011, the total installed capacity of all wind turbines operating in the world accounted for 238 GW (increased by 20% as compared to 2010) [13]. Wind turbines generate about 2.5% of the total electricity production in the world. As opposed to both many developed and developing countries, Russia uses wind energy very little. The installed capacity of wind turbines is about 20 MW, and the rate of wind energy development was less than 10% in 2010 and 2011 [14]. The share of wind turbines in the total electricity production in Russia is negligible (less than 0.01%).Meanwhile, the potential for the wind energy development in Russia is really great. Russia has the longest shoreline in the world, abundant treeless flatlands, and large water areas of inner rivers, lakes, and seas, which represent the most favorable sites for wind turbines.The main advantages of wind turbines are as follows:(i)
no harmful emissions in the process of electricity production;(ii)
relative cheapness of generated electricity (3–5 cent/kWh for the best turbines under good wind conditions);(iii)
possibility of significantly saving on fossil fuel in the course of operation in an autonomous power system.At the same time, wind turbines have a flaw. This is an unsteady electricity generation (depending on changes in wind speed). Therefore, wind turbines are used in combination with energy sources that operate under the controlled conditions and supply power to the load when power generated by wind turbines is insufficient or during their downtime. Besides, power systems with wind turbines include energy storage devices.The research aims to study the economic efficiency of harnessing wind energy in Russia for a wide range of parameters (fossil fuel price, climatic and meteorological conditions, power and load curve of consumers, and current and prospective technical and economic indices of the power system components). First of all, the authors consider the most promising wind/diesel systems, including the systems that produce, store, and use hydrogen in fuel cells (wind/diesel/hydrogen systems).
## 2. Variants of Harnessing Wind Energy
The authors consider three variants of harnessing wind energy in autonomous power systems to supply power to consumers.

In the first variant, wind turbines are included in a wind/diesel system (Figure 1). The capacity of the diesel generator is chosen so that it provides uninterrupted power supply to consumers even in the case of wind turbine downtime during calms. In periods of strong wind, however, some of the power generated by the wind turbines turns out to be redundant and is diverted to the dump load.

Figure 1: Wind/diesel system.

The other two variants allow for the storage of energy in the form of hydrogen, produced by electrolysis and electrochemically transformed back into electric energy. Under certain conditions, introducing a subsystem for the production, storage, and energy use of hydrogen into the wind/diesel system can decrease the diesel fuel consumption (or even exclude the diesel generator from the system) and reduce the cost of electricity production.

The second variant considers a simplified (linear) scheme in which electric energy or hydrogen sequentially passes through the system components; the diesel generator is excluded (Figure 2). Here, it is assumed that the volume of hydrogen produced is sufficient to keep the electric power of the fuel cells constant. This means that the combination of wind turbines with stochastic power generation and a hydrogen system (electrolyzer, hydrogen tank, and fuel cells) yields a new property of the energy source, namely, constant generation (at additional cost). If the installed capacity of the electrolyzer is lower than the capacity of the wind turbines, some of the generated electricity may turn out to be redundant and will be diverted to the dump load. The use of excess electricity for heat supply (as well as of the heat generated by fuel cells) was not considered in this paper. In fact, the utilization of this heat can prove economical and increase the efficiency of the whole system [1]. However, this would require introducing additional components into the scheme (heat exchangers, pipelines, pumps, etc.) and a more detailed consideration of consumer-specific features, which goes beyond the scope of this paper.

Figure 2: Wind/hydrogen system.

In the third variant (Figure 3), the autonomous power system includes diesel units, one or several wind turbines, an electrolyzer, a hydrogen tank, fuel cells, and electricity consumers with their load curve. It is assumed that the diesel power plant, wind turbines, electrolyzer, and fuel cells are provided with all the devices required to control the network. The wind turbines supply electricity directly to the consumers with a changing load. If this electricity is insufficient, the diesel generator and fuel cells can operate simultaneously. The excess power from the wind turbines is consumed by the electrolyzer (if the hydrogen tank is not fully filled with hydrogen) or absorbed by the dump load.

Figure 3: Wind/diesel/hydrogen system.
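The three configurations can be summarized as component lists. The following sketch records them as a simple Python configuration map; the component names are illustrative labels, not identifiers from the authors' software:

```python
# Components of the three power system variants considered (Figures 1-3).
# A plain configuration map; names are illustrative labels only.
VARIANTS = {
    "wind/diesel (Fig. 1)": [
        "wind turbine", "diesel generator", "dump load",
    ],
    "wind/hydrogen (Fig. 2)": [
        "wind turbine", "electrolyzer", "hydrogen tank",
        "fuel cells", "dump load",
    ],
    "wind/diesel/hydrogen (Fig. 3)": [
        "wind turbine", "diesel generator", "electrolyzer",
        "hydrogen tank", "fuel cells", "dump load",
    ],
}
```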
## 3. Calculation Method
As is known [8], the cost effectiveness of an investment project is characterized by the net present value (NPV), that is, the total income obtained during the project period T discounted to the initial time point:

$$\mathrm{NPV}=\int_{0}^{T}E(\tau)\exp(-\sigma\tau)\,d\tau, \qquad (1)$$

where E(τ) is the cash flow, σ = ln(1 + d), and d is the annual discount rate.

For many energy sources, including RES, we can use the following simplified model of their construction and operation: construction takes a short time and requires capital investments K. Right after construction, the energy source starts to operate under nominal conditions with average annual costs Z and electricity supply Q at price p_e. Then

$$\mathrm{NPV}=-K+\int_{0}^{T}\left(p_{e}Q-Z\right)\exp(-\sigma\tau)\,d\tau=\frac{1}{\sigma}\left[1-\exp(-\sigma T)\right]\left(p_{e}Q-Z\right)-K, \qquad (2)$$
where p_e is the electricity price. Among the compared alternative variants, the best one provides the maximum NPV.

The costs Z can be divided into two parts: constant components (which do not depend on the volume of electricity production) and variable components, consisting mainly of fuel costs:

$$Z=\mu K+p_{f}\,b\,Q, \qquad (3)$$

where μ is the specific constant cost (a share of the investments), p_f is the fuel price, and b is the specific fuel consumption, a value inversely proportional to the efficiency η: in the units used here, b = 1/(θη). Then, from (1)–(3), we obtain
$$\mathrm{NPV}=\frac{1}{\sigma}\left[1-\exp(-\sigma T)\right]Q\left(p_{e}-S\right), \qquad (4)$$

where

$$S=\frac{\mathrm{CRF}\cdot K+Z}{Q}=\frac{k}{h}\,\mathrm{CRF}+\frac{k}{h}\,\mu+\frac{p_{f}}{\theta\cdot\eta} \qquad (5)$$

is the electricity cost. Here, the following notation is used: k is the specific capital investment (per unit of power P), so that K = kP; h = CF·H is the annual number of utilization hours (CF is the capacity factor, H = 8760 h/year), so that Q = Ph and K/Q = k/h; CRF = σ/[1 − exp(−σT)] is the capital recovery factor; and θ = 11.6·10³ kWh/toe is the energy equivalent of the fuel.

According to (4), the electricity cost S represents the minimum price at which the project is cost effective (the net present value equals zero), and the best variant (NPV = max) should be chosen on the basis of the criterion of minimum electricity cost (5). The terms on the right-hand side of (5) represent, respectively, the capital, O&M, and fuel components of the electricity cost. The fuel price for energy sources based on the energy of wind, sun, or rivers equals zero (p_f = 0), while the capacity factor CF depends significantly on meteorological conditions [1, 3, 4, 15, 16].

To assess the competitiveness of RES, we should compare the cost of energy produced on their basis with the cost of energy from competing energy sources. In the case where RES operate under uncontrolled (stochastic) conditions and require full capacity backup, the cost of the energy they produce should be compared not to the total cost but to the fuel component of the cost of energy produced by the fossil-fuel energy source [3, 4]. This comparison makes it possible to assess the competitiveness of RES (Figure 1) to a first approximation and to exclude inefficient variants from further consideration. By analogy, we can make a preliminary estimate of the hydrogen and electricity cost (Figure 2) to be compared with the cost of diesel fuel and of electricity from the diesel generator. Such an estimate proves very illustrative and narrows the feasible region of parameters used in the calculations for the autonomous power system, taking the interaction between its components into account.

More accurate calculations require that the system effects related to power flows between the system components and the storage be taken into account. Optimization of the structure and operation of the autonomous power system (Figure 3) reduces to the following problem: minimize the objective function (electricity cost)
$$S=\frac{1}{Q}\left[\sum_{i}\left(\mathrm{CRF}_{i}K_{i}+Z_{i}\right)\right]\longrightarrow\min, \qquad (6)$$

subject to

$$P_{\mathrm{DG}}(t)+P_{\mathrm{WT}}(t)+P_{\mathrm{FC}}(t)=L(t)+U(t), \qquad (7)$$

$$0\le P_{i}(t)\le P_{i}^{\max}, \qquad (8)$$

$$U(t)\ge 0, \qquad (9)$$

$$P_{\mathrm{WT}}(t)=P_{\mathrm{WT}}^{\max}\,f(v), \qquad (10)$$

$$P_{\mathrm{FC}}(t)\le \frac{P_{\mathrm{HT}}^{*}(t)\,\eta_{\mathrm{HT}}\,\eta_{\mathrm{FC}}}{\Delta t}, \qquad (11)$$

$$P_{\mathrm{EL}}(t)=\min\left(U(t),\ \frac{P_{\mathrm{HT}}^{\max *}-P_{\mathrm{HT}}^{*}(t)}{\eta_{\mathrm{EL}}\,\Delta t}\right), \qquad (12)$$

$$P_{\mathrm{HT}}^{*}(t)=P_{\mathrm{HT}}^{*}(t-\Delta t)+\left[P_{\mathrm{EL}}(t)\,\eta_{\mathrm{EL}}-\frac{P_{\mathrm{FC}}(t)}{\eta_{\mathrm{FC}}\,\eta_{\mathrm{HT}}}\right]\Delta t. \qquad (13)$$
The following notation is used: P is the power of an energy source; P* is the energy equivalent of the hydrogen in the hydrogen tank; t is time; L is the load; U is the power surplus; f(v) is the wind turbine power curve; Δt is the time step. The index i denotes the type of energy source (DG: diesel generator; WT: wind turbine; EL: electrolyzer; HT: hydrogen tank; FC: fuel cells); max and min denote the maximum and minimum values.

Equation (7) is the power balance at time t; (8) gives the power constraints; (9) is the condition of shortage-free power supply; (10) is the dependence of wind turbine power on wind speed (a random value); (11) is the constraint on the power of the fuel cells with respect to the hydrogen reserve; (12) is the constraint on the power of the electrolyzer with respect to the power surplus and the available capacity in the hydrogen tank; (13) is the hydrogen balance in the hydrogen tank.

To solve system (6)–(13), the algorithm described in [7] was used. Continuous functions of time were replaced with sets of discrete values with a step of 1 hour. Wind speed was modeled as a random process with alternating periods of low and high wind speeds. This is essential for systems with energy storage and, in this case, for determining the optimal capacity of the hydrogen tank.

At the first stage, with the installed capacities of the energy sources and the hydrogen tank capacity given, the operating conditions are optimized at each time point according to the criterion of minimum fuel costs. The electric load (7) is covered first by energy from the wind turbines, then by the energy stored in the hydrogen tank, and finally by energy from the diesel generator. The excess energy from the wind turbines is sent to the electrolyzer for hydrogen production. At the second stage, after the operating conditions have been calculated on the entire time interval from t = 0 to t = T, the installed capacities of the energy sources and the hydrogen tank capacity are optimized according to criterion (6) subject to constraints (8) and (9). A simplified sketch of the first-stage dispatch rule is given below.
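To make the first-stage rule concrete, the following minimal Python sketch runs the merit-order dispatch over an hourly wind and load series under constraints (7)–(13). It is not the authors' code (which follows [7]); all ratings, the tank capacity, and the input series are illustrative assumptions, while the efficiencies are the prospective values from Table 1.

```python
# A minimal sketch of the first-stage hourly dispatch: the load is covered
# by wind first, then by stored hydrogen through the fuel cells, and finally
# by the diesel generator; surplus wind feeds the electrolyzer (eqs. (7)-(13)).
ETA_EL, ETA_HT, ETA_FC = 0.77, 0.98, 0.60           # efficiencies (Table 1)
P_FC_MAX, P_EL_MAX, P_DG_MAX = 400.0, 800.0, 700.0  # kW, assumed ratings
E_HT_MAX = 50_000.0                                 # kWh, assumed tank capacity
DT = 1.0                                            # time step, hours


def dispatch(wind_kw, load_kw, e_ht=0.0):
    """Run the merit-order dispatch over hourly series; return totals."""
    diesel_kwh = dump_kwh = 0.0
    for p_wt, load in zip(wind_kw, load_kw):
        residual = load - p_wt
        if residual > 0.0:
            # Fuel cell power limited by rating and hydrogen reserve, eq. (11).
            p_fc = min(residual, P_FC_MAX, e_ht * ETA_HT * ETA_FC / DT)
            e_ht -= p_fc * DT / (ETA_FC * ETA_HT)   # tank balance, eq. (13)
            diesel_kwh += min(residual - p_fc, P_DG_MAX) * DT
        else:
            # Surplus wind charges the electrolyzer, eq. (12); rest is dumped.
            u = -residual
            p_el = min(u, P_EL_MAX, (E_HT_MAX - e_ht) / (ETA_EL * DT))
            e_ht += p_el * ETA_EL * DT              # tank balance, eq. (13)
            dump_kwh += (u - p_el) * DT
    return diesel_kwh, dump_kwh


# Example: two calm days followed by two windy days, constant 685 kW load.
wind = [0.0] * 48 + [900.0] * 48
load = [685.0] * 96
print(dispatch(wind, load))
```

The second stage would wrap this routine in an outer search over the installed capacities and the tank size, evaluating criterion (6) for each candidate.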
## 4. Initial Data
The factors that determine the economic efficiency of using wind turbines and a hydrogen system in addition to the diesel power plant (or instead of it) in autonomous power systems are:

(i) wind speed,
(ii) diesel fuel price,
(iii) maximum power and degree of load unevenness,
(iv) technical and economic indices of the power plants.

The most important energy characteristic of the wind is its long-term average annual speed V measured at the height of a weather vane (about 10 m). Figure 4 presents the distribution of this characteristic across the territory of Russia. The zones of low (up to 4 m/s), medium (4–6 m/s), and high (more than 6 m/s) wind speeds are highlighted. The average wind speed reaches its maximum on the seacoasts and decreases in the continental areas.
Figure 4: Long-term average annual wind speed V (m/s) at a height of 10 m on the territory of Russia.

In the European part of the country, the annual average wind speed is 2–4 m/s; on the coasts of the Arctic Ocean and the Baltic Sea and in some regions of the North Caucasus and the Volga region, it reaches 5–6 m/s. The highest wind speed (more than 7 m/s) is typical of the coastal areas of the Arkhangelsk and Murmansk regions. In the Asian part of the country, the zone of light wind (less than 2-3 m/s) covers large territories of the continental areas of Siberia and the Far East. Higher wind speed (4-5 m/s) is characteristic of some mountainous areas, the coast of Lake Baikal, and the valleys of large Siberian rivers (the Ob, Yenisei, Angara, and Lena). The highest long-term average annual wind speed (above 6–8 m/s) is found in the coastal areas of the Tyumen region, Krasnoyarsk Territory, Magadan region, Chukotka, Kamchatka, Sakhalin, and the islands of the Arctic and Pacific Oceans.

In this study, the calculations were made for average wind speeds of 4–8 m/s, which corresponds to a range of wind conditions from "bad" to "very good." Wind speed is distributed according to the Weibull distribution with a shape parameter α = 1.5 [1]. The relationship between wind speed and height was described by a logarithmic dependence with a surface roughness z₀ = 0.03 m [1]. The operating characteristic of the wind turbines (power versus wind speed) is taken in accordance with the data of the company "Fuhrländer"; the height of the wind turbine tower is 20 m for the 50 kW load and 40 m for the 1000 kW load.

A retrospective analysis shows that, over the last 10–15 years, the price of diesel fuel has increased 2-3 times. Today, the minimum wholesale price of diesel fuel is in the North Caucasus ($780/toe), and the maximum is in the Republic of Yakutia ($1600/toe). For the southern and central regions of the European part of Russia, typical prices are $800–$1000/toe; for the northwest regions of European Russia and the northeast regions of the Far East, they equal $1100–$1500/toe. In the calculations, diesel fuel prices in the range $800–$1500/toe were considered, which corresponds to conditions from "relatively cheap" to "moderately expensive" diesel fuel for autonomous power systems.

The majority of decentralized electricity consumers live in rural and urban settlements with populations from 50 to 5000 people. We consider the following consumers: (1) a time-constant power of 50 or 1000 kW (for the schemes in Figures 1 and 2); (2) small consumers with a variable electric load of up to 50 kW; (3) larger consumers with a load of up to 1000 kW (for the scheme in Figure 3). In the latter two cases, the load power was assumed to be normally distributed. The parameters of the normal law were chosen by approximating the annual load curve: for a load of 50 kW (maximum value), the average power is 20 kW and the standard deviation is 5 kW; for the load of 1000 kW, they are 685 kW and 100 kW, respectively.

The power of the backup energy source for the wind turbines (diesel or fuel cells) was chosen according to the condition of power supply to consumers under any wind conditions. This allows us not to include in the objective function the losses from power undersupply, whose numerical value is subject to significant uncertainty. To take into account the startup inertia of the diesel generator and to avoid undesirable sharp changes in its operation, the diesel generator was assumed to operate constantly (minimum power is 30% of the installed capacity). A sketch of the wind speed model is given below.
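As an illustration of these wind assumptions, the sketch below samples hourly wind speeds from a Weibull distribution with shape α = 1.5, rescales them from the 10 m weather-vane height to hub height with the logarithmic profile (z₀ = 0.03 m), and applies a power curve. The cut-in, rated, and cut-out speeds and the curve shape are illustrative assumptions; the paper's calculations use the actual Fuhrländer turbine characteristics.

```python
import math
import random

ALPHA = 1.5                # Weibull shape parameter [1]
Z0 = 0.03                  # surface roughness, m [1]
Z_REF, Z_HUB = 10.0, 40.0  # weather-vane height and hub height (1000 kW case), m


def weibull_speed(v_mean):
    """Sample a wind speed (m/s) from a Weibull law with a given mean."""
    # Scale c follows from the mean: mean = c * Gamma(1 + 1/alpha).
    c = v_mean / math.gamma(1.0 + 1.0 / ALPHA)
    u = 1.0 - random.random()                    # uniform in (0, 1]
    return c * (-math.log(u)) ** (1.0 / ALPHA)   # inverse-CDF sampling


def to_hub_height(v10):
    """Logarithmic wind profile from 10 m to hub height."""
    return v10 * math.log(Z_HUB / Z0) / math.log(Z_REF / Z0)


def power_curve(v, p_rated=1000.0, v_in=3.5, v_rated=12.0, v_out=25.0):
    """Generic turbine power curve (kW); illustrative, not the Fuhrlaender data."""
    if v < v_in or v >= v_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * (v**3 - v_in**3) / (v_rated**3 - v_in**3)


# Estimate the capacity factor at a 6 m/s site over one simulated year.
hours = 8760
energy = sum(power_curve(to_hub_height(weibull_speed(6.0))) for _ in range(hours))
print("capacity factor ~", round(energy / (1000.0 * hours), 2))
```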
Owing to the presence of the hydrogen store, the fuel cells operate under stationary (Figure 2) or near-stationary (Figure 3) conditions.

The main technical and economic indices of the power system components that determine the economic efficiency of the considered power supply schemes are the specific capital investments, fixed operating costs, efficiency, and lifetime. Table 1 shows the corresponding indices for two power levels (50 kW and 1000 kW) and two time points, that is, the current indices (equipment available in the market) and the prospective indices. The value of the power consumed (50 or 1000 kW) influences the technical and economic indices of the equipment used, and the degree of load unevenness influences the optimal relationship between the installed capacities as well as the operating conditions of the energy sources.
Table 1: Technical and economic characteristics of the power system components.

**Current indices**

| Component | Investments*, 50 kW ($/kW) | Investments*, 1000 kW ($/kW) | Fixed costs, 50 kW (% of inv.) | Fixed costs, 1000 kW (% of inv.) | Efficiency**, 50 kW (%) | Efficiency**, 1000 kW (%) | Lifetime, 50 kW (years) | Lifetime, 1000 kW (years) |
|---|---|---|---|---|---|---|---|---|
| Diesel | 500 | 315 | 7 | 5 | 32 | 35 | 10 | 10 |
| Wind turbine | 1800 | 1300 | 3 | 2 | 35 | 35 | 20 | 20 |
| Electrolyzer | 3500 | 1600 | 3 | 2 | 70 | 70 | 10 | 10 |
| Hydrogen tank | 880 | 570 | 1 | 1 | 95 | 95 | 10 | 10 |
| Fuel cells | 5000 | 3000 | 2.5 | 2 | 40 | 40 | 4 | 5 |

**Prospective indices**

| Component | Investments*, 50 kW ($/kW) | Investments*, 1000 kW ($/kW) | Fixed costs, 50 kW (% of inv.) | Fixed costs, 1000 kW (% of inv.) | Efficiency**, 50 kW (%) | Efficiency**, 1000 kW (%) | Lifetime, 50 kW (years) | Lifetime, 1000 kW (years) |
|---|---|---|---|---|---|---|---|---|
| Diesel | 450 | 280 | 7 | 5 | 34 | 37 | 10 | 10 |
| Wind turbine | 1600 | 1100 | 3 | 2 | 35 | 35 | 20 | 20 |
| Electrolyzer | 2000 | 1000 | 3.5 | 2.5 | 77 | 77 | 20 | 20 |
| Hydrogen tank | 600 | 400 | 1 | 1 | 98 | 98 | 20 | 20 |
| Fuel cells | 2500 | 1500 | 3.5 | 2.5 | 60 | 60 | 10 | 10 |

*Specific investments for the hydrogen tank are given in $/m³; **the efficiency of wind turbines is given for information only (the calculations use the wind turbine characteristics (power curve)).

These variants are formed on the basis of the studies presented in [3, 4, 17–24]. The current indices are taken from the price lists of Russian equipment manufacturers (with rubles converted to dollars). Among the available forecasts of changes in the indices over time, we have chosen the "moderately optimistic" ones (the most probable from the authors' viewpoint).

The main problems of hydrogen energy are the high cost of equipment and the complexity of storing and transporting hydrogen in both gas and liquid form. Producing hydrogen by electrolysis from the "excess" power of wind turbines right at the place of consumption makes the process cheaper and obviates the need to transport hydrogen. Further technological development and growth in hydrogen production and utilization will considerably improve the economic characteristics of the basic hydrogen system components, that is, electrolyzers and fuel cells [17–24].

Investment in wind turbines covers the costs of assembly; construction of the foundation and tower; connection to the network; and installation of inverters and control and automation modules, which on average make up about 25% of the equipment price. Investment in electrolyzers covers the costs of workshop construction; creation of systems for water treatment and supply, electrolyte circulation and filtration, gas collection and purification, and water condensation and cooling; and the systems of automation and control (about 30% of the equipment cost). Cost indices of the hydrogen tank are given per unit of its capacity. A worked example of how these indices enter the electricity cost (5) is given below.
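As a cross-check on how the Table 1 indices feed equation (5), the following sketch computes the levelized electricity cost for a 1000 kW diesel generator and a 1000 kW wind turbine using the current indices. The discount rate, project periods, and capacity factors are assumptions chosen to roughly match the conditions of Section 5, not values stated alongside Table 1.

```python
import math

THETA = 11.6e3   # kWh/toe, energy equivalent of the fuel
H = 8760.0       # hours per year


def crf(d, T):
    """Capital recovery factor, CRF = sigma / (1 - exp(-sigma * T))."""
    sigma = math.log(1.0 + d)
    return sigma / (1.0 - math.exp(-sigma * T))


def electricity_cost(k, mu, T, cf, eta, p_fuel=0.0, d=0.10):
    """Equation (5): S = (k/h)*CRF + (k/h)*mu + p_f/(theta*eta), in $/kWh."""
    h = cf * H                                   # annual utilization hours
    fuel = p_fuel / (THETA * eta) if p_fuel else 0.0
    return (k / h) * (crf(d, T) + mu) + fuel


# Current indices for 1000 kW units (Table 1); CF values are assumptions.
s_dg = electricity_cost(k=315, mu=0.05, T=10, cf=0.6, eta=0.35, p_fuel=800.0)
s_wt = electricity_cost(k=1300, mu=0.02, T=20, cf=0.35, eta=1.0)  # p_f = 0
print(f"diesel: {100 * s_dg:.1f} cent/kWh, wind: {100 * s_wt:.1f} cent/kWh")
```

With these assumptions, the wind turbine comes out at roughly 5–6 cent/kWh, consistent with the 3–5 cent/kWh cited above for the best turbines under good wind conditions.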
## 5. Calculation Results and Their Analysis
For the first variant (the scheme in Figure 1), Figure 5 presents the electricity cost values for wind turbines and diesel generators calculated by (5) with a discount rate of 10% and a diesel price of $800/toe. In this case, to estimate the economic efficiency of wind turbines, we should compare the cost of the power they generate to the fuel component of the cost of power produced by diesel generators.

Figure 5: Cost of electricity generated by wind turbines and diesel generators (fuel component at a fuel price of $800/toe) for the scheme in Figure 1: 1: wind turbine (50 kW); 2: wind turbine (1000 kW); 3: diesel generator (50 kW); 4: diesel generator (1000 kW).

Figure 5 shows that even at a low long-term average annual wind speed (about 4.5 m/s), wind turbines are competitive with diesel generators. This is indicative of the great potential of wind turbines in autonomous (decentralized) power supply systems, both in coastal areas (coasts of seas and oceans) and in some continental regions of Russia.

Figures 6 and 7 present the results of calculations for the scheme in Figure 2. The technical and economic indices (current and prospective) correspond to the higher load power (1000 kW in Table 1). In the calculations, the electrolyzer power was chosen to be optimal (at high wind speed, it turned out to be equal to the installed capacity of the wind turbines; at low wind speed, less), the power of the fuel cells was chosen according to the condition of constant generated power, and the hydrogen tank capacity was chosen so that the fuel cells can operate continuously at constant power for 120 hours.

Figure 6: Cost of electricity generated by fuel cells (a) and cost of hydrogen supplied from the hydrogen tank (b) for the scheme in Figure 2: 1: current indices of the system components; 2: prospective indices; 3: electricity from the diesel generator at a fuel price of $800/toe (a) and diesel fuel at a price of $800/toe (b); 4: the same at a fuel price of $1400/toe.
Figure 7: Structure of the costs of electricity (a) and hydrogen (b) production at a wind speed of 6 m/s for the scheme in Figure 2: 1: current indices of the system components; 2: prospective indices. WT: wind turbines; EL: electrolyzer; HT: hydrogen tank; FC: fuel cells.

The cost of electricity in Figure 6(a) is compared to the cost of electricity generated by diesel generators (current indices, power of 1000 kW in Table 1), and the hydrogen cost (Figure 6(b)) is compared to the price of diesel fuel. To ensure comparability, we assume that the diesel generator, like the fuel cells, operates with a capacity factor equal to 1. As we can see, according to the current technical and economic indices (Table 1), the electricity generated by wind turbines and the hydrogen produced by the electrolyzer are more expensive than the electricity generated by the diesel generator and diesel fuel, respectively.

The main component of the hydrogen cost is the electrolyzer, and of the electricity cost, the fuel cells (Figure 7). However, in the future the hydrogen system will become competitive: even in the case of relatively cheap fuel, at an average wind speed starting from 5-6 m/s, fuel cells generate cheaper electricity than diesel generators. This is achieved thanks to a significant decrease in the specific investment in fuel cells (Table 1). Moreover, the cost of hydrogen becomes lower than the cost of diesel fuel at higher wind speeds. This is explained by the higher efficiency of fuel cells as compared to diesel generators.

Taking the obtained results into account, the authors made calculations for the scheme in Figure 3 only for the prospective technical and economic indices (Figures 8 and 9).

Figure 8: Zones of technological efficiency with a load power of 50 kW (a) and 1000 kW (b) and prospective indices of the electrolyzer, hydrogen tank, and fuel cells for the scheme in Figure 3. V is the average wind speed at a height of 10 m; p is the diesel price; WT: wind turbines; DG: diesel generator; FC: fuel cells.
Figure 9: Electricity cost for the scheme in Figure 3 at a fuel price of $1100/toe with a load power of 50 kW (a) and 1000 kW (b) and prospective indices of the electrolyzer, hydrogen tank, and fuel cells. V is the average wind speed at a height of 10 m; p is the diesel price; DG: diesel generator; WT: wind turbines; FC: fuel cells.

Figure 8 presents the efficiency zones of the energy technologies (the optimal structure of the system depending on the diesel price and the average wind speed). At low wind speed and a low fuel price, it is reasonable to use only the diesel generator to supply power to consumers. As the fuel price and wind speed increase, it first becomes more economical to use a wind/diesel system and then wind turbines with a hydrogen system. In the latter case, according to the optimization results, the diesel generator is excluded from the system. Cost effectiveness of wind turbines and fuel cells at a load of 1000 kW is achieved at lower wind speed and a lower diesel price than for a load of 50 kW, since the specific indices of larger energy sources are better. When the load equals 1000 kW, the application of a diesel generator alone is inefficient in the entire considered region of parameters.

The zones of technological efficiency allow us to determine the best structure of the power supply system for given conditions. Moreover, we should know what economic effect can be achieved by applying the optimal structure, because it makes sense to complicate the system with extra components only if the effect is sufficient. The corresponding data are presented in Figure 9. As we can see, the size of the achieved effect is quite significant. For example, with a load of 1000 kW and an average wind speed of 6 m/s, the construction of wind turbines in addition to the diesel generator makes it possible to decrease the electricity cost from 27 to 20 cent/kWh, and the replacement of the diesel generator by an electrochemical unit, to 15 cent/kWh. Thus, the introduction of a wind/hydrogen system will allow us to nearly halve the cost of electricity production as compared to power supply from the diesel generator alone.
## 6. Conclusion
The paper presents an analysis of the economic efficiency of harnessing wind energy in the autonomous power systems of Russia. The wind turbines are shown to be competitive in many of the considered variants (groups of consumers, placement areas, and climatic and meteorological conditions).

At the current prices of fossil (diesel) fuel, the application of wind/diesel systems is efficient even if the long-term average annual wind speed at the wind turbine site is relatively low (about 4.5 m/s). In regions with a long-term average annual wind speed higher than 5 m/s, the application of wind turbines can reduce the cost of the generated electricity by 50% or more (as compared to the use of diesel generators alone).

During periods of strong wind, some of the power generated by wind turbines turns out to be redundant. It therefore becomes possible to use this power for hydrogen production by electrolysis and its subsequent electrochemical transformation into electricity. Under certain conditions, the introduction of a subsystem for hydrogen production, storage, and energy use into the wind/diesel system can decrease the diesel consumption (or even completely exclude diesel generators from the system) and reduce the cost of electricity production.

The study considered the possibility of storing energy in the form of hydrogen in autonomous wind/diesel/hydrogen power systems that include a diesel generator, electrolyzer, hydrogen tank, and fuel cells. The authors determined the zones of economic efficiency in the system depending on the load power, fuel price, and long-term average annual wind speed.

The technical and economic characteristics of the equipment now available in the market still do not allow the hydrogen system (electrolyzer, hydrogen tank, and fuel cells) to compete with diesel or wind/diesel power plants. However, the indices projected for the near future (an approximately twofold decrease in the specific capital investment in fuel cells against the current level) make the hydrogen system economically efficient at the fuel prices typical of autonomous power systems ($800–$1400/toe) and an average annual wind speed starting from 5-6 m/s. Application of a system for hydrogen production and energy use will considerably reduce (by more than 50%) the cost of power supply to consumers.
---
*Source: 101972-2013-04-08.xml* (2013)